Highlighted notes of article while studying Concurrent Data Structures, CSE:
Real-world Concurrency
Bryan Cantrill and Jeff Bonwick,
Sun Microsystems
ACM Queue, September 2008 https://doi.org/10.1145/1454456.1454462
BRYAN CANTRILL is a Distinguished Engineer at Sun Microsystems, where he has worked on concurrent systems since coming to Sun to work with Jeff Bonwick on Solaris performance in 1996. Along with colleagues Mike Shapiro and Adam Leventhal, Cantrill developed DTrace, a facility for dynamic instrumentation of production systems that was directly inspired by his frustration in understanding the behavior of concurrent systems.
JEFF BONWICK is a Fellow at Sun Microsystems, where he has worked on concurrent systems since 1990. He is best known for inventing and leading the development of Sun’s ZFS (Zettabyte File System), but prior to this he was known for having written (or rather, rewritten) many of the most parallel subsystems in the Solaris kernel, including the syn- chronization primitives, the kernel memory allocator, and the thread-blocking mechanism.
https://dl.acm.org/doi/10.1145/1454456.1454462
A computer cluster is a group of loosely connected computers that work together as a single computer. Clusters improve performance and reliability over a single computer and are more cost-effective than a single computer of comparable speed or reliability. There are several types of clusters, including high-availability clusters which improve service availability through redundancy, load-balancing clusters which distribute workload across backend servers, and high-performance clusters which increase performance by splitting computational tasks across nodes.
Data Structures in the Multicore Age : NotesSubhajit Sahu
The document discusses the challenges of designing concurrent data structures for multicore processors. It begins by explaining Amdahl's Law, which states that the speedup gained from parallelization is limited by the sequential fraction of a program. For mainstream applications, the sequential fraction often involves coordinating concurrent access to shared data structures.
It then presents an example of designing a concurrent stack. It starts with a simple lock-based stack protected by a single lock. While this guarantees linearizability, it suffers from poor scalability due to the centralized locking bottleneck. It also relies on strong scheduling assumptions. The document indicates that future concurrent data structures will need to be more distributed and relaxed in their consistency requirements to achieve scalability on multicore
This document provides a tutorial on multi-core CPUs, computer clusters, and grid computing for economists. It begins by discussing trends in microprocessor development such as increasing transistor counts and clock speeds. Future improvements will come from multi-core CPUs rather than increased clock speed. It also discusses increases in network speeds that enable computer clusters and grids. The document then provides suggestions for optimizing code performance on a single CPU before parallelization. This includes minimizing branches, inlining subroutines, and using high-performance libraries. The rest of the document discusses programming multi-core CPUs, clusters, and grids.
Comparing reinforcement learning and access points with rowelijcseit
Due to the fast development of the Cloud Computing technologies, the rapid increase of cloud services
are became very remarkable. The fact of integration of these services with many of the modern
enterprises cannot be ignored. Microsoft, Google, Amazon, SalesForce.com and the other leading IT
companies are entered the field of developing these services. This paper presents a comprehensive survey
of current cloud services, which are divided into eleven categories. Also the most famous providers for
these services are listed. Finally, the Deployment Models of Cloud Computing are mentioned and briefly
discussed.
High Performance & High Throughput Computing - EUDAT Summer School (Giuseppe ...EUDAT
Giuseppe will present the differences between high-performance and high-throughput applications. High-throughput computing (HTC) refers to computations where individual tasks do not need to interact while running. It differs from High-performance (HPC) where frequent and rapid exchanges of intermediate results is required to perform the computations. HPC codes are based on tightly coupled MPI, OpenMP, GPGPU, and hybrid programs and require low latency interconnected nodes. HTC makes use of unreliable components distributing the work out to every node and collecting results at the end of all parallel tasks.
Visit: https://www.eudat.eu/eudat-summer-school
Introduction into the problems of developing parallel programsPVS-Studio
As developing parallel software is rather a difficult task at present, the questions of theoretical training of specialists and investigation of methodology of projecting such systems become very urgent. Within the framework of this article we provide historical and technical information preparing a programmer for gaining knowledge in the sphere of developing parallel computer systems.
This document discusses the history and evolution of supercomputer architectures from the 1960s to present. Early supercomputers relied on compact designs and local parallelism. Starting in the 1990s, massively parallel systems with thousands of processors became common. Modern supercomputers can use over 100,000 processors connected by fast interconnects and may utilize GPUs, computer clusters, or distributed computing networks to achieve petaflop-scale performance. Vector processing is also discussed as an important technique used in many historical supercomputers to improve performance.
This document provides an introduction to Unix for computer science students at Old Dominion University. It discusses the history and evolution of operating systems like Unix and how they differ from Windows. Specifically, it explains that Unix was designed for multiprogramming and multiprocessing environments using terminals for input/output when networking and remote access were common. In contrast, Windows emerged later for standalone personal computers with graphics interfaces. The document aims to help students understand why the CS department uses Unix and the fundamental design of the Unix operating system.
A computer cluster is a group of loosely connected computers that work together as a single computer. Clusters improve performance and reliability over a single computer and are more cost-effective than a single computer of comparable speed or reliability. There are several types of clusters, including high-availability clusters which improve service availability through redundancy, load-balancing clusters which distribute workload across backend servers, and high-performance clusters which increase performance by splitting computational tasks across nodes.
Data Structures in the Multicore Age : NotesSubhajit Sahu
The document discusses the challenges of designing concurrent data structures for multicore processors. It begins by explaining Amdahl's Law, which states that the speedup gained from parallelization is limited by the sequential fraction of a program. For mainstream applications, the sequential fraction often involves coordinating concurrent access to shared data structures.
It then presents an example of designing a concurrent stack. It starts with a simple lock-based stack protected by a single lock. While this guarantees linearizability, it suffers from poor scalability due to the centralized locking bottleneck. It also relies on strong scheduling assumptions. The document indicates that future concurrent data structures will need to be more distributed and relaxed in their consistency requirements to achieve scalability on multicore
This document provides a tutorial on multi-core CPUs, computer clusters, and grid computing for economists. It begins by discussing trends in microprocessor development such as increasing transistor counts and clock speeds. Future improvements will come from multi-core CPUs rather than increased clock speed. It also discusses increases in network speeds that enable computer clusters and grids. The document then provides suggestions for optimizing code performance on a single CPU before parallelization. This includes minimizing branches, inlining subroutines, and using high-performance libraries. The rest of the document discusses programming multi-core CPUs, clusters, and grids.
Comparing reinforcement learning and access points with rowelijcseit
Due to the fast development of the Cloud Computing technologies, the rapid increase of cloud services
are became very remarkable. The fact of integration of these services with many of the modern
enterprises cannot be ignored. Microsoft, Google, Amazon, SalesForce.com and the other leading IT
companies are entered the field of developing these services. This paper presents a comprehensive survey
of current cloud services, which are divided into eleven categories. Also the most famous providers for
these services are listed. Finally, the Deployment Models of Cloud Computing are mentioned and briefly
discussed.
High Performance & High Throughput Computing - EUDAT Summer School (Giuseppe ...EUDAT
Giuseppe will present the differences between high-performance and high-throughput applications. High-throughput computing (HTC) refers to computations where individual tasks do not need to interact while running. It differs from High-performance (HPC) where frequent and rapid exchanges of intermediate results is required to perform the computations. HPC codes are based on tightly coupled MPI, OpenMP, GPGPU, and hybrid programs and require low latency interconnected nodes. HTC makes use of unreliable components distributing the work out to every node and collecting results at the end of all parallel tasks.
Visit: https://www.eudat.eu/eudat-summer-school
Introduction into the problems of developing parallel programsPVS-Studio
As developing parallel software is rather a difficult task at present, the questions of theoretical training of specialists and investigation of methodology of projecting such systems become very urgent. Within the framework of this article we provide historical and technical information preparing a programmer for gaining knowledge in the sphere of developing parallel computer systems.
This document discusses the history and evolution of supercomputer architectures from the 1960s to present. Early supercomputers relied on compact designs and local parallelism. Starting in the 1990s, massively parallel systems with thousands of processors became common. Modern supercomputers can use over 100,000 processors connected by fast interconnects and may utilize GPUs, computer clusters, or distributed computing networks to achieve petaflop-scale performance. Vector processing is also discussed as an important technique used in many historical supercomputers to improve performance.
This document provides an introduction to Unix for computer science students at Old Dominion University. It discusses the history and evolution of operating systems like Unix and how they differ from Windows. Specifically, it explains that Unix was designed for multiprogramming and multiprocessing environments using terminals for input/output when networking and remote access were common. In contrast, Windows emerged later for standalone personal computers with graphics interfaces. The document aims to help students understand why the CS department uses Unix and the fundamental design of the Unix operating system.
Software and the Concurrency Revolution : NotesSubhajit Sahu
Highlighted notes of article while studying Concurrent Data Structures, CSE:
Software and the Concurrency Revolution
Herb Sutter
Software Architect, Microsoft
Software Development Consultant, www.gotw.ca/training
Herb Sutter is a prominent C++ expert. He is also a book author and was a columnist for Dr. Dobb's Journal. He joined Microsoft in 2002 as a platform evangelist for Visual C++ .NET, rising to lead software architect for C++/CLI.
Parallel computing is computing architecture paradigm ., in which processing required to solve a problem is done in more than one processor parallel way.
Concurrency and Parallelism, Asynchronous Programming, Network ProgrammingPrabu U
The presentation starts with concurrency and parallelism. Then the concepts of reactive programming is covered. Finally network programming is detailed
Tutorial on Parallel Computing and Message Passing Model - C1Marcirio Chaves
The document provides an overview of parallel computing concepts and programming models. It discusses parallel computing terminology like Flynn's taxonomy and parallel memory architectures like shared memory, distributed memory, and hybrid models. It also explains common parallel programming models including shared memory with threads, message passing with MPI, and data parallel models.
A grid computing system shares computer resources across a network to allow users to access large amounts of processing power, storage, and memory. It works by connecting computers together and sharing their resources through middleware software. A control node manages tasks and schedules work across the distributed system, making computer resources available like a powerful supercomputer. Standards and protocols are still being developed to improve compatibility between different grid computing networks.
This document provides an overview of computer clustering technologies. It discusses the history of computing clusters beginning with early networks like ARPANET in the 1960s and early commercial clustering products in the 1970s and 80s. It then categorizes and describes different types of clusters including high performance clusters, high availability clusters, load balancing clusters, database clusters, web server clusters, storage clusters, single system image clusters, and grid computing.
This document provides an introduction to parallel computing. It begins with definitions of parallel computing as using multiple compute resources simultaneously to solve problems. Popular parallel architectures include shared memory, where all processors can access a common memory, and distributed memory, where each processor has its own local memory and they communicate over a network. The document discusses key parallel computing concepts and terminology such as Flynn's taxonomy, parallel overhead, scalability, and memory models including uniform memory access (UMA), non-uniform memory access (NUMA), and distributed memory. It aims to provide background on parallel computing topics before examining how to parallelize different types of programs.
A computer cluster is a group of loosely coupled computers that work together closely to be viewed as a single computer. Clusters improve performance and reliability over a single computer and are typically more cost-effective. The document defines and provides examples of different types of clusters including high-availability, load-balancing, high-performance computing, and grid clusters. It also discusses cluster hardware, software, implementations, technologies, and history.
1. Multithreading models divide tasks between user-level threads and kernel-level threads.
2. The three common multithreading models are: many-to-one, one-to-one, and many-to-many.
3. The many-to-one model maps many user threads to a single kernel thread, the one-to-one model maps each user thread to its own kernel thread, and the many-to-many model maps multiple user threads to kernel threads in a varying ratio.
Operating Systems Unit One - Fourth Semester - EngineeringYogesh Santhan
The document provides an overview of the history and development of early operating systems from the 1950s to present day. It discusses the progression from simple batch processing systems to time-sharing mainframe systems to personal computer operating systems. Early operating systems had to be efficient due to limited computing resources, while modern operating systems focus on usability and providing a convenient environment for users.
Is Multicore Hardware For General-Purpose Parallel Processing Broken? : NotesSubhajit Sahu
Highlighted notes of article while studying Concurrent Data Structures, CSE:
Is Multicore Hardware For General-Purpose Parallel Processing Broken?
By Uzi Vishkin
Communications of the ACM, April 2014, Vol. 57 No. 4, Pages 35-39
10.1145/2580945
Vector processors operate on arrays of data (vectors) with single instructions, unlike scalar processors which operate on single data items. This allows vector processors to greatly improve performance for tasks involving large datasets like numerical simulation. Early vector supercomputers dominated from the 1970s-1990s, but conventional microprocessors now use SIMD instructions to provide a form of vector processing. Modern GPUs also use vector-like processing for graphics and general computing.
The document discusses parallel computing over the past 25 years and challenges for using multicore chips in the next decade. It aims to provide context to scale applications effectively to 32-1024 cores. Key challenges include expressing inherent application parallelism while enabling efficient mapping to hardware through programming models and runtime systems. Future work includes developing methods to restore lost parallelism information and tradeoffs between programming effort, generality and performance.
Distributed computing allows groups to accomplish tasks not feasible with supercomputers alone due to cost or time constraints. It breaks large problems into smaller units that can be processed in parallel by multiple networked computers. Properly implemented distributed computing complements processing and networking, but malicious uses can launch brute force attacks too powerful for normal defenses. Distributed computing requires monitoring to prevent misuse while preserving legitimate applications.
Parallel programs to multi-processor computers!PVS-Studio
Parallel programs can take advantage of multi-processor computers by dividing tasks into independent subtasks that can be solved simultaneously. This approach is known as parallel programming. The document provides an introduction to parallel programming for beginners, including an overview of processes and threads for parallel execution. It also discusses techniques for synchronizing parallel tasks like mutexes, semaphores, and critical sections. The document explains how parallel programs can be run on both multi-processor computers and computer clusters connected by a network.
The document discusses the development of a pure peer-to-peer computing system using socket programming. It aims to facilitate parallel computation of complex tasks by distributing work across available peers in a network. This allows heavier calculations to be performed faster by utilizing otherwise idle processing resources. The system is designed to remove scalability and security issues while managing tasks through administrator, query manager, task dispatcher, and processor groups. A literature review found that decentralized peer-to-peer systems like Freenet and GNUtella provide benefits like failure tolerance, efficiency and cost effectiveness.
Computer operating system and network modelrEjInBhandari
This document provides an overview of operating systems and network topologies. It discusses the history and evolution of operating systems from the 1940s to present. Key developments include the introduction of batch processing systems in the 1950s, multi-tasking systems in the 1960s, and graphical user interfaces in the 1980s. The document also defines different types of operating systems such as single-user, multi-user, real-time, and time-sharing systems. Additionally, it describes various network topologies like bus, star, ring, mesh and hybrid along with their advantages and disadvantages.
The document provides information about the Spartan HPC system at the University of Melbourne, including:
- Spartan is the University of Melbourne's general purpose high performance computing system.
- The document outlines logging into Spartan, using environment modules and job submission, and introduces parallel programming with OpenMP and MPI.
- Spartan has been recognized internationally as a model HPC-cloud hybrid system and has supported over 150 research papers.
The document introduces Parallel Pixie Dust (PPD), a cross-platform thread library that aims to guarantee deadlock-free and race-condition free schedules that are optimal. It discusses the need for multiple threads due to factors like the memory wall. Current threading models are problematic because testing and debugging threaded code is difficult. PPD uses futures and thread pools to simulate data flow and generate tree-like thread schedules. It provides parallel versions of functions and thread-safe containers to enable multi-threaded standard library algorithms. The goal is to make writing correct multi-threaded programs easier.
About TrueTime, Spanner, Clock synchronization, CAP theorem, Two-phase lockin...Subhajit Sahu
TrueTime is a service that enables the use of globally synchronized clocks, with bounded error. It returns a time interval that is guaranteed to contain the clock’s actual time for some time during the call’s execution. If two intervals do not overlap, then we know calls were definitely ordered in real time. In general, synchronized clocks can be used to avoid communication in a distributed system.
The underlying source of time is a combination of GPS receivers and atomic clocks. As there are “time masters” in every datacenter (redundantly), it is likely that both sides of a partition would continue to enjoy accurate time. Individual nodes however need network connectivity to the masters, and without it their clocks will drift. Thus, during a partition their intervals slowly grow wider over time, based on bounds on the rate of local clock drift. Operations depending on TrueTime, such as Paxos leader election or transaction commits, thus have to wait a little longer, but the operation still completes (assuming the 2PC and quorum communication are working).
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...Subhajit Sahu
Abstract — Levelwise PageRank is an alternative method of PageRank computation which decomposes the input graph into a directed acyclic block-graph of strongly connected components, and processes them in topological order, one level at a time. This enables calculation for ranks in a distributed fashion without per-iteration communication, unlike the standard method where all vertices are processed in each iteration. It however comes with a precondition of the absence of dead ends in the input graph. Here, the native non-distributed performance of Levelwise PageRank was compared against Monolithic PageRank on a CPU as well as a GPU. To ensure a fair comparison, Monolithic PageRank was also performed on a graph where vertices were split by components. Results indicate that Levelwise PageRank is about as fast as Monolithic PageRank on the CPU, but quite a bit slower on the GPU. Slowdown on the GPU is likely caused by a large submission of small workloads, and expected to be non-issue when the computation is performed on massive graphs.
Software and the Concurrency Revolution : NotesSubhajit Sahu
Highlighted notes of article while studying Concurrent Data Structures, CSE:
Software and the Concurrency Revolution
Herb Sutter
Software Architect, Microsoft
Software Development Consultant, www.gotw.ca/training
Herb Sutter is a prominent C++ expert. He is also a book author and was a columnist for Dr. Dobb's Journal. He joined Microsoft in 2002 as a platform evangelist for Visual C++ .NET, rising to lead software architect for C++/CLI.
Parallel computing is computing architecture paradigm ., in which processing required to solve a problem is done in more than one processor parallel way.
Concurrency and Parallelism, Asynchronous Programming, Network ProgrammingPrabu U
The presentation starts with concurrency and parallelism. Then the concepts of reactive programming is covered. Finally network programming is detailed
Tutorial on Parallel Computing and Message Passing Model - C1Marcirio Chaves
The document provides an overview of parallel computing concepts and programming models. It discusses parallel computing terminology like Flynn's taxonomy and parallel memory architectures like shared memory, distributed memory, and hybrid models. It also explains common parallel programming models including shared memory with threads, message passing with MPI, and data parallel models.
A grid computing system shares computer resources across a network to allow users to access large amounts of processing power, storage, and memory. It works by connecting computers together and sharing their resources through middleware software. A control node manages tasks and schedules work across the distributed system, making computer resources available like a powerful supercomputer. Standards and protocols are still being developed to improve compatibility between different grid computing networks.
This document provides an overview of computer clustering technologies. It discusses the history of computing clusters beginning with early networks like ARPANET in the 1960s and early commercial clustering products in the 1970s and 80s. It then categorizes and describes different types of clusters including high performance clusters, high availability clusters, load balancing clusters, database clusters, web server clusters, storage clusters, single system image clusters, and grid computing.
This document provides an introduction to parallel computing. It begins with definitions of parallel computing as using multiple compute resources simultaneously to solve problems. Popular parallel architectures include shared memory, where all processors can access a common memory, and distributed memory, where each processor has its own local memory and they communicate over a network. The document discusses key parallel computing concepts and terminology such as Flynn's taxonomy, parallel overhead, scalability, and memory models including uniform memory access (UMA), non-uniform memory access (NUMA), and distributed memory. It aims to provide background on parallel computing topics before examining how to parallelize different types of programs.
A computer cluster is a group of loosely coupled computers that work together closely to be viewed as a single computer. Clusters improve performance and reliability over a single computer and are typically more cost-effective. The document defines and provides examples of different types of clusters including high-availability, load-balancing, high-performance computing, and grid clusters. It also discusses cluster hardware, software, implementations, technologies, and history.
1. Multithreading models divide tasks between user-level threads and kernel-level threads.
2. The three common multithreading models are: many-to-one, one-to-one, and many-to-many.
3. The many-to-one model maps many user threads to a single kernel thread, the one-to-one model maps each user thread to its own kernel thread, and the many-to-many model maps multiple user threads to kernel threads in a varying ratio.
Operating Systems Unit One - Fourth Semester - EngineeringYogesh Santhan
The document provides an overview of the history and development of early operating systems from the 1950s to present day. It discusses the progression from simple batch processing systems to time-sharing mainframe systems to personal computer operating systems. Early operating systems had to be efficient due to limited computing resources, while modern operating systems focus on usability and providing a convenient environment for users.
Is Multicore Hardware For General-Purpose Parallel Processing Broken? : NotesSubhajit Sahu
Highlighted notes of article while studying Concurrent Data Structures, CSE:
Is Multicore Hardware For General-Purpose Parallel Processing Broken?
By Uzi Vishkin
Communications of the ACM, April 2014, Vol. 57 No. 4, Pages 35-39
10.1145/2580945
Vector processors operate on arrays of data (vectors) with single instructions, unlike scalar processors which operate on single data items. This allows vector processors to greatly improve performance for tasks involving large datasets like numerical simulation. Early vector supercomputers dominated from the 1970s-1990s, but conventional microprocessors now use SIMD instructions to provide a form of vector processing. Modern GPUs also use vector-like processing for graphics and general computing.
The document discusses parallel computing over the past 25 years and challenges for using multicore chips in the next decade. It aims to provide context to scale applications effectively to 32-1024 cores. Key challenges include expressing inherent application parallelism while enabling efficient mapping to hardware through programming models and runtime systems. Future work includes developing methods to restore lost parallelism information and tradeoffs between programming effort, generality and performance.
Distributed computing allows groups to accomplish tasks not feasible with supercomputers alone due to cost or time constraints. It breaks large problems into smaller units that can be processed in parallel by multiple networked computers. Properly implemented distributed computing complements processing and networking, but malicious uses can launch brute force attacks too powerful for normal defenses. Distributed computing requires monitoring to prevent misuse while preserving legitimate applications.
Parallel programs to multi-processor computers!PVS-Studio
Parallel programs can take advantage of multi-processor computers by dividing tasks into independent subtasks that can be solved simultaneously. This approach is known as parallel programming. The document provides an introduction to parallel programming for beginners, including an overview of processes and threads for parallel execution. It also discusses techniques for synchronizing parallel tasks like mutexes, semaphores, and critical sections. The document explains how parallel programs can be run on both multi-processor computers and computer clusters connected by a network.
The document discusses the development of a pure peer-to-peer computing system using socket programming. It aims to facilitate parallel computation of complex tasks by distributing work across available peers in a network. This allows heavier calculations to be performed faster by utilizing otherwise idle processing resources. The system is designed to remove scalability and security issues while managing tasks through administrator, query manager, task dispatcher, and processor groups. A literature review found that decentralized peer-to-peer systems like Freenet and GNUtella provide benefits like failure tolerance, efficiency and cost effectiveness.
Computer operating system and network modelrEjInBhandari
This document provides an overview of operating systems and network topologies. It discusses the history and evolution of operating systems from the 1940s to present. Key developments include the introduction of batch processing systems in the 1950s, multi-tasking systems in the 1960s, and graphical user interfaces in the 1980s. The document also defines different types of operating systems such as single-user, multi-user, real-time, and time-sharing systems. Additionally, it describes various network topologies like bus, star, ring, mesh and hybrid along with their advantages and disadvantages.
The document provides information about the Spartan HPC system at the University of Melbourne, including:
- Spartan is the University of Melbourne's general purpose high performance computing system.
- The document outlines logging into Spartan, using environment modules and job submission, and introduces parallel programming with OpenMP and MPI.
- Spartan has been recognized internationally as a model HPC-cloud hybrid system and has supported over 150 research papers.
The document introduces Parallel Pixie Dust (PPD), a cross-platform thread library that aims to guarantee deadlock-free and race-condition free schedules that are optimal. It discusses the need for multiple threads due to factors like the memory wall. Current threading models are problematic because testing and debugging threaded code is difficult. PPD uses futures and thread pools to simulate data flow and generate tree-like thread schedules. It provides parallel versions of functions and thread-safe containers to enable multi-threaded standard library algorithms. The goal is to make writing correct multi-threaded programs easier.
About TrueTime, Spanner, Clock synchronization, CAP theorem, Two-phase lockin...Subhajit Sahu
TrueTime is a service that enables the use of globally synchronized clocks, with bounded error. It returns a time interval that is guaranteed to contain the clock’s actual time for some time during the call’s execution. If two intervals do not overlap, then we know calls were definitely ordered in real time. In general, synchronized clocks can be used to avoid communication in a distributed system.
The underlying source of time is a combination of GPS receivers and atomic clocks. As there are “time masters” in every datacenter (redundantly), it is likely that both sides of a partition would continue to enjoy accurate time. Individual nodes however need network connectivity to the masters, and without it their clocks will drift. Thus, during a partition their intervals slowly grow wider over time, based on bounds on the rate of local clock drift. Operations depending on TrueTime, such as Paxos leader election or transaction commits, thus have to wait a little longer, but the operation still completes (assuming the 2PC and quorum communication are working).
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...Subhajit Sahu
Abstract — Levelwise PageRank is an alternative method of PageRank computation which decomposes the input graph into a directed acyclic block-graph of strongly connected components, and processes them in topological order, one level at a time. This enables calculation for ranks in a distributed fashion without per-iteration communication, unlike the standard method where all vertices are processed in each iteration. It however comes with a precondition of the absence of dead ends in the input graph. Here, the native non-distributed performance of Levelwise PageRank was compared against Monolithic PageRank on a CPU as well as a GPU. To ensure a fair comparison, Monolithic PageRank was also performed on a graph where vertices were split by components. Results indicate that Levelwise PageRank is about as fast as Monolithic PageRank on the CPU, but quite a bit slower on the GPU. Slowdown on the GPU is likely caused by a large submission of small workloads, and expected to be non-issue when the computation is performed on massive graphs.
Adjusting Bitset for graph : SHORT REPORT / NOTESSubhajit Sahu
Compressed Sparse Row (CSR) is an adjacency-list based graph representation that is commonly used for efficient graph computations. Unfortunately, using CSR for dynamic graphs is impractical since addition/deletion of a single edge can require on average (N+M)/2 memory accesses, in order to update source-offsets and destination-indices. A common approach is therefore to store edge-lists/destination-indices as an array of arrays, where each edge-list is an array belonging to a vertex. While this is good enough for small graphs, it quickly becomes a bottleneck for large graphs. What causes this bottleneck depends on whether the edge-lists are sorted or unsorted. If they are sorted, checking for an edge requires about log(E) memory accesses, but adding an edge on average requires E/2 accesses, where E is the number of edges of a given vertex. Note that both addition and deletion of edges in a dynamic graph require checking for an existing edge, before adding or deleting it. If edge lists are unsorted, checking for an edge requires around E/2 memory accesses, but adding an edge requires only 1 memory access.
Techniques to optimize the pagerank algorithm usually fall in two categories. One is to try reducing the work per iteration, and the other is to try reducing the number of iterations. These goals are often at odds with one another. Skipping computation on vertices which have already converged has the potential to save iteration time. Skipping in-identical vertices, with the same in-links, helps reduce duplicate computations and thus could help reduce iteration time. Road networks often have chains which can be short-circuited before pagerank computation to improve performance. Final ranks of chain nodes can be easily calculated. This could reduce both the iteration time, and the number of iterations. If a graph has no dangling nodes, pagerank of each strongly connected component can be computed in topological order. This could help reduce the iteration time, no. of iterations, and also enable multi-iteration concurrency in pagerank computation. The combination of all of the above methods is the STICD algorithm. [sticd] For dynamic graphs, unchanged components whose ranks are unaffected can be skipped altogether.
Adjusting primitives for graph : SHORT REPORT / NOTESSubhajit Sahu
Graph algorithms, like PageRank Compressed Sparse Row (CSR) is an adjacency-list based graph representation that is
Multiply with different modes (map)
1. Performance of sequential execution based vs OpenMP based vector multiply.
2. Comparing various launch configs for CUDA based vector multiply.
Sum with different storage types (reduce)
1. Performance of vector element sum using float vs bfloat16 as the storage type.
Sum with different modes (reduce)
1. Performance of sequential execution based vs OpenMP based vector element sum.
2. Performance of memcpy vs in-place based CUDA based vector element sum.
3. Comparing various launch configs for CUDA based vector element sum (memcpy).
4. Comparing various launch configs for CUDA based vector element sum (in-place).
Sum with in-place strategies of CUDA mode (reduce)
1. Comparing various launch configs for CUDA based vector element sum (in-place).
Experiments with Primitive operations : SHORT REPORT / NOTESSubhajit Sahu
This includes:
- Multiply with different modes (map)
1. Performance of sequential execution based vs OpenMP based vector multiply.
2. Comparing various launch configs for CUDA based vector multiply.
- Sum with different storage types (reduce)
1. Performance of vector element sum using float vs bfloat16 as the storage type.
- Sum with different modes (reduce)
1. Performance of sequential execution based vs OpenMP based vector element sum.
2. Performance of memcpy vs in-place based CUDA based vector element sum.
3. Comparing various launch configs for CUDA based vector element sum (memcpy).
4. Comparing various launch configs for CUDA based vector element sum (in-place).
- Sum with in-place strategies of CUDA mode (reduce)
1. Comparing various launch configs for CUDA based vector element sum (in-place).
Techniques to optimize the pagerank algorithm usually fall in two categories. One is to try reducing the work per iteration, and the other is to try reducing the number of iterations. These goals are often at odds with one another. Skipping computation on vertices which have already converged has the potential to save iteration time. Skipping in-identical vertices, with the same in-links, helps reduce duplicate computations and thus could help reduce iteration time. Road networks often have chains which can be short-circuited before pagerank computation to improve performance. Final ranks of chain nodes can be easily calculated. This could reduce both the iteration time, and the number of iterations. If a graph has no dangling nodes, pagerank of each strongly connected component can be computed in topological order. This could help reduce the iteration time, no. of iterations, and also enable multi-iteration concurrency in pagerank computation. The combination of all of the above methods is the STICD algorithm. [sticd] For dynamic graphs, unchanged components whose ranks are unaffected can be skipped altogether.
Adjusting OpenMP PageRank : SHORT REPORT / NOTESSubhajit Sahu
For massive graphs that fit in RAM, but not in GPU memory, it is possible to take
advantage of a shared memory system with multiple CPUs, each with multiple cores, to
accelerate pagerank computation. If the NUMA architecture of the system is properly taken
into account with good vertex partitioning, the speedup can be significant. To take steps in
this direction, experiments are conducted to implement pagerank in OpenMP using two
different approaches, uniform and hybrid. The uniform approach runs all primitives required
for pagerank in OpenMP mode (with multiple threads). On the other hand, the hybrid
approach runs certain primitives in sequential mode (i.e., sumAt, multiply).
word2vec, node2vec, graph2vec, X2vec: Towards a Theory of Vector Embeddings o...Subhajit Sahu
Below are the important points I note from the 2020 paper by Martin Grohe:
- 1-WL distinguishes almost all graphs, in a probabilistic sense
- Classical WL is two dimensional Weisfeiler-Leman
- DeepWL is an unlimited version of WL graph that runs in polynomial time.
- Knowledge graphs are essentially graphs with vertex/edge attributes
ABSTRACT:
Vector representations of graphs and relational structures, whether handcrafted feature vectors or learned representations, enable us to apply standard data analysis and machine learning techniques to the structures. A wide range of methods for generating such embeddings have been studied in the machine learning and knowledge representation literature. However, vector embeddings have received relatively little attention from a theoretical point of view.
Starting with a survey of embedding techniques that have been used in practice, in this paper we propose two theoretical approaches that we see as central for understanding the foundations of vector embeddings. We draw connections between the various approaches and suggest directions for future research.
DyGraph: A Dynamic Graph Generator and Benchmark Suite : NOTESSubhajit Sahu
https://gist.github.com/wolfram77/54c4a14d9ea547183c6c7b3518bf9cd1
There exist a number of dynamic graph generators. Barbasi-Albert model iteratively attach new vertices to pre-exsiting vertices in the graph using preferential attachment (edges to high degree vertices are more likely - rich get richer - Pareto principle). However, graph size increases monotonically, and density of graph keeps increasing (sparsity decreasing).
Gorke's model uses a defined clustering to uniformly add vertices and edges. Purohit's model uses motifs (eg. triangles) to mimick properties of existing dynamic graphs, such as growth rate, structure, and degree distribution. Kronecker graph generators are used to increase size of a given graph, with power-law distribution.
To generate dynamic graphs, we must choose a metric to compare two graphs. Common metrics include diameter, clustering coefficient (modularity?), triangle counting (triangle density?), and degree distribution.
In this paper, the authors propose Dygraph, a dynamic graph generator that uses degree distribution as the only metric. The authors observe that many real-world graphs differ from the power-law distribution at the tail end. To address this issue, they propose binning, where the vertices beyond a certain degree (minDeg = min(deg) s.t. |V(deg)| < H, where H~10 is the number of vertices with a given degree below which are binned) are grouped into bins of degree-width binWidth, max-degree localMax, and number of degrees in bin with at least one vertex binSize (to keep track of sparsity). This helps the authors to generate graphs with a more realistic degree distribution.
The process of generating a dynamic graph is as follows. First the difference between the desired and the current degree distribution is calculated. The authors then create an edge-addition set where each vertex is present as many times as the number of additional incident edges it must recieve. Edges are then created by connecting two vertices randomly from this set, and removing both from the set once connected. Currently, authors reject self-loops and duplicate edges. Removal of edges is done in a similar fashion.
Authors observe that adding edges with power-law properties dominates the execution time, and consider parallelizing DyGraph as part of future work.
My notes on shared memory parallelism.
Shared memory is memory that may be simultaneously accessed by multiple programs with an intent to provide communication among them or avoid redundant copies. Shared memory is an efficient means of passing data between programs. Using memory for communication inside a single program, e.g. among its multiple threads, is also referred to as shared memory [REF].
A Dynamic Algorithm for Local Community Detection in Graphs : NOTESSubhajit Sahu
**Community detection methods** can be *global* or *local*. **Global community detection methods** divide the entire graph into groups. Existing global algorithms include:
- Random walk methods
- Spectral partitioning
- Label propagation
- Greedy agglomerative and divisive algorithms
- Clique percolation
https://gist.github.com/wolfram77/b4316609265b5b9f88027bbc491f80b6
There is a growing body of work in *detecting overlapping communities*. **Seed set expansion** is a **local community detection method** where a relevant *seed vertices* of interest are picked and *expanded to form communities* surrounding them. The quality of each community is measured using a *fitness function*.
**Modularity** is a *fitness function* which compares the number of intra-community edges to the expected number in a random-null model. **Conductance** is another popular fitness score that measures the community cut or inter-community edges. Many *overlapping community detection* methods **use a modified ratio** of intra-community edges to all edges with atleast one endpoint in the community.
Andersen et al. use a **Spectral PageRank-Nibble method** which minimizes conductance and is formed by adding vertices in order of decreasing PageRank values. Andersen and Lang develop a **random walk approach** in which some vertices in the seed set may not be placed in the final community. Clauset gives a **greedy method** that *starts from a single vertex* and then iteratively adds neighboring vertices *maximizing the local modularity score*. Riedy et al. **expand multiple vertices** via maximizing modularity.
Several algorithms for **detecting global, overlapping communities** use a *greedy*, *agglomerative approach* and run *multiple separate seed set expansions*. Lancichinetti et al. run **greedy seed set expansions**, each with a *single seed vertex*. Overlapping communities are produced by a sequentially running expansions from a node not yet in a community. Lee et al. use **maximal cliques as seed sets**. Havemann et al. **greedily expand cliques**.
The authors of this paper discuss a dynamic approach for **community detection using seed set expansion**. Simply marking the neighbours of changed vertices is a **naive approach**, and has *severe shortcomings*. This is because *communities can split apart*. The simple updating method *may fail even when it outputs a valid community* in the graph.
Scalable Static and Dynamic Community Detection Using Grappolo : NOTESSubhajit Sahu
A **community** (in a network) is a subset of nodes which are _strongly connected among themselves_, but _weakly connected to others_. Neither the number of output communities nor their size distribution is known a priori. Community detection methods can be divisive or agglomerative. **Divisive methods** use _betweeness centrality_ to **identify and remove bridges** between communities. **Agglomerative methods** greedily **merge two communities** that provide maximum gain in _modularity_. Newman and Girvan have introduced the **modularity metric**. The problem of community detection is then reduced to the problem of modularity maximization which is **NP-complete**. **Louvain method** is a variant of the _agglomerative strategy_, in that is a _multi-level heuristic_.
https://gist.github.com/wolfram77/917a1a4a429e89a0f2a1911cea56314d
In this paper, the authors discuss **four heuristics** for Community detection using the _Louvain algorithm_ implemented upon recently developed **Grappolo**, which is a parallel variant of the Louvain algorithm. They are:
- Vertex following and Minimum label
- Data caching
- Graph coloring
- Threshold scaling
With the **Vertex following** heuristic, the _input is preprocessed_ and all single-degree vertices are merged with their corresponding neighbours. This helps reduce the number of vertices considered in each iteration, and also help initial seeds of communities to be formed. With the **Minimum label heuristic**, when a vertex is making the decision to move to a community and multiple communities provided the same modularity gain, the community with the smallest id is chosen. This helps _minimize or prevent community swaps_. With the **Data caching** heuristic, community information is stored in a vector instead of a map, and is reused in each iteration, but with some additional cost. With the **Vertex ordering via Graph coloring** heuristic, _distance-k coloring_ of graphs is performed in order to group vertices into colors. Then, each set of vertices (by color) is processed _concurrently_, and synchronization is performed after that. This enables us to mimic the behaviour of the serial algorithm. Finally, with the **Threshold scaling** heuristic, _successively smaller values of modularity threshold_ are used as the algorithm progresses. This allows the algorithm to converge faster, and it has been observed a good modularity score as well.
From the results, it appears that _graph coloring_ and _threshold scaling_ heuristics do not always provide a speedup and this depends upon the nature of the graph. It would be interesting to compare the heuristics against baseline approaches. Future work can include _distributed memory implementations_, and _community detection on streaming graphs_.
Application Areas of Community Detection: A Review : NOTESSubhajit Sahu
This is a short review of Community detection methods (on graphs), and their applications. A **community** is a subset of a network whose members are *highly connected*, but *loosely connected* to others outside their community. Different community detection methods *can return differing communities* these algorithms are **heuristic-based**. **Dynamic community detection** involves tracking the *evolution of community structure* over time.
https://gist.github.com/wolfram77/09e64d6ba3ef080db5558feb2d32fdc0
Communities can be of the following **types**:
- Disjoint
- Overlapping
- Hierarchical
- Local.
The following **static** community detection **methods** exist:
- Spectral-based
- Statistical inference
- Optimization
- Dynamics-based
The following **dynamic** community detection **methods** exist:
- Independent community detection and matching
- Dependent community detection (evolutionary)
- Simultaneous community detection on all snapshots
- Dynamic community detection on temporal networks
**Applications** of community detection include:
- Criminal identification
- Fraud detection
- Criminal activities detection
- Bot detection
- Dynamics of epidemic spreading (dynamic)
- Cancer/tumor detection
- Tissue/organ detection
- Evolution of influence (dynamic)
- Astroturfing
- Customer segmentation
- Recommendation systems
- Social network analysis (both)
- Network summarization
- Privary, group segmentation
- Link prediction (both)
- Community evolution prediction (dynamic, hot field)
<br>
<br>
## References
- [Application Areas of Community Detection: A Review : PAPER](https://ieeexplore.ieee.org/document/8625349)
This paper discusses a GPU implementation of the Louvain community detection algorithm. Louvain algorithm obtains hierachical communities as a dendrogram through modularity optimization. Given an undirected weighted graph, all vertices are first considered to be their own communities. In the first phase, each vertex greedily decides to move to the community of one of its neighbours which gives greatest increase in modularity. If moving to no neighbour's community leads to an increase in modularity, the vertex chooses to stay with its own community. This is done sequentially for all the vertices. If the total change in modularity is more than a certain threshold, this phase is repeated. Once this local moving phase is complete, all vertices have formed their first hierarchy of communities. The next phase is called the aggregation phase, where all the vertices belonging to a community are collapsed into a single super-vertex, such that edges between communities are represented as edges between respective super-vertices (edge weights are combined), and edges within each community are represented as self-loops in respective super-vertices (again, edge weights are combined). Together, the local moving and the aggregation phases constitute a stage. This super-vertex graph is then used as input fof the next stage. This process continues until the increase in modularity is below a certain threshold. As a result from each stage, we have a hierarchy of community memberships for each vertex as a dendrogram.
Approaches to perform the Louvain algorithm can be divided into coarse-grained and fine-grained. Coarse-grained approaches process a set of vertices in parallel, while fine-grained approaches process all vertices in parallel. A coarse-grained hybrid-GPU algorithm using multi GPUs has be implemented by Cheong et al. which grabbed my attention. In addition, their algorithm does not use hashing for the local moving phase, but instead sorts each neighbour list based on the community id of each vertex.
https://gist.github.com/wolfram77/7e72c9b8c18c18ab908ae76262099329
Survey for extra-child-process package : NOTESSubhajit Sahu
Useful additions to inbuilt child_process module.
📦 Node.js, 📜 Files, 📰 Docs.
Please see attached PDF for literature survey.
https://gist.github.com/wolfram77/d936da570d7bf73f95d1513d4368573e
Dynamic Batch Parallel Algorithms for Updating PageRank : POSTERSubhajit Sahu
This paper presents two algorithms for efficiently computing PageRank on dynamically updating graphs in a batched manner: DynamicLevelwisePR and DynamicMonolithicPR. DynamicLevelwisePR processes vertices level-by-level based on strongly connected components and avoids recomputing converged vertices on the CPU. DynamicMonolithicPR uses a full power iteration approach on the GPU that partitions vertices by in-degree and skips unaffected vertices. Evaluation on real-world graphs shows the batched algorithms provide speedups of up to 4000x over single-edge updates and outperform other state-of-the-art dynamic PageRank algorithms.
Abstract for IPDPS 2022 PhD Forum on Dynamic Batch Parallel Algorithms for Up...Subhajit Sahu
For the PhD forum an abstract submission is required by 10th May, and poster by 15th May. The event is on 30th May.
https://gist.github.com/wolfram77/1c1f730d20b51e0d2c6d477fd3713024
Fast Incremental Community Detection on Dynamic Graphs : NOTESSubhajit Sahu
In this paper, the authors describe two approaches for dynamic community detection using the CNM algorithm. CNM is a hierarchical, agglomerative algorithm that greedily maximizes modularity. They define two approaches: BasicDyn and FastDyn. BasicDyn backtracks merges of communities until each marked (changed) vertex is its own singleton community. FastDyn undoes a merge only if the quality of merge, as measured by the induced change in modularity, has significantly decreased compared to when the merge initially took place. FastDyn also allows more than two vertices to contract together if in the previous time step these vertices eventually ended up contracted in the same community. In the static case, merging several vertices together in one contraction phase could lead to deteriorating results. FastDyn is able to do this, however, because it uses information from the merges of the previous time step. Intuitively, merges that previously occurred are more likely to be acceptable later.
https://gist.github.com/wolfram77/1856b108334cc822cdddfdfa7334792a
Building a Raspberry Pi Robot with Dot NET 8, Blazor and SignalRPeter Gallagher
In this session delivered at NDC Oslo 2024, I talk about how you can control a 3D printed Robot Arm with a Raspberry Pi, .NET 8, Blazor and SignalR.
I also show how you can use a Unity app on an Meta Quest 3 to control the arm VR too.
You can find the GitHub repo and workshop instructions here;
https://bit.ly/dotnetrobotgithub
1. 16 September 2008 ACM QUEUE rants: feedback@acmqueue.com
Bryan Cantrill and Jeff Bonwick,
Sun Microsystems
2. ACM QUEUE September 2008 17more queue: www.acmqueue.com
S
oftware practitioners today could be forgiven
if recent microprocessor developments have
given them some trepidation about the future
of software. While Moore’s law continues to
hold (that is, transistor density continues to
double roughly every 18 months), as a result of
both intractable physical limitations and prac-
tical engineering considerations, that increasing density
is no longer being spent on boosting clock rate. Instead, it
is being used to put multiple CPU cores on a single CPU
die. From the software perspective, this is not a revolu-
tionary shift, but rather an evolutionary one: multicore
CPUs are not the birthing of a new paradigm, but rather
the progression of an old one (multiprocessing) into
more widespread deployment. Judging from many recent
articles and papers on the subject, however, one might
think that this blossoming of concurrency is the coming
of the apocalypse, that “the free lunch is over.”1
As practitioners who have long been at the coal face of
concurrent systems, we hope to inject some calm reality
(if not some hard-won wisdom) into a discussion that has
too often descended into hysterics. Specifically, we hope
to answer the essential question: what does the prolif-
eration of concurrency mean for the software that you
develop? Perhaps regrettably, the answer to that question
is neither simple nor universal—your software’s relation-
ship to concurrency depends on where it physically
executes, where it is in the stack of abstraction, and the
business model that surrounds it.
Given that many software projects now have compo-
nents in different layers of the abstraction stack spanning
different tiers of the architecture, you may well find that
even for the software that you write, you do not have one
answer but several: you may be able to leave some of your
code forever executing in sequential bliss, and some may
need to be highly parallel and explicitly multithreaded.
Further complicating the answer, we will argue that much
of your code will not fall neatly into either category: it
will be essentially sequential in nature but will need to be
aware of concurrency at some level.
Chances are you won’t
actually have to write
multithreaded code.
But if you do,
some key principles
will help you master
this “black art.”
Real-world
CONCURRENCY
3. 18 September 2008 ACM QUEUE rants: feedback@acmqueue.com
Although we assert that less—much less—code needs
to be parallel than some might fear, it is nonetheless
true that writing parallel code remains something of a
black art. We also therefore give specific implementa-
tion techniques for developing a highly parallel system.
As such, this article is somewhat dichotomous: we try
both to argue that most code can (and should) achieve
concurrency without explicit parallelism, and at the same
time to elucidate techniques for those who must write
explicitly parallel code. This article is half stern lecture on
the merits of abstinence and half Kama Sutra.
SOME HISTORICAL CONTEXT
Before we discuss concurrency with respect to today’s
applications, it would be helpful to explore the history
of concurrent execution. Even by the 1960s—when the
world was still wet with the morning dew of the com-
puter age—it was becoming clear that a single central pro-
cessing unit executing a single instruction stream would
result in unnecessarily limited system performance. While
computer designers experimented with different ideas
to circumvent this limitation, it was the introduction of
the Burroughs B5000 in 1961 that proffered the idea that
ultimately proved to be the way forward: disjoint CPUs
concurrently executing different instruction streams but
sharing a common memory. In this regard (as in many)
the B5000 was at least a decade ahead of its time. It was
not until the 1980s that the need for multiprocessing
became clear to a wider body of researchers, who over the
course of the decade explored cache coherence protocols
(e.g., the Xerox Dragon and DEC Firefly), prototyped par-
allel operating systems (e.g., multiprocessor Unix running
on the AT&T 3B20A), and developed parallel databases
(e.g., Gamma at the University of Wisconsin).
In the 1990s, the seeds planted by researchers in the
1980s bore the fruit of practical systems, with many
computer companies (e.g., Sun, SGI, Sequent, Pyramid)
placing big bets on symmetric multiprocessing. These
bets on concurrent hardware necessitated correspond-
ing bets on concurrent software—if an operating system
cannot execute in parallel, neither can much else in the
system—and these companies independently came to
the realization that their operating systems would need
to be rewritten around the notion of concurrent execu-
tion. These rewrites took place early in the 1990s, and the
resulting systems were polished over the decade; much
of the resulting technology can today be seen in open
source operating systems such as OpenSolaris, FreeBSD,
and Linux.
Just as several computer companies made big bets
around multiprocessing, several database vendors made
bets around highly parallel relational databases; upstarts
including Oracle, Teradata, Tandem, Sybase, and Infor-
mix needed to use concurrency to achieve a performance
advantage over the mainframes that had dominated
transaction processing until that time.2
As in operating
systems, this work was conceived in the late 1980s and
early 1990s, and incrementally improved over the course
of the decade.
The upshot of these trends was that by the end of the
1990s, concurrent systems had displaced their unipro-
cessor forebears as high-performance computers: when
the TOP500 list of supercomputers was first drawn up in
1993, the highest-performing uniprocessor in the world
was just #34, and more than 80 percent of the top 500
were multiprocessors of one flavor or another. By 1997,
uniprocessors were off the list entirely. Beyond the super-
computing world, many transaction-oriented applications
scaled with CPU, allowing users to realize the dream of
expanding a system without revisiting architecture.
The rise of concurrent systems in the 1990s coincided
with another trend: while CPU clock rate continued to
increase, the speed of main memory was not keeping up.
To cope with this relatively slower memory, microproces-
sor architects incorporated deeper (and more compli-
cated) pipelines, caches, and prediction units. Even then,
the clock rates themselves were quickly becoming some-
thing of a fib: while the CPU might be able to execute
at the advertised rate, only a slim fraction of code could
actually achieve (let alone surpass) the rate of one cycle
per instruction—most code was mired spending three,
four, five (or many more) cycles per instruction.
Many saw these two trends—the rise of concurrency
and the futility of increasing clock rate—and came to the
logical conclusion: instead of spending transistor budget
on “faster” CPUs that weren’t actually yielding much
in terms of performance gains (and had terrible costs in
terms of power, heat, and area), why not take advantage
of the rise of concurrent software and use transistors to
effect multiple (simpler) cores per die?
Real-world
CONCURRENCY
4. ACM QUEUE September 2008 19more queue: www.acmqueue.com
That it was the success of concurrent software that
contributed to the genesis of chip multiprocessing is an
incredibly important historical point and bears reempha-
sis. There is a perception that microprocessor architects
have—out of malice, cowardice, or despair—inflicted
concurrency on software.3
In reality, the opposite is the
case: it was the maturity of concurrent software that
led architects to consider concurrency on the die. (The
reader is referred to one of the earliest chip multiproces-
sors—DEC’s Piranha—for a detailed discussion of this
motivation.4
) Were software not ready, these microproces-
sors would not be commercially viable today. If anything,
the “free lunch” that some decry as being over is in fact,
at long last, being served. One need only be hungry and
know how to eat!
CONCURRENCY IS FOR PERFORMANCE
The most important conclusion from this foray into his-
tory is that concurrency has always been employed for
one purpose: to improve the performance of the system.
This seems almost too obvious to make explicit—why else
would we want concurrency if not to improve perfor-
mance?—yet for all its obviousness, concurrency’s raison
d’être is increasingly forgotten, as if the proliferation of
concurrent hardware has awakened an anxiety that all
software must use all available physical resources. Just as
no programmer felt a moral obligation to eliminate pipe-
line stalls on a superscalar microprocessor, no software
engineer should feel responsible for using concurrency
simply because the hardware supports it. Rather, concur-
rency should be thought about and used for one reason
and one reason only: because it is needs to yield an
acceptably performing system.
Concurrent execution can improve performance in
three fundamental ways: it can reduce latency (that is,
make a unit of work execute faster); it can hide latency
(that is, allow the system to continue doing work during
a long-latency operation); or it can increase throughput
(that is, make the system able to perform more work).
Using concurrency to reduce latency is highly prob-
lem-specific in that it requires a parallel algorithm for
the task at hand. For some kinds of problems—especially
those found in scientific computing—this is straightfor-
ward: work can be divided a priori, and multiple com-
pute elements set on the task. Many of these problems,
however, are often so parallelizable that they do not
require the tight coupling of a shared memory—and they
are often able to execute more economically on grids of
small machines instead of a smaller number of highly
concurrent ones. Further, using concurrency to reduce
latency requires that a unit of work be long enough in its
execution to amortize the substantial costs of coordinat-
ing multiple compute elements: one can envision using
concurrency to parallelize a sort of 40 million elements—
but a sort of a mere 40 elements is unlikely to take
enough compute time to pay the overhead of parallelism.
In short, the degree to which one can use concurrency to
reduce latency depends much more on the problem than
on those endeavoring to solve it—and many important
problems are simply not amenable to it.
For long-running operations that cannot be parallel-
ized, concurrent execution can instead be used to perform
useful work while the operation is pending; in this model,
the latency of the operation is not reduced, but it is hid-
den by the progression of the system. Using concurrency
to hide latency is particularly tempting when the opera-
tions themselves are likely to block on entities outside
of the program—for example, a disk I/O operation or a
DNS lookup. Tempting though it may be, one must be
very careful when considering using concurrency merely
to hide latency: having a parallel program can become a
substantial complexity burden to bear just for improved
responsiveness. Further, concurrent execution is not the
only way to hide system-induced latencies: one can often
achieve the same effect by employing nonblocking opera-
tions (e.g., asynchronous I/O) and an event loop (e.g.,
the poll()/select() calls found in Unix) in an otherwise
sequential program. Programmers who wish to hide
latency should therefore consider concurrent execution as
an option, not as a foregone conclusion.
When problems resist parallelization or have no
appreciable latency to hide, the third way that concur-
rent execution can improve performance is to increase
the throughput of the system. Instead of using parallel
logic to make a single operation faster, one can employ
multiple concurrent executions of sequential logic to
accommodate more simultaneous work. Importantly, a
system using concurrency to increase throughput need
not consist exclusively (or even largely) of multithreaded
code. Rather, those components of the system that
share no state can be left entirely sequential, with the
system executing multiple instances of these compo-
nents concurrently. The sharing in the system can then
be offloaded to components explicitly designed around
parallel execution on shared state, which can ideally be
reduced to those elements already known to operate well
in concurrent environments: the database and/or the
operating system.
5. 20 September 2008 ACM QUEUE rants: feedback@acmqueue.com
To make this concrete, in a typical MVC (model-view-
controller) application, the view (typically implemented
in environments such as JavaScript, PHP, or Flash) and
the controller (typically implemented in environments
such as J2EE or Ruby on Rails) can consist purely of
sequential logic and still achieve high levels of concur-
rency, provided that the model (typically implemented
in terms of a database) allows for parallelism. Given that
most don’t write their own databases (and virtually no
one writes their own operating systems), it is possible to
build (and indeed, many have built) highly concurrent,
highly scalable MVC systems without explicitly creating a
single thread or acquiring a single lock; it is concurrency
by architecture instead of by implementation.
ILLUMINATING THE BLACK ART
What if you are the one developing the operating system
or database or some other body of code that must be
explicitly parallelized? If you count yourself among the
relative few who need to write such code, you presum-
ably do not need to be warned that writing multithreaded
code is hard. In fact, this domain’s reputation for dif-
ficulty has led some to conclude (mistakenly) that writing
multithreaded code is simply impossible: “No one knows
how to organize and maintain large systems that rely on
locking,” reads one recent (and typical) assertion.5
Part
of the difficulty of writing scalable and correct multi-
threaded code is the scarcity of written wisdom from
experienced practitioners: oral tradition in lieu of formal
writing has left the domain shrouded in mystery. So in
the spirit of making this domain less mysterious for our
fellow practitioners (if not also to demonstrate that some
of us actually do know how to organize and maintain
large lock-based systems), we present our collective bag of
tricks for writing multithreaded code.
Know your cold paths from your hot paths. If there
is one piece of advice to dispense to those who must
develop parallel systems, it is to know which paths
through your code you want to be able to execute in
parallel (the hot paths) versus which paths can execute
sequentially without affecting performance (the cold
paths). In our experience, much of the software we
write is bone-cold in terms of concurrent execution: it
is executed only when initializing, in administrative
paths, when unloading, etc. Not only is it a waste of time
to make such cold paths execute with a high degree of
parallelism, but it is also dangerous: these paths are often
among the most difficult and error-prone to parallelize.
In cold paths, keep the locking as coarse-grained as
possible. Don’t hesitate to have one lock that covers a
wide range of rare activity in your subsystem. Conversely,
in hot paths—those that must execute concurrently to
deliver highest throughput—you must be much more
careful: locking strategies must be simple and fine-
grained, and you must be careful to avoid activity that
can become a bottleneck. And what if you don’t know if a
given body of code will be the hot path in the system? In
the absence of data, err on the side of assuming that your
code is in a cold path and adopt a correspondingly coarse-
grained locking strategy—but be prepared to be proven
wrong by the data.
Intuition is frequently wrong—be data intensive. In
our experience, many scalability problems can be attrib-
uted to a hot path that the developing engineer originally
believed (or hoped) to be a cold path. When cutting
new software from whole cloth, you will need some
intuition to reason about hot and cold paths—but once
your software is functional, even in prototype form, the
time for intuition has ended: your gut must defer to the
data. Gathering data on a concurrent system is a tough
problem in its own right. It requires you first to have a
machine that is sufficiently concurrent in its execution
to be able to highlight scalability problems. Once you
have the physical resources, it requires you to put load
on the system that resembles the load you expect to see
when your system is deployed into production. Once the
machine is loaded, you must have the infrastructure to be
able to dynamically instrument the system to get to the
root of any scalability problems.
The first of these problems has historically been acute:
there was a time when multiprocessors were so rare that
many software development shops simply didn’t have
access to one. Fortunately, with the rise of multicore
CPUs, this is no longer a problem: there is no longer any
excuse for not being able to find at least a two-processor
(dual-core) machine, and with only a little effort, most
will be able (as of this writing) to run their code on an
eight-processor (two-socket, quad-core) machine.
Even as the physical situation has improved, however,
the second of these problems—knowing how to put load
on the system—has worsened: production deployments
have become increasingly complicated, with loads that
Real-world
CONCURRENCY
6. ACM QUEUE September 2008 21more queue: www.acmqueue.com
are difficult and expensive to simulate in development.
As much as possible, you must treat load generation and
simulation as a first-class problem; the earlier you tackle
this problem in your development, the earlier you will be
able to get critical data that may have tremendous impli-
cations for your software. Although a test load should
mimic its production equivalent as closely as possible,
timeliness is more important than absolute accuracy: the
absence of a perfect load simulation should not prevent
you from simulating load altogether, as it is much better
to put a multithreaded system under the wrong kind of
load than under no load whatsoever.
Once a system is loaded—be it in development or in
production—it is useless to software development if the
impediments to its scalability can’t be understood. Under-
standing scalability inhibitors on a production system
requires the ability to safely dynamically instrument its
synchronization primitives. In developing Solaris, our
need for this was so historically acute that it led one of us
(Bonwick) to develop a technology (lockstat) to do this
in 1997. This tool became instantly essential—we quickly
came to wonder how we ever resolved scalability prob-
lems without it—and it led the other of us (Cantrill) to
further generalize dynamic instrumentation into DTrace,
a system for nearly arbitrary dynamic instrumentation of
production systems that first shipped in Solaris in 2004,
and has since been ported to many other systems includ-
ing FreeBSD and Mac OS.6
(The instrumentation method-
ology in lockstat has been reimplemented to be a DTrace
provider, and the tool itself has been reimplemented to be
a DTrace consumer.)
Today, dynamic instrumentation continues to provide
us with the data we need not only to find those parts
of the system that are inhibiting scalability, but also to
gather sufficient data to understand which techniques
will be best suited for reducing that contention. Proto-
typing new locking strategies is expensive, and one’s
intuition is frequently wrong; before breaking up a lock
or rearchitecting a subsystem to make it more parallel,
we always strive to have the data in hand indicating that
the subsystem’s lack of parallelism is a clear inhibitor to
system scalability!
Know when—and when not—to break up a lock.
Global locks can naturally become scalability inhibitors,
and when gathered data indicates a single hot lock, it
is reasonable to want to break up the lock into per-CPU
locks, a hash table of locks, per-structure locks, etc. This
might ultimately be the right course of action, but before
blindly proceeding down that (complicated) path, care-
fully examine the work done under the lock: breaking
up a lock is not the only way to reduce contention, and
contention can be (and often is) more easily reduced by
decreasing the hold time of the lock. This can be done
by algorithmic improvements (many scalability improve-
ments have been achieved by reducing execution under
the lock from quadratic time to linear time!) or by finding
activity that is needlessly protected by the lock. Here’s a
classic example of this latter case: if data indicates that
you are spending time (say) deallocating elements from a
shared data structure, you could dequeue and gather the
data that needs to be freed with the lock held and defer
the actual deallocation of the data until after the lock is
dropped. Because the data has been removed from the
shared data structure under the lock, there is no data race
(other threads see the removal of the data as atomic), and
lock hold time has been decreased with only a modest
increase in implementation complexity.
Be wary of readers/writer locks. If there is a novice
error when trying to break up a lock, it is this: seeing
that a data structure is frequently accessed for reads and
infrequently accessed for writes, one may be tempted
to replace a mutex guarding the structure with a read-
ers/writer lock to allow for concurrent readers. This seems
reasonable, but unless the hold time for the lock is long,
this solution will scale no better (and indeed, may scale
worse) than having a single lock. Why? Because the
state associated with the readers/writer lock must itself
be updated atomically, and in the absence of a more
sophisticated (and less space-efficient) synchronization
primitive, a readers/writer lock will use a single word
of memory to store the number of readers. Because the
number of readers must be updated atomically, acquiring
the lock as a reader requires the same bus transaction—a
read-to-own—as acquiring a mutex, and contention on
that line can hurt every bit as much.
There are still many situations where long hold times
(e.g., performing I/O under a lock as reader) more than
pay for any memory contention, but one should be sure
to gather data to make sure that it is having the desired
effect on scalability. Even in those situations where a
readers/writer lock is appropriate, an additional note of
caution is warranted around blocking semantics. If, for
example, the lock implementation blocks new readers
when a writer is blocked (a common paradigm to avoid
writer starvation), one cannot recursively acquire a lock as
reader: if a writer blocks between the initial acquisition as
reader and the recursive acquisition as reader, deadlock
will result when the recursive acquisition is blocked. All
of this is not to say that readers/writer locks shouldn’t be
used—just that they shouldn’t be romanticized.
7. 22 September 2008 ACM QUEUE rants: feedback@acmqueue.com
Consider per-CPU locking. Per-CPU locking (that is,
acquiring a lock based on the current CPU identifier) can
be a convenient technique for diffracting contention,
as a per-CPU lock is not likely to be contended (a CPU
can run only one thread at a time). If one has short hold
times and operating modes that have different coherence
requirements, one can have threads acquire a per-CPU
lock in the common (noncoherent) case, and then force
the uncommon case to grab all the per-CPU locks to
construct coherent state. Consider this concrete (if trivial)
example: if one were implementing a global counter that
is frequently updated but infrequently read, one could
implement a per-CPU counter protected by its own lock.
Updates to the counter would update only the per-CPU
copy, and in the uncommon case in which one wanted to
read the counter, all per-CPU locks could be acquired and
their corresponding values summed.
Two notes on this technique: first, it should be
employed only when the data indicates that it’s neces-
sary, as it clearly introduces substantial complexity into
the implementation; second, be sure to have a single
order for acquiring all locks in the cold path: if one case
acquires the per-CPU locks from lowest to highest and
another acquires them from highest to lowest, deadlock
will (naturally) result.
Know when to broadcast—and when to signal. Virtu-
ally all condition variable implementations allow threads
waiting on the variable to be awakened either via a signal
(in which case one thread sleeping on the variable is
awakened) or via a broadcast (in which case all threads
sleeping on the variable are awakened). These constructs
have subtly different semantics: because a broadcast will
awaken all waiting threads, it should generally be used
to indicate state change rather than resource availability.
If a condition broadcast is used when a condition signal
would have been more appropriate, the result will be a
thundering herd: all waiting threads will wake up, fight
over the lock protecting the condition variable, and
(assuming that the first thread to acquire the lock also
consumes the available resource) sleep once again when
they discover that the resource has been consumed.
This needless scheduling and locking activity can have
a serious effect on performance, especially in Java-based
systems, where notifyAll() (i.e., broadcast) seems to have
entrenched itself as a preferred paradigm; changing these
calls to notify() (i.e., signal) has been known to result in
substantial performance gains.7
Learn to debug postmortem. Among some Cassan-
dras of concurrency, a deadlock seems to be a particular
bogeyman of sorts, having become the embodiment of
all that is difficult in lock-based multithreaded program-
ming. This fear is somewhat peculiar, because deadlocks
are actually among the simplest pathologies in software:
because (by definition) the threads involved in a deadlock
cease to make forward progress, they do the implementer
the service of effectively freezing the system with all state
intact. To debug a deadlock, one need have only a list of
threads, their corresponding stack backtraces, and some
knowledge of the system. This information is contained
in a snapshot of state so essential to software develop-
ment that its very name reflects its origins at the dawn of
computing: it is a core dump.
Debugging from a core dump—postmortem debug-
ging—is an essential skill for those who implement
parallel systems: problems in highly parallel systems are
not necessarily reproducible, and a single core dump is
often one’s only chance to debug them. Most debuggers
support postmortem debugging, and many allow user-
defined extensions.8
We encourage practitioners to under-
stand their debugger’s support for postmortem debugging
(especially of parallel programs) and to develop exten-
sions specific to debugging their systems.
Design your systems to be composable. Among the
more galling claims of the detractors of lock-based sys-
tems is the notion that they are somehow uncomposable:
“Locks and condition variables do not support modular
programming,” reads one typically brazen claim, “build-
ing large programs by gluing together smaller programs[:]
locks make this impossible.”9
The claim, of course,
is incorrect. For evidence one need only point at the
composition of lock-based systems such as databases and
operating systems into larger systems that remain entirely
unaware of lower-level locking.
There are two ways to make lock-based systems com-
pletely composable, and each has its own place. First (and
most obviously), one can make locking entirely internal
to the subsystem. For example, in concurrent operating
systems, control never returns to user level with in-kernel
locks held; the locks used to implement the system itself
are entirely behind the system call interface that con-
stitutes the interface to the system. More generally, this
model can work whenever a crisp interface exists between
Real-world
CONCURRENCY
8. ACM QUEUE September 2008 23more queue: www.acmqueue.com
software components: as long as control flow is never
returned to the caller with locks held, the subsystem will
remain composable.
Second (and perhaps counterintuitively), one can
achieve concurrency and composability by having no
locks whatsoever. In this case, there must be no global
subsystem state—subsystem state must be captured in
per-instance state, and it must be up to consumers of the
subsystem to assure that they do not access their instance
in parallel. By leaving locking up to the client of the sub-
system, the subsystem itself can be used concurrently by
different subsystems and in different contexts. A concrete
example of this is the AVL tree implementation used
extensively in the Solaris kernel. As with any balanced
binary tree, the implementation is sufficiently complex
to merit componentization, but by not having any global
state, the implementation may be used concurrently by
disjoint subsystems—the only constraint is that manipu-
lation of a single AVL tree instance must be serialized.
Don’t use a semaphore where a mutex would suf-
fice. A semaphore is a generic synchronization primi-
tive originally described by Dijkstra that can be used to
effect a wide range of behavior. It may be tempting to use
semaphores in lieu of mutexes to protect critical sections,
but there is an important difference between the two
constructs: unlike a semaphore, a mutex has a notion of
ownership—the lock is either owned or not, and if it is
owned, it has a known owner. By contrast, a semaphore
(and its kin, the condition variable) has no notion of
ownership: when sleeping on a semaphore, one has no
way of knowing which thread one is blocking upon.
The lack of ownership presents several problems when
used to protect critical sections. First, there is no way of
propagating the blocking thread’s scheduling priority
to the thread that is in the critical section. This ability
to propagate scheduling priority—priority inheritance—is
critical in a realtime system, and in the absence of other
protocols, semaphore-based systems will always be
vulnerable to priority inversions. A second problem with
the lack of ownership is that it deprives the system of the
ability to make assertions about itself. For example, when
ownership is tracked, the machinery that implements
thread blocking can detect pathologies such as deadlocks
and recursive lock acquisitions, inducing fatal failure (and
that all-important core dump) upon detection. Finally,
the lack of ownership makes debugging much more
onerous. A common pathology in a multithreaded system
is a lock not being dropped in some errant return path.
When ownership is tracked, one at least has the smoking
gun of the past (faulty) owner—and, thus, clues as to the
code path by which the lock was not correctly dropped.
Without ownership, one is left clueless and reduced to
debugging by staring at code/the ceiling/into space.
All of this is not to say that semaphores shouldn’t be
used (indeed, some problems are uniquely suited to a
semaphore’s semantics), just that they shouldn’t be used
when mutexes would suffice.
Consider memory retiring to implement per-chain
hash-table locks. Hash tables are common data structures
in performance-critical systems software, and sometimes
they must be accessed in parallel. In this case, adding a
lock to each hash chain, with the per-chain lock held
while readers or writers iterate over the chain, seems
straightforward. The problem, however, is resizing the
table: dynamically resizing a hash table is central to its
efficient operation, and the resize means changing the
memory that contains the table. That is, in a resize the
pointer to the hash table must change—but we do not
wish to require hash lookups to acquire a global lock to
determine the current hash table!
This problem has several solutions, but a (relatively)
straightforward one is to retire memory associated with
old hash tables instead of freeing it. On a resize, all
per-chain locks are acquired (using a well-defined order
to prevent deadlock), and a new table is then allocated,
with the contents of the old hash table being rehashed
into the new table. After this operation, the old table is
not deallocated but rather placed in a queue of old hash
tables. Hash lookups then require a slight modification to
operate correctly: after acquiring the per-chain lock, the
lookup must check the hash-table pointer and compare
it with the hash-table pointer that was used to determine
the hash chain. If the hash table has changed (that is,
if a hash resize has occurred), it must drop the lock and
repeat the lookup (which will acquire the correct chain
lock in the new table).
There are some delicate issues in implementing
this—the hash-table pointer must be declared volatile,
There is universal
agreement that
writing
multithreaded code
is difficult.
9. 24 September 2008 ACM QUEUE rants: feedback@acmqueue.com
and the size of the hash table must be contained in the
table itself—but the implementation complexity is modest
given the alternatives, and (assuming hash tables are
doubled when they are resized) the cost in terms of
memory is only a factor of two. For an example of this
in production code, the reader is directed to the file
descriptor locking in Solaris, the source code for which
can be found by searching the Internet for “flist_grow.”
Be aware of false sharing. There are a variety of differ-
ent protocols for keeping memory coherent in caching
multiprocessor systems. Typically, these protocols dictate
that only a single cache may have a given line of memory
in a dirty state. If a different cache wishes to write to the
dirty line, the new cache must first read-to-own the dirty
line from the owning cache. The size of the line used for
coherence (the coherence granularity) has an important
ramification for parallel software: because only one cache
may own a line a given time, one wishes to avoid a situ-
ation where two (or more) small, disjoint data structures
are both contained within a single line and accessed in
parallel by disjoint caches. This situation—called false
sharing—can induce suboptimal scalability in otherwise
scalable software. This most frequently arises in practice
when one attempts to defract contention with an array of
locks: the size of a lock structure is typically no more than
the size of a pointer or two and is usually quite a bit less
than the coherence granularity (which is typically on the
order of 64 bytes). Disjoint CPUs acquiring different locks
can therefore potentially contend for the same cache line.
False sharing is excruciating to detect dynamically: it
requires not only a bus analyzer, but also a way of trans-
lating from the physical addresses of the bus to the virtual
addresses that make sense to software, and then from
there to the actual structures that are inducing the false
sharing. (This process is so arduous and error-prone that
we have experimented—with some success—with static
mechanisms to detect false sharing.10
) Fortunately, false
sharing is rarely the single greatest scalability inhibitor
in a system, and it can be expected to be even less of an
issue on a multicore system (where caches are more likely
to be shared among CPUs). Nonetheless, this remains an
issue that the practitioner should be aware of, especially
when creating arrays that are designed to be accessed in
parallel. (In this situation, array elements should be pad-
ded out to be a multiple of the coherence granularity.)
Consider using nonblocking synchronization routines
to monitor contention. Many synchronization primitives
have different entry points to specify different behavior if
the primitive is unavailable: the default entry point will
typically block, whereas an alternative entry point will
return an error code instead of blocking. This second vari-
ant has a number of uses, but a particularly interesting
one is the monitoring of one’s own contention: when an
attempt to acquire a synchronization primitive fails, the
subsystem can know that there is contention. This can be
especially useful if a subsystem has a way of dynamically
reducing its contention. For example, the Solaris kernel
memory allocator has per-CPU caches of memory buffers.
When a CPU exhausts its per-CPU caches, it must obtain
a new series of buffers from a global pool. Instead of
simply acquiring a lock in this case, the code attempts to
acquire the lock, incrementing a counter when this fails
(and then acquiring the lock through the blocking entry
point). If the counter reaches a predefined threshold, the
size of the per-CPU caches is increased, thereby dynami-
cally reducing contention.
When reacquiring locks, consider using generation
counts to detect state change. When lock ordering
becomes complicated, at times one will need to drop
one lock, acquire another, and then reacquire the first.
This can be tricky, as state protected by the first lock
may have changed during the time that the lock was
dropped—and reverifying this state may be exhausting,
inefficient, or even impossible. In these cases, consider
associating a generation count with the data structure;
when a change is made to the data structure, a genera-
tion count is bumped. The logic that drops and reacquires
the lock must cache the generation before dropping the
lock, and then check the generation upon reacquisition:
if the counts are the same, the data structure is as it was
when the lock was dropped and the logic may proceed; if
the count is different, the state has changed and the logic
may react accordingly (for example, by reattempting the
larger operation).
Use wait- and lock-free structures only if you abso-
lutely must. Over our careers, we have each implemented
wait- and lock-free data structures in production code, but
we did this only in contexts in which locks could not be
acquired for reasons of correctness. Examples include the
implementation of the locking system itself,11
the sub-
systems that span interrupt levels, and dynamic instru-
mentation facilities.12
These constrained contexts are the
Real-world
CONCURRENCY