Highlighted notes of:
Chapter 4: Parallel Programming in CUDA C
Book:
CUDA by Example
An Introduction to General Purpose GPU Computing
Authors:
Jason Sanders
Edward Kandrot
“This book is required reading for anyone working with accelerator-based computing systems.”
–From the Foreword by Jack Dongarra, University of Tennessee and Oak Ridge National Laboratory
CUDA is a computing architecture designed to facilitate the development of parallel programs. In conjunction with a comprehensive software platform, the CUDA Architecture enables programmers to draw on the immense power of graphics processing units (GPUs) when building high-performance applications. GPUs, of course, have long been available for demanding graphics and game applications. CUDA now brings this valuable resource to programmers working on applications in other domains, including science, engineering, and finance. No knowledge of graphics programming is required–just the ability to program in a modestly extended version of C.
CUDA by Example, written by two senior members of the CUDA software platform team, shows programmers how to employ this new technology. The authors introduce each area of CUDA development through working examples. After a concise introduction to the CUDA platform and architecture, as well as a quick-start guide to CUDA C, the book details the techniques and trade-offs associated with each key CUDA feature. You’ll discover when to use each CUDA C extension and how to write CUDA software that delivers truly outstanding performance.
Table of Contents
Why CUDA? Why Now?
Getting Started
Introduction to CUDA C
Parallel Programming in CUDA C
Thread Cooperation
Constant Memory and Events
Texture Memory
Graphics Interoperability
Atomics
Streams
CUDA C on Multiple GPUs
The Final Countdown
All the CUDA software tools you’ll need are freely available for download from NVIDIA.
Jason Sanders is a senior software engineer in NVIDIA’s CUDA Platform Group. He helped develop early releases of CUDA system software and contributed to the OpenCL 1.0 Specification, an industry standard for heterogeneous computing. He has held positions at ATI Technologies, Apple, and Novell.
Edward Kandrot is a senior software engineer on NVIDIA’s CUDA Algorithms team. He has more than twenty years of industry experience optimizing code performance for firms including Adobe, Microsoft, Google, and Autodesk.
Chapter 4
Parallel Programming in CUDA C
In the previous chapter, we saw how simple it can be to write code that executes
on the GPU. We have even gone so far as to learn how to add two numbers
together, albeit just the numbers 2 and 7. Admittedly, that example was not
immensely impressive, nor was it incredibly interesting. But we hope you are
convinced that it is easy to get started with CUDA C and you’re excited to learn
more. Much of the promise of GPU computing lies in exploiting the massively
parallel structure of many problems. In this vein, we intend to spend this chapter
examining how to execute parallel code on the GPU using CUDA C.
Chapter Objectives

CUDA Parallel Programming
Previously, we saw how easy it was to get a standard C function to start running
on a device. By adding the __global__ qualifier to the function and by calling
it using a special angle bracket syntax, we executed the function on our GPU.
Although this was extremely simple, it was also extremely inefficient because
NVIDIA’s hardware engineering minions have optimized their graphics processors
to perform hundreds of computations in parallel. However, thus far we have only
ever launched a kernel that runs serially on the GPU. In this chapter, we see how
straightforward it is to launch a device kernel that performs its computations in
parallel.
4.2.1 Summing Vectors
We will contrive a simple example to illustrate threads and how we use them to
code with CUDA C. Imagine having two lists of numbers where we want to sum
corresponding elements of each list and store the result in a third list. Figure 4.1
shows this process. If you have any background in linear algebra, you will recognize this operation as summing two vectors.
[Figure 4.1: Summing two vectors]
CPU Vector Sums
First we’ll look at one way this addition can be accomplished with traditional C code:
#include "../common/book.h"
#define N 10
void add( int *a, int *b, int *c ) {
int tid = 0; // this is CPU zero, so we start at zero
while (tid < N) {
c[tid] = a[tid] + b[tid];
tid += 1; // we have one CPU, so we increment by one
}
}
int main( void ) {
int a[N], b[N], c[N];
// fill the arrays 'a' and 'b' on the CPU
for (int i=0; i<N; i++) {
a[i] = -i;
b[i] = i * i;
}
add( a, b, c );
// display the results
for (int i=0; i<N; i++) {
printf( "%d + %d = %dn", a[i], b[i], c[i] );
}
return 0;
}
Most of this example bears almost no explanation, but we will briefly look at the
add() function to explain why we overly complicated it.
void add( int *a, int *b, int *c ) {
int tid = 0; // this is CPU zero, so we start at zero
while (tid < N) {
c[tid] = a[tid] + b[tid];
tid += 1; // we have one CPU, so we increment by one
}
}
We compute the sum within a while loop where the index tid ranges from 0 to
N-1. We add corresponding elements of a[] and b[], placing the result in the
corresponding element of c[]. One would typically code this in a slightly simpler
manner, like so:
void add( int *a, int *b, int *c ) {
for (i=0; i < N; i++) {
c[i] = a[i] + b[i];
}
}
Our slightly more convoluted method was intended to suggest a potential way to
parallelize the code on a system with multiple CPUs or CPU cores. For example,
with a dual-core processor, one could change the increment to 2 and have one
core initialize the loop with tid = 0 and another with tid = 1. The first core
would add the even-indexed elements, and the second core would add the odd-
indexed elements. This amounts to executing the following code on each of the
two CPU cores:
CPU Core 1:
void add( int *a, int *b, int *c ) {
    int tid = 0;
    while (tid < N) {
        c[tid] = a[tid] + b[tid];
        tid += 2;
    }
}

CPU Core 2:
void add( int *a, int *b, int *c ) {
    int tid = 1;
    while (tid < N) {
        c[tid] = a[tid] + b[tid];
        tid += 2;
    }
}
Of course, doing this on a CPU would require considerably more code than we
have included in this example. You would need to provide a reasonable amount of
infrastructure to create the worker threads that execute the function add() as
well as make the assumption that each thread would execute in parallel, a sched-
uling assumption that is unfortunately not always true.
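To make this concrete, here is a minimal sketch (ours, not the book's) of the pthread scaffolding such a dual-core version would need; the thread arguments 0 and 1 stand in for the two cores, and nothing guarantees the two workers actually run simultaneously:
#include <pthread.h>
#include <stdio.h>
#define N 10
int a[N], b[N], c[N];
// each worker sums the elements tid, tid+2, tid+4, ...
void *add_worker( void *arg ) {
    int tid = *(int*)arg;        // 0 for one worker, 1 for the other
    while (tid < N) {
        c[tid] = a[tid] + b[tid];
        tid += 2;                // two workers, so stride by two
    }
    return NULL;
}
int main( void ) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    for (int i=0; i<N; i++) { a[i] = -i; b[i] = i * i; }
    pthread_create( &t0, NULL, add_worker, &id0 );
    pthread_create( &t1, NULL, add_worker, &id1 );
    pthread_join( t0, NULL );    // wait for both workers to finish
    pthread_join( t1, NULL );
    for (int i=0; i<N; i++)
        printf( "%d + %d = %d\n", a[i], b[i], c[i] );
    return 0;
}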
GPU Vector Sums
We can accomplish the same addition very similarly on a GPU by writing add()
as a device function. This should look similar to code you saw in the previous
chapter. But before we look at the device code, we present main(). Although the
GPU implementation of main() is different from the corresponding CPU version,
nothing here should look new:
#include "../common/book.h"
#define N 10
int main( void ) {
int a[N], b[N], c[N];
int *dev_a, *dev_b, *dev_c;
// allocate the memory on the GPU
HANDLE_ERROR( cudaMalloc( (void**)&dev_a, N * sizeof(int) ) );
HANDLE_ERROR( cudaMalloc( (void**)&dev_b, N * sizeof(int) ) );
HANDLE_ERROR( cudaMalloc( (void**)&dev_c, N * sizeof(int) ) );
// fill the arrays 'a' and 'b' on the CPU
for (int i=0; i<N; i++) {
a[i] = -i;
b[i] = i * i;
}
We allocate the memory on the GPU using cudaMalloc(): two arrays, dev_a and dev_b, to hold the inputs, and one array, dev_c, to hold the result. Because we are environmentally conscientious coders, we clean up after ourselves at the end with cudaFree(). The input data is copied to the device with cudaMemcpy(), and we invoke add() from the host code in main() using the triple angle bracket syntax.
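The highlighted notes skip the code being described; following the book's conventions, the elided remainder of main() would look roughly like this (a sketch reconstructed from the surrounding text, not a verbatim listing):
    // copy the arrays 'a' and 'b' to the GPU
    HANDLE_ERROR( cudaMemcpy( dev_a, a, N * sizeof(int),
                              cudaMemcpyHostToDevice ) );
    HANDLE_ERROR( cudaMemcpy( dev_b, b, N * sizeof(int),
                              cudaMemcpyHostToDevice ) );
    add<<<N,1>>>( dev_a, dev_b, dev_c );
    // copy the array 'c' back from the GPU to the CPU
    HANDLE_ERROR( cudaMemcpy( c, dev_c, N * sizeof(int),
                              cudaMemcpyDeviceToHost ) );
    // display the results
    for (int i=0; i<N; i++) {
        printf( "%d + %d = %d\n", a[i], b[i], c[i] );
    }
    // free the memory allocated on the GPU
    cudaFree( dev_a );
    cudaFree( dev_b );
    cudaFree( dev_c );
    return 0;
}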
Notice also that we have written a function called add() that executes on the device. We accomplished this by taking C code and adding a __global__ qualifier to the function name.
So far, there is nothing new in this example except it can do more than add 2 and
7. However, there are two noteworthy components of this example: The param-
eters within the triple angle brackets and the code contained in the kernel itself
both introduce new concepts.
Up to this point, we have always seen kernels launched in the following form:
kernel<<<1,1>>>( param1, param2, … );
But in this example we are launching with a number in the angle brackets that is
not 1:
add<<<N,1>>>( dev_a, dev_b, dev_c );
What gives?
Recall that we left those two numbers in the angle brackets unexplained; we
stated vaguely that they were parameters to the runtime that describe how to
launch the kernel. Well, the first number in those parameters represents the
number of parallel blocks in which we would like the device to execute our kernel.
In this case, we’re passing the value N for this parameter.
For example, if we launch with kernel<<<2,1>>>(), you can think of the
runtime creating two copies of the kernel and running them in parallel. We call
each of these parallel invocations a block. With kernel<<<256,1>>>(), you
would get 256 blocks running on the GPU. Parallel programming has never been
easier.
But this raises an excellent question: The GPU runs N copies of our kernel code,
but how can we tell from within the code which block is currently running? This
question brings us to the second new feature of the example, the kernel code
itself. Specifically, it brings us to the variable blockIdx.x:
__global__ void add( int *a, int *b, int *c ) {
int tid = blockIdx.x; // handle the data at this index
if (tid < N)
c[tid] = a[tid] + b[tid];
}
At first glance, it looks like this variable should cause a syntax error at compile
time since we use it to assign the value of tid, but we have never defined it.
However, there is no need to define the variable blockIdx; this is one of the
built-in variables that the CUDA runtime defines for us. Furthermore, we use this
variable for exactly what it sounds like it means. It contains the value of the block
index for whichever block is currently running the device code.
Why, you may then ask, is it not just blockIdx? Why blockIdx.x? As it turns
out, CUDA C allows you to define a group of blocks in two dimensions. For prob-
lems with two-dimensional domains, such as matrix math or image processing,
it is often convenient to use two-dimensional indexing to avoid annoying transla-
tions from linear to rectangular indices. Don’t worry if you aren’t familiar with
these problem types; just know that using two-dimensional indexing can some-
times be more convenient than one-dimensional indexing. But you never have to
use it. We won’t be offended.
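As a taste of what two-dimensional indexing looks like (the Julia Set example later in this chapter uses exactly this pattern; WIDTH and HEIGHT here are hypothetical), consider:
#define WIDTH  64
#define HEIGHT 64
// one block per (x,y) element of a WIDTH x HEIGHT domain
__global__ void fill( int *out ) {
    int x = blockIdx.x;              // column index of this block
    int y = blockIdx.y;              // row index of this block
    out[x + y * WIDTH] = x + y;      // linearize the 2D index
}
// launched as: dim3 grid( WIDTH, HEIGHT ); fill<<<grid,1>>>( dev_out );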
When we launched the kernel, we specified N as the number of parallel blocks.
We call the collection of parallel blocks a grid. This specifies to the runtime
system that we want a one-dimensional grid of N blocks (scalar values are
interpreted as one-dimensional). These blocks will have varying values for
blockIdx.x, the first taking value 0 and the last taking value N-1. So, imagine
four blocks, all running through the same copy of the device code but having
different values for the variable blockIdx.x. This is what the actual code being
executed in each of the four parallel blocks looks like after the runtime substi-
tutes the appropriate block index for blockIdx.x:
BLOCK 1:
__global__ void add( int *a, int *b, int *c ) {
    int tid = 0;
    if (tid < N)
        c[tid] = a[tid] + b[tid];
}

BLOCK 2:
__global__ void add( int *a, int *b, int *c ) {
    int tid = 1;
    if (tid < N)
        c[tid] = a[tid] + b[tid];
}

BLOCK 3:
__global__ void add( int *a, int *b, int *c ) {
    int tid = 2;
    if (tid < N)
        c[tid] = a[tid] + b[tid];
}

BLOCK 4:
__global__ void add( int *a, int *b, int *c ) {
    int tid = 3;
    if (tid < N)
        c[tid] = a[tid] + b[tid];
}
If you recall the CPU-based example with which we began, you will recall that we
needed to walk through indices from 0 to N-1 in order to sum the two vectors.
Since the runtime system is already launching a kernel where each block will
have one of these indices, nearly all of this work has already been done for us.
Because we’re something of a lazy lot, this is a good thing. It affords us more time
to blog, probably about how lazy we are.
The last remaining question to be answered is, why do we check whether tid
is less than N? It should always be less than N, since we’ve specifically launched
our kernel such that this assumption holds. But our desire to be lazy also makes
us paranoid about someone breaking an assumption we’ve made in our code.
Breaking code assumptions means broken code. This means bug reports, late
nights tracking down bad behavior, and generally lots of activities that stand
between us and our blog. If we didn’t check that tid is less than N and subse-
quently fetched memory that wasn’t ours, this would be bad. In fact, it could
possibly kill the execution of your kernel, since GPUs have sophisticated memory
management units that kill processes that seem to be violating memory rules.
If you encounter problems like the ones just mentioned, one of the
HANDLE_ERROR() macros that we've sprinkled so liberally throughout the code will
detect and alert you to the situation. As with traditional C programming, the
lesson here is that functions return error codes for a reason. Although it is
always tempting to ignore these error codes, we would love to save you the hours
of pain through which we have suffered by urging that you check the results of
every operation that can fail. As is often the case, the presence of these errors
will not prevent you from continuing the execution of your application, but they
will most certainly cause all manner of unpredictable and unsavory side effects
downstream.
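For reference, HANDLE_ERROR() comes from the book's common/book.h; it is essentially a check-and-exit wrapper along these lines (a sketch, not the verbatim header):
static void HandleError( cudaError_t err, const char *file, int line ) {
    if (err != cudaSuccess) {
        // report which call failed and where, then give up
        printf( "%s in %s at line %d\n",
                cudaGetErrorString( err ), file, line );
        exit( EXIT_FAILURE );
    }
}
#define HANDLE_ERROR( err ) (HandleError( err, __FILE__, __LINE__ ))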
At this point, you’re running code in parallel on the GPU. Perhaps you had heard
this was tricky or that you had to understand computer graphics to do general-
purpose programming on a graphics processor. We hope you are starting to see
how CUDA C makes it much easier to get started writing parallel code on a GPU.
We used the example only to sum vectors of length 10. If you would like to see
how easy it is to generate a massively parallel application, try changing the 10 in
the line #define N 10 to 10000 or 50000 to launch tens of thousands of parallel
blocks. Be warned, though: No dimension of your launch of blocks may exceed
65,535. This is simply a hardware-imposed limit, so you will start to see failures if
you attempt launches with more blocks than this. In the next chapter, we will see
how to work within this limitation.
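As a preview of one way to work within that limit (our sketch, not the book's solution, which arrives in the next chapter): cap the block count and let each block loop over several elements, striding by the number of blocks actually launched:
__global__ void add( int *a, int *b, int *c ) {
    int tid = blockIdx.x;
    while (tid < N) {            // each block handles several elements
        c[tid] = a[tid] + b[tid];
        tid += gridDim.x;        // stride by the number of launched blocks
    }
}
// launched with a block count safely under 65,535:
// add<<<128,1>>>( dev_a, dev_b, dev_c );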
4.2.2 A Fun Example
We don’t mean to imply that adding vectors is anything less than fun, but the
following example will satisfy those looking for some flashy examples of parallel
CUDA C.
The following example will demonstrate code to draw slices of the Julia Set. For
the uninitiated, the Julia Set is the boundary of a certain class of functions over
complex numbers. Undoubtedly, this sounds even less fun than vector addi-
tion and matrix multiplication. However, for almost all values of the function’s
parameters, this boundary forms a fractal, one of the most interesting and beau-
tiful curiosities of mathematics.
The calculations involved in generating such a set are quite simple. At its heart,
the Julia Set evaluates a simple iterative equation for points in the complex plane.
A point is not in the set if the process of iterating the equation diverges for that
point. That is, if the sequence of values produced by iterating the equation grows
toward infinity, a point is considered outside the set. Conversely, if the values
taken by the equation remain bounded, the point is in the set.
Computationally, the iterative equation in question is remarkably simple, as
shown in Equation 4.1.
Equation 4.1: Z_{n+1} = Z_n^2 + C
Computing an iteration of Equation 4.1 would therefore involve squaring the
current value and adding a constant to get the next value of the equation.
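In code, as we will see in julia() below, one iteration is a single complex multiply-add:
a = a * a + c;    // square the current value, add the constant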
CPU Julia Set
We will examine a source listing now that will compute and visualize the Julia
Set. Since this is a more complicated program than we have studied so far, we will
split it into pieces here. Later in the chapter, you will see the entire source listing.
int main( void ) {
CPUBitmap bitmap( DIM, DIM );
unsigned char *ptr = bitmap.get_ptr();
kernel( ptr );
bitmap.display_and_exit();
}
Our main routine is remarkably simple. It creates the appropriate size bitmap
image using a utility library provided. Next, it passes a pointer to the bitmap data
to the kernel function.
void kernel( unsigned char *ptr ){
for (int y=0; y<DIM; y++) {
for (int x=0; x<DIM; x++) {
int offset = x + y * DIM;
int juliaValue = julia( x, y );
ptr[offset*4 + 0] = 255 * juliaValue;
ptr[offset*4 + 1] = 0;
ptr[offset*4 + 2] = 0;
ptr[offset*4 + 3] = 255;
}
}
}
The computation kernel does nothing more than iterate through all points we care to render, calling julia() on each to determine membership in the Julia Set. The function julia() will return 1 if the point is in the set and 0 if it is not in the set. We set the point's color to be red if julia() returns 1 and black if it returns 0. These colors are arbitrary, and you should feel free to choose a color scheme that matches your personal aesthetics.
int julia( int x, int y ) {
const float scale = 1.5;
float jx = scale * (float)(DIM/2 - x)/(DIM/2);
float jy = scale * (float)(DIM/2 - y)/(DIM/2);
cuComplex c(-0.8, 0.156);
cuComplex a(jx, jy);
int i = 0;
for (i=0; i<200; i++) {
a = a * a + c;
if (a.magnitude2() > 1000)
return 0;
}
return 1;
}
This function is the meat of the example. We begin by translating our pixel
coordinate to a coordinate in complex space. To center the complex plane at the
image center, we shift by DIM/2. Then, to ensure that the image spans the range
of -1.0 to 1.0, we scale the image coordinate by DIM/2. Thus, given an image
point at (x,y), we get a point in complex space at ((DIM/2 - x)/(DIM/2), (DIM/2 - y)/(DIM/2)).
Then, to potentially zoom in or out, we introduce a scale factor. Currently, the scale
is hard-coded to be 1.5, but you should tweak this parameter to zoom in or out. If you
are feeling really ambitious, you could make this a command-line parameter.
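Spot-checking the mapping with DIM at 1000 and scale at 1.5 (our arithmetic, not the book's):
// x = 0   -> jx = 1.5 * (500 - 0)   / 500 =  1.5     (left edge)
// x = 500 -> jx = 1.5 * (500 - 500) / 500 =  0.0     (center)
// x = 999 -> jx = 1.5 * (500 - 999) / 500 = -1.497   (right edge)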
After obtaining the point in complex space, we then need to determine whether the point is in or out of the Julia Set. If you recall the previous section, we do this by computing the values of the iterative equation Z_{n+1} = Z_n^2 + C. Since C is some arbitrary complex-valued constant, we have chosen -0.8 + 0.156i because it happens to yield an interesting picture. You should play with this constant if you want to see other versions of the Julia Set.
In the example, we compute 200 iterations of this function. After each iteration,
we check whether the magnitude of the result exceeds some threshold (1,000 for
our purposes). If so, the equation is diverging, and we can return 0 to indicate that
the point is not in the set. On the other hand, if we finish all 200 iterations and the
magnitude is still bounded under 1,000, we assume that the point is in the set,
and we return 1 to the caller, kernel().
Since all the computations are being performed on complex numbers, we define
a generic structure to store complex numbers.
struct cuComplex {
float r;
float i;
cuComplex( float a, float b ) : r(a), i(b) {}
float magnitude2( void ) { return r * r + i * i; }
cuComplex operator*(const cuComplex& a) {
return cuComplex(r*a.r - i*a.i, i*a.r + r*a.i);
}
cuComplex operator+(const cuComplex& a) {
return cuComplex(r+a.r, i+a.i);
}
};
The class represents complex numbers with two data elements: a single-
precision real component r and a single-precision imaginary component i.
The class defines addition and multiplication operators that combine complex
numbers as expected. (If you are completely unfamiliar with complex numbers,
you can get a quick primer online.) Finally, we define a method that returns the
magnitude of the complex number.
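As a quick sanity check of those operators (our example, not from the book):
cuComplex u( 3.0f, 4.0f );    // 3 + 4i
cuComplex v( 1.0f, 2.0f );    // 1 + 2i
cuComplex s = u + v;          // 4 + 6i
cuComplex p = u * v;          // (3*1 - 4*2) + (4*1 + 3*2)i = -5 + 10i
float m = u.magnitude2();     // 3*3 + 4*4 = 25 (squared magnitude)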
GPU Julia Set
The device implementation is remarkably similar to the CPU version, continuing a
trend you may have noticed.
int main( void ) {
CPUBitmap bitmap( DIM, DIM );
unsigned char *dev_bitmap;
HANDLE_ERROR( cudaMalloc( (void**)&dev_bitmap,
bitmap.image_size() ) );
dim3 grid(DIM,DIM);
kernel<<<grid,1>>>( dev_bitmap );
HANDLE_ERROR( cudaMemcpy( bitmap.get_ptr(),
dev_bitmap,
bitmap.image_size(),
cudaMemcpyDeviceToHost ) );
bitmap.display_and_exit();
cudaFree( dev_bitmap );
}
This version of main() looks much more complicated than the CPU version, but
the flow is actually identical. Like with the CPU version, we create a DIM x DIM
bitmap image using our utility library. But because we will be doing computa-
tion on a GPU, we also declare a pointer called dev_bitmap to hold a copy
of the data on the device. And to hold data, we need to allocate memory using
cudaMalloc().
We then run our kernel() function exactly like in the CPU version, although
now it is a __global__ function, meaning it will run on the GPU. As with the
CPU example, we pass kernel() the pointer we allocated in the previous line to
store the results. The only difference is that the memory resides on the GPU now,
not on the host system.
The most significant difference is that we specify how many parallel blocks on
which to execute the function kernel(). Because each point can be computed
independently of every other point, we simply specify one copy of the function for
each point we want to compute. We mentioned that for some problem domains,
it helps to use two-dimensional indexing. Unsurprisingly, computing function
values over a two-dimensional domain such as the complex plane is one of these
problems. So, we specify a two-dimensional grid of blocks in this line:
dim3 grid(DIM,DIM);
The type dim3 is not a standard C type, lest you feared you had forgotten some
key pieces of information. Rather, the CUDA runtime header files define some
convenience types to encapsulate multidimensional tuples. The type dim3 repre-
sents a three-dimensional tuple that will be used to specify the size of our launch.
But why do we use a three-dimensional value when we oh-so-clearly stated that
our launch is a two-dimensional grid?
Frankly, we do this because a three-dimensional dim3 value is what the CUDA
runtime expects. Although a three-dimensional launch grid is not currently
supported, the CUDA runtime still expects a dim3 variable where the last compo-
nent equals 1. When we initialize it with only two values, as we do in the state-
ment dim3 grid(DIM,DIM), the CUDA runtime automatically fills the third
dimension with the value 1, so everything here will work as expected. Although
it’s possible that NVIDIA will support a three-dimensional grid in the future, for
now we’ll just play nicely with the kernel launch API because when coders and
APIs fight, the API always wins.
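In other words, the following declarations are equivalent (a small illustration of the fill-with-1 behavior just described):
dim3 grid( DIM, DIM );        // third component defaults to 1
dim3 also( DIM, DIM, 1 );     // explicitly three-dimensional
// grid.x == DIM, grid.y == DIM, grid.z == 1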
We then pass our dim3 variable grid to the CUDA runtime in this line:
kernel<<<grid,1>>>( dev_bitmap );
Finally, a consequence of the results residing on the device is that after executing
kernel(), we have to copy the results back to the host. As we learned in
previous chapters, we accomplish this with a call to cudaMemcpy(), specifying
the direction cudaMemcpyDeviceToHost as the last argument.
HANDLE_ERROR( cudaMemcpy( bitmap.get_ptr(),
dev_bitmap,
bitmap.image_size(),
cudaMemcpyDeviceToHost ) );
One of the last wrinkles lies in the implementation of kernel().
__global__ void kernel( unsigned char *ptr ) {
// map from threadIdx/BlockIdx to pixel position
int x = blockIdx.x;
int y = blockIdx.y;
int offset = x + y * gridDim.x;
// now calculate the value at that position
int juliaValue = julia( x, y );
ptr[offset*4 + 0] = 255 * juliaValue;
ptr[offset*4 + 1] = 0;
ptr[offset*4 + 2] = 0;
ptr[offset*4 + 3] = 255;
}
First, we need kernel() to be declared as a __global__ function so it runs
on the device but can be called from the host. Unlike the CPU version, we no
longer need nested for() loops to generate the pixel indices that get passed
to julia(). As with the vector addition example, the CUDA runtime generates
these indices for us in the variable blockIdx. This works because we declared
our grid of blocks to have the same dimensions as our image, so we get one block
for each pair of integers (x,y) between (0,0) and (DIM-1, DIM-1).
Next, the only additional information we need is a linear offset into our output
buffer, ptr. This gets computed using another built-in variable, gridDim. This
variable is a constant across all blocks and simply holds the dimensions of the
grid that was launched. In this example, it will always be the value (DIM, DIM).
So, multiplying the row index by the grid width and adding the column index will
give us a unique index into ptr that ranges from 0 to (DIM*DIM-1).
int offset = x + y * gridDim.x;
Finally, we examine the actual code that determines whether a point is in or out
of the Julia Set. This code should look identical to the CPU version, continuing a
trend we have seen in many examples now.
__device__ int julia( int x, int y ) {
const float scale = 1.5;
float jx = scale * (float)(DIM/2 - x)/(DIM/2);
float jy = scale * (float)(DIM/2 - y)/(DIM/2);
cuComplex c(-0.8, 0.156);
cuComplex a(jx, jy);
int i = 0;
for (i=0; i<200; i++) {
a = a * a + c;
if (a.magnitude2() > 1000)
return 0;
}
return 1;
}
Again, we define a cuComplex structure that stores a complex number with single-precision floating-point components. The structure also defines addition and multiplication operators as well as a function to return the magnitude of the complex value.
struct cuComplex {
float r;
float i;
cuComplex( float a, float b ) : r(a), i(b) {}
__device__ float magnitude2( void ) {
return r * r + i * i;
}
__device__ cuComplex operator*(const cuComplex& a) {
return cuComplex(r*a.r - i*a.i, i*a.r + r*a.i);
}
__device__ cuComplex operator+(const cuComplex& a) {
return cuComplex(r+a.r, i+a.i);
}
};
Notice that we use the same language constructs in CUDA C that we use in our
CPU version. The one difference is the qualifier __device__, which indicates
that this code will run on a GPU and not on the host. Recall that because these
functions are declared as __device__ functions, they will be callable only from
other __device__ functions or from __global__ functions.
Since we’ve interrupted the code with commentary so frequently, here is the
entire source listing from start to finish:
#include "../common/book.h"
#include "../common/cpu_bitmap.h"
#define DIM 1000
struct cuComplex {
float r;
float i;
cuComplex( float a, float b ) : r(a), i(b) {}
__device__ float magnitude2( void ) {
return r * r + i * i;
}
__device__ cuComplex operator*(const cuComplex& a) {
return cuComplex(r*a.r - i*a.i, i*a.r + r*a.i);
}
__device__ cuComplex operator+(const cuComplex& a) {
return cuComplex(r+a.r, i+a.i);
}
};
__device__ int julia( int x, int y ) {
const float scale = 1.5;
float jx = scale * (float)(DIM/2 - x)/(DIM/2);
float jy = scale * (float)(DIM/2 - y)/(DIM/2);
cuComplex c(-0.8, 0.156);
cuComplex a(jx, jy);
int i = 0;
for (i=0; i<200; i++) {
a = a * a + c;
if (a.magnitude2() > 1000)
return 0;
}
return 1;
}
__global__ void kernel( unsigned char *ptr ) {
// map from threadIdx/BlockIdx to pixel position
int x = blockIdx.x;
int y = blockIdx.y;
int offset = x + y * gridDim.x;
// now calculate the value at that position
int juliaValue = julia( x, y );
ptr[offset*4 + 0] = 255 * juliaValue;
ptr[offset*4 + 1] = 0;
ptr[offset*4 + 2] = 0;
ptr[offset*4 + 3] = 255;
}
int main( void ) {
CPUBitmap bitmap( DIM, DIM );
unsigned char *dev_bitmap;
HANDLE_ERROR( cudaMalloc( (void**)&dev_bitmap,
bitmap.image_size() ) );
dim3 grid(DIM,DIM);
kernel<<<grid,1>>>( dev_bitmap );
HANDLE_ERROR( cudaMemcpy( bitmap.get_ptr(), dev_bitmap,
bitmap.image_size(),
cudaMemcpyDeviceToHost ) );
bitmap.display_and_exit();
HANDLE_ERROR( cudaFree( dev_bitmap ) );
}
When you run the application, you should see a visualization of the
Julia Set. To convince you that it has earned the title "A Fun Example," Figure 4.2
shows a screenshot taken from this application.
Figure 4.2 A screenshot from the GPU Julia Set application
4.3 Chapter Review
Congratulations, you can now write, compile, and run massively parallel code
on a graphics processor! You should go brag to your friends. And if they are still
under the misconception that GPU computing is exotic and difficult to master,
they will be most impressed. The ease with which you accomplished it will be
our secret. If they’re people you trust with your secrets, suggest that they buy the
book, too.
We have so far looked at how to instruct the CUDA runtime to execute multiple
copies of our program in parallel on what we called blocks. We called the collec-
tion of blocks we launch on the GPU a grid. As the name might imply, a grid can
be either a one- or two-dimensional collection of blocks. Each copy of the kernel
can determine which block it is executing with the built-in variable blockIdx.
Likewise, it can determine the size of the grid by using the built-in variable
gridDim. Both of these built-in variables proved useful within our kernel to
calculate the data index for which each block is responsible.
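Compressed into one skeleton, the two built-ins work together like this (a summary sketch of the chapter's pattern, not new book code):
__global__ void kernel( unsigned char *ptr ) {
    int x = blockIdx.x;                  // which block is this? (column)
    int y = blockIdx.y;                  // which block is this? (row)
    int offset = x + y * gridDim.x;      // gridDim holds the launched grid size
    // ... each block now works on its own element at 'offset'
}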