The aim of this seminar is to provide students with basic knowledge of developing applications for processors with massively parallel computing resources. In general, we refer to a processor as massively parallel if it can complete more than 64 arithmetic operations per clock cycle. Graphics processing units (GPUs) fall into this category, and other massively parallel architectures are emerging. Programming these processors effectively requires in-depth knowledge of parallel programming principles, as well as of the parallelism models, communication models, memory hierarchy, and resource limitations of these processors. We will also survey some tools that reduce the initial difficulties of CUDA programming.
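As a first taste of the programming model the seminar covers, the sketch below shows a minimal CUDA vector-addition program; the kernel name, array size, and launch configuration are illustrative assumptions rather than material from the seminar itself. Each output element is computed by its own GPU thread, and the host code must manage device memory and host-device copies explicitly, which hints at the memory-hierarchy and resource issues discussed above.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each thread computes one element of c = a + b.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;                 // illustrative problem size
    const size_t bytes = n * sizeof(float);

    // Allocate and initialize host data.
    float *h_a = (float *)malloc(bytes);
    float *h_b = (float *)malloc(bytes);
    float *h_c = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Allocate device memory and copy the inputs to the GPU.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // Launch one thread per element, grouped into blocks of 256 threads.
    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vecAdd<<<blocks, threadsPerBlock>>>(d_a, d_b, d_c, n);

    // Copy the result back to the host and spot-check one element.
    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", h_c[0]);  // expected: 3.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(h_a); free(h_b); free(h_c);
    return 0;
}
```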