
Achieving Improved Performance In Multi-threaded Programming With GPU Computing



  1. Achieving Improved Performance In Multi-threaded Programming With GPU Computing
     CSE 4120: Technical Writing & Seminar
     MD. Mesbah Uddin Khan []
     July 13, 2011
  2. Things we need to know
     - CPU
     - GPU
     - Threads
     - Multi-threaded Programming
     - Parallel Programming
     - OpenCL
     - GPU Computing
  3. Introduction
     - Using graphics hardware to accelerate standard CPU-based desktop applications is a question not only of programming models, but also of the critical optimizations required to achieve real performance improvements.
  4. Hardware Trends
     - Two major hardware trends make parallel programming a crucial issue for all software engineers today:
       - The rise of many-core CPU architectures
       - The inclusion of powerful graphics processing units (GPUs)
  5. Central Processing Unit (CPU)
     - CPUs support parallel programming: threads are assigned to different tasks and their activities are coordinated.
     - Newer programming models also consider the non-uniform memory architecture (NUMA) of modern desktop systems.
     - These models rely on the underlying concept of parallel threads.
     - CPU hardware is optimized for such coarse-grained, task-parallel programming with synchronized shared memory.
  6. Graphics Processing Unit (GPU)
     - GPUs are mainly designed for fine-grained, data-parallel computation.
     - Graphics processing is an embarrassingly parallel problem.
     - GPU hardware is optimized for heavy, throughput-oriented loads.
     - It aims at combining a maximum number of simple parallel processing elements, each having only a small amount of local memory.
     - For example, the Nvidia GeForce GTX 480 graphics card supports up to 1,536 GPU threads on each of its 15 compute units. At full operational capacity, it can therefore run 23,040 parallel execution streams.
  7. From CPU to GPU
     - Applications running on a computer access GPU resources through a control API implemented in user-mode libraries and the graphics card driver.
     - Leading GPU vendors jointly defined the Open Computing Language (OpenCL), a vendor-neutral way of accessing compute resources.
  8. Example Application – Sudoku (1/2)
     - A Sudoku field typically consists of 3x3 subfields, each having 3x3 places. Three facts make this problem a representative example of algorithms suited to GPU execution:
       - Data validation is the primary application task,
       - The computational effort grows with the game field's size (i.e., the problem size),
       - The workload belongs to the GPU-friendly class of embarrassingly parallel problems that have only a very small serial execution portion.
  9. Example Application – Sudoku (2/2)
     Fig: Execution time of the Sudoku validation on different compute devices, for
     (a) problem sizes of 10,000 to 50,000 possible Sudoku places and
     (b) problem sizes of 100,000 to 700,000 Sudoku places.
  10. Best CPU-GPU Practices
     - Pushing the performance of GPU-enabled desktop applications even further requires fine-grained tuning of data placement and of parallel activities on the GPU card:
       - Algorithm Design
       - Memory Transfer
       - Control Flow
       - Memory Types
       - Memory Access
       - Sizing
       - Instructions
       - Precision
  11. Developer Support
     - Vendors such as Nvidia and AMD offer software development kits with C compilers for Windows- and Linux-based systems.
     - Developers can also use specialized libraries, such as AMD's Core Math Library and Nvidia's libraries for basic linear algebra subroutines and fast Fourier transforms.
     - Nvidia and AMD also maintain large knowledge bases with tutorials, examples, articles, use cases, and developer forums on their websites.
  12. Concluding Remarks
     - The GPU market continues to evolve quickly.
     - Nvidia has already started distinguishing between GPU computing on ordinary graphics cards and as a sole-purpose activity on processors such as its Tesla series.
     - Higher-level languages like Java and C# can also benefit from GPU computing through GPU-based libraries.
  13. References
     - Frank Feinbube, Peter Tröger, and Andreas Polze, "Joint Forces: From Multithreaded Programming to GPU Computing," IEEE Software, Jan./Feb. 2011.
     - Advanced Micro Devices, ATI Stream Computing OpenCL Programming Guide, June 2010.
     - Nvidia, OpenCL Best Practices Guide, Version 2.3, August 2009.
  14. Thank you all…