GPU Programming
Introduction to high performance computing with graphics cards

Published in: Technology

Transcript

  • 1. CPU Architecture
    • good for serial programs
    • does many different things well
    • many transistors devoted to purposes other than ALUs (e.g. flow control and caching)
    • memory access is slow (~1 GB/s)
    • switching threads is slow
    Image from Alex Moore, “Introduction to Programming in CUDA”, http://astro.pas.rochester.edu/~aquillen/gpuworkshop.html
  • 2. GPU Architecture
    • many processors perform similar operations on a large data set in parallel (single-instruction, multiple-data parallelism)
    • recent GPUs have around 30 multiprocessors, each containing 8 stream processors
    • GPUs devote most (~80%) of their transistors to ALUs
    • fast memory (~80 GB/s)
    [Diagram: die area dominated by ALUs, with small control and cache regions]
  • 3. Memory Hierarchy Image from Johan Seland, “CUDA Programming”, http://heim.ifi.uio.no/~knutm/geilo2008/seland.pdf
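    The hierarchy in the figure maps directly onto CUDA variable qualifiers. A minimal sketch (not from the slides; the kernel name and sizes are illustrative) of how each memory space appears in code:

    ```cuda
    // Constant memory: cached, read-only from device code.
    __constant__ float coeff[16];

    __global__ void memorySpaces(const float *in, float *out)
    {
        // Shared memory: fast on-chip storage, visible to one block.
        __shared__ float tile[256];

        // Local scalars live in per-thread registers.
        int i = blockIdx.x * blockDim.x + threadIdx.x;

        tile[threadIdx.x] = in[i];   // read from slow global memory once
        __syncthreads();             // wait until the whole block has loaded

        out[i] = tile[threadIdx.x] * coeff[0];  // shared + constant -> global
    }
    ```

    The usual pattern is to stage data from global memory into shared memory, synchronize, and then reuse it from the fast on-chip copy.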
  • 4. Thread Hierarchy
    • a block of threads runs on a single multiprocessor
    • a grid of blocks makes up the entire set of threads for a kernel launch
    • all threads in a block can access the same shared memory
    • many more threads than processors
    Image from Johan Seland, “CUDA Programming”, http://heim.ifi.uio.no/~knutm/geilo2008/seland.pdf
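    The grid/block hierarchy above is what each thread sees through the built-in variables `blockIdx`, `blockDim`, and `threadIdx`. A short sketch (kernel name and launch sizes are hypothetical):

    ```cuda
    // Each thread combines its block index and thread index into a
    // unique global index, 0 .. gridDim.x * blockDim.x - 1.
    __global__ void whoAmI(int *globalIndex)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        globalIndex[i] = i;
    }

    // Launch far more threads than physical processors,
    // e.g. 64 blocks of 256 threads = 16384 threads:
    //   whoAmI<<<64, 256>>>(d_index);
    ```

    Oversubscribing the hardware like this is intentional: the GPU hides memory latency by swapping in other threads while one waits.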
  • 5. CUDA
    • a set of C extensions for running programs on a GPU
    • Windows, Linux, and Mac; Nvidia cards only
    • http://www.nvidia.com/object/cuda_home.html
    • relatively easy to convert algorithms to CUDA: look for loops that do the same calculation on an entire array
    • gives you direct access to the memory architecture
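    The loop-conversion recipe above can be sketched with vector addition (my example, not from the slides; the device pointers `d_a`, `d_b`, `d_c` are assumed to have been allocated with `cudaMalloc` and filled with `cudaMemcpy`):

    ```cuda
    #include <cuda_runtime.h>

    // Serial loop: the same calculation on every element of an array...
    void addSerial(const float *a, const float *b, float *c, int n)
    {
        for (int i = 0; i < n; ++i)
            c[i] = a[i] + b[i];
    }

    // ...becomes a kernel where each thread handles one element.
    __global__ void addKernel(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)                  // guard: the grid may be larger than n
            c[i] = a[i] + b[i];
    }

    // Host-side launch: round the block count up so every element is covered.
    //   int threads = 256;
    //   int blocks  = (n + threads - 1) / threads;
    //   addKernel<<<blocks, threads>>>(d_a, d_b, d_c, n);
    ```

    The loop index simply becomes the thread's global index; the bounds check replaces the loop condition.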
  • 6. Results Image from Kevin Dale, “A Graphics Hardware-Accelerated Real-Time Processing Pipeline for Radio Astronomy”, Presented at AstroGPU, Nov 2007. (for tasks relevant to the MWA Real Time System)
