Running OpenACC Programs on NVIDIA & AMD GPUs

Join us for this webinar on Running OpenACC Programs on NVIDIA & AMD GPUs. Date: Thursday, December 12, 2013, 2:00 PM ET / 11:00 AM PT / 18:00 GMT (Duration: 1 hour).


Date: Thursday, December 12, 2013
Time: 2:00 PM ET / 11:00 AM PT / 18:00 GMT (Duration: 1 hour)

Mainstream supercomputers and clusters commonly use a 64-bit x86 host processor, and they now often include one or more accelerators per node; the most common accelerators today are GPUs. These compute accelerators exploit a high degree of parallelism in an application or algorithm to maximize performance and power efficiency.

There are several challenges to effective and productive use of accelerators. These include managing data placement and movement, and expressing parallelism in a form that can be mapped efficiently onto the target hardware. Another challenge, and our focus in this presentation, is designing algorithms and data structures that benefit not only the accelerators of today but also future accelerator-based systems, without significant re-design or re-tuning.

We describe the latest features of current and near-future accelerators and summarize the current state of programming such systems, including CUDA, OpenCL, OpenACC, and OpenMP. The bulk of the presentation will focus on OpenACC using the PGI Accelerator compilers. Using currently available GPU accelerators from NVIDIA and AMD, we will explore:

- how to determine when a region of code is suitable for an accelerator;
- managing data allocation and traffic between host and accelerator memories;
- appropriate data structures for use on accelerators;
- building programs for NVIDIA and AMD GPUs; and
- finally, building a single program that will run on either GPU, or on the host itself.

Speaker: Michael Wolfe

Michael Wolfe is a compiler engineer at The Portland Group, where he works on deep compiler analysis and optimizations. He has published a textbook, "High Performance Compilers for Parallel Computing"; a monograph, "Optimizing Supercompilers for Supercomputers"; and many technical papers and articles.

Webinar URL: http://www.computer.org/portal/web/webinars/Register-for-a-Webinar

Sponsored by IEEE Computer Society

About IEEE Computer Society
IEEE Computer Society is the world's leading computing membership organization and the trusted information and career-development source for a global workforce of technology leaders, including professors, researchers, software engineers, IT professionals, employers, and students. An unmatched source for technology information, inspiration, and collaboration, the IEEE Computer Society is trusted by computing professionals to provide high-quality, state-of-the-art information on an on-demand basis. The Computer Society provides a wide range of forums for top minds to come together, including technical conferences, publications, a comprehensive digital library, unique training webinars, professional training, the TechLeader Training Partner Program to help organizations increase their staff's technical knowledge and expertise, and the personalized information tool myComputer. To find out more about the community for technology leaders, visit http://www.computer.org.

REGISTER NOW

©2013 IEEE Computer Society
