
CUDA DLI Training Courses at GTC 2019

Check out these DLI training courses at GTC 2019 designed for developers, data scientists & researchers looking to solve the world’s most challenging problems with accelerated computing.

CUDA DLI Training Courses at GTC 2019

  1. CUDA DLI TRAINING SESSIONS AT GTC 2019
  2. EXPECTED TO BE THE BIGGEST YET, GTC FEATURES SESSIONS AND DLI TRAINING ON THE MOST IMPORTANT TOPICS IN COMPUTING TODAY
  3. WHY DLI HANDS-ON TRAINING? ● LEARN HOW TO BUILD APPS ACROSS INDUSTRY SEGMENTS ● GET HANDS-ON EXPERIENCE USING INDUSTRY-STANDARD SOFTWARE, TOOLS & FRAMEWORKS ● GAIN EXPERTISE THROUGH CONTENT DESIGNED WITH INDUSTRY LEADERS
  4. FUNDAMENTALS OF ACCELERATED COMPUTING WITH CUDA PYTHON This course explores how to use Numba—the just-in-time, type-specializing Python function compiler—to accelerate Python programs to run on massively parallel NVIDIA GPUs. You’ll learn how to: ● Use Numba to compile CUDA kernels from NumPy universal functions (ufuncs) ● Use Numba to create and launch custom CUDA kernels ● Apply key GPU memory management techniques ADD TO MY SCHEDULE (see the Numba ufunc sketch after the slide list)
  5. ACCELERATING APPLICATIONS WITH CUDA C/C++ The CUDA computing platform enables acceleration of CPU-only applications to run on the world's fastest massively parallel GPUs. Learn how to accelerate C/C++ applications by: ● Exposing parallelism in CPU-only applications and refactoring them to run in parallel on GPUs ● Successfully managing memory ● Utilizing the CUDA parallel thread hierarchy to further increase performance ADD TO MY SCHEDULE (see the thread-hierarchy sketch after the slide list)
  6. CUDA ON DRIVE AGX Explore how to write CUDA code and run it on DRIVE AGX. You'll learn about: ● Hardware architecture of DRIVE AGX ● Memory management of the iGPU and dGPU ● GPU acceleration for inferencing ADD TO MY SCHEDULE
  7. ACCELERATING DATA SCIENCE WORKFLOWS WITH RAPIDS The open source RAPIDS project allows data scientists to GPU-accelerate their data science and data analytics applications from beginning to end, creating possibilities for drastic performance gains and techniques not available through traditional CPU-only workflows. Learn how to GPU-accelerate your data science applications by: ● Utilizing key RAPIDS libraries like cuDF & cuML ● Learning techniques and approaches to end-to-end data science ● Understanding key differences between CPU-driven and GPU-driven data science ADD TO MY SCHEDULE (see the cuDF/cuML sketch after the slide list)
  8. DEBUGGING AND OPTIMIZING CUDA APPLICATIONS WITH NSIGHT PRODUCTS ON LINUX TRAINING Learn how NVIDIA tools can improve development productivity by narrowing down bugs and spotting areas of optimization in CUDA applications on a Linux x86_64 system. Through a set of exercises, you'll gain hands-on experience using NVIDIA's new Nsight Systems and Nsight Compute tools for debugging, narrowing down memory issues, and optimizing a CUDA application. ADD TO MY SCHEDULE (see the NVTX annotation sketch after the slide list)
  9. ACCELERATED DATA SCIENCE PIPELINE WITH RAPIDS ON AZURE Learn how to deploy RAPIDS machine learning jobs on NVIDIA GPUs using Microsoft Azure and explore: ● Azure Portal: permits a convenient way to perform functional experimentation with RAPIDS ● Azure Machine Learning (AML) SDK: enables a batch experimentation mode where the user can set ranges on different parameters to be run in a RAPIDS program, saving the results for later analysis ADD TO MY SCHEDULE
  10. HIGH PERFORMANCE COMPUTING USING CONTAINERS Learn to build, deploy, and run containers in an HPC environment. During this session, you will learn: the basics of building container images with Docker and Singularity, how to use HPC Container Maker (HPCCM) to make it easier to build container images for HPC applications, and how to use containers from NGC with Singularity. ADD TO MY SCHEDULE (see the HPCCM recipe sketch after the slide list)
  11. INTRODUCTION TO CUDA PYTHON WITH NUMBA Explore an introduction to Numba, a just-in-time function compiler that allows developers to utilize the CUDA platform in Python applications. You'll learn how to: ● Decorate Python functions to be compiled by Numba ● Use Numba to GPU-accelerate NumPy ufuncs ADD TO MY SCHEDULE
  12. CUDA PROGRAMMING IN PYTHON WITH NUMBA AND CUPY Combining Numba, an open source compiler that can translate Python functions for execution on the GPU, with the CuPy GPU array library, a nearly complete implementation of the NumPy API for CUDA, creates a high-productivity GPU development environment. Learn the basics of using Numba with CuPy, techniques for automatically parallelizing custom Python functions on arrays, and how to create and launch CUDA kernels entirely from Python. ADD TO MY SCHEDULE (see the Numba + CuPy sketch after the slide list)
  13. REGISTER TODAY FOR GTC AND EXPLORE THE FULL LIST OF CUDA TRAINING, TALKS & EXPERT SESSIONS LEARN MORE
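
The sketches that follow are illustrative only, not DLI course material. Each assumes the named library is installed alongside a CUDA-capable GPU and driver, and all function, variable, and file names are invented for the example. First, the slide 4 topics: a NumPy universal function compiled for the GPU with Numba, plus explicit device-memory management.

```python
# Minimal sketch of the slide 4 topics, assuming numba and numpy are installed
# and a CUDA-capable GPU is present.
import numpy as np
from numba import vectorize, cuda

# Compile a NumPy-style universal function (ufunc) into a CUDA kernel.
@vectorize(['float32(float32, float32)'], target='cuda')
def gpu_add(a, b):
    return a + b

n = 1_000_000
a = np.arange(n, dtype=np.float32)
b = np.arange(n, dtype=np.float32)

# Key memory-management step: stage the inputs on the device once and keep the
# output there, rather than letting every call copy data back and forth.
d_a = cuda.to_device(a)
d_b = cuda.to_device(b)
d_out = cuda.device_array_like(a)

gpu_add(d_a, d_b, out=d_out)     # executes on the GPU
result = d_out.copy_to_host()    # explicit copy back to the host
print(result[:5])
```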
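
The slide 5 course teaches these ideas in CUDA C/C++. To keep every sketch in one language, the same thread-hierarchy pattern (blockIdx, blockDim, threadIdx, and a grid-stride loop) is shown here in Numba's CUDA Python dialect, so read it as an analogue of the C/C++ material rather than the course's own code.

```python
# Grid-stride loop sketch in Numba CUDA Python, mirroring the CUDA C/C++
# thread-hierarchy pattern described on slide 5.
import numpy as np
from numba import cuda

@cuda.jit
def scale(out, x, factor):
    # Global thread index built from the block/thread hierarchy.
    start = cuda.blockIdx.x * cuda.blockDim.x + cuda.threadIdx.x
    stride = cuda.gridDim.x * cuda.blockDim.x
    # Grid-stride loop: correct for any array size and launch configuration.
    for i in range(start, x.size, stride):
        out[i] = factor * x[i]

x = np.arange(1 << 20, dtype=np.float32)
out = np.zeros_like(x)

threads_per_block = 256
blocks_per_grid = (x.size + threads_per_block - 1) // threads_per_block
scale[blocks_per_grid, threads_per_block](out, x, np.float32(2.0))
print(out[:4])  # [0. 2. 4. 6.]
```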
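
For slide 7, a small end-to-end sketch with the two RAPIDS libraries named on the slide: cuDF for a GPU DataFrame with a pandas-like API, and cuML for a scikit-learn-style estimator that runs on the GPU. The toy data and column names are assumptions.

```python
# Assumes cudf and cuml are installed (e.g. via the RAPIDS conda packages).
import cudf
from cuml.linear_model import LinearRegression

# cuDF: a GPU DataFrame with a pandas-like API.
df = cudf.DataFrame({
    "x": [1.0, 2.0, 3.0, 4.0, 5.0],
    "y": [2.1, 3.9, 6.2, 8.0, 9.8],
})

# cuML: scikit-learn-style machine learning on the GPU.
model = LinearRegression()
model.fit(df[["x"]], df["y"])
pred = model.predict(df[["x"]])

print(model.coef_, model.intercept_)
```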
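
Slide 8 centers on the Nsight tools themselves, but one common workflow worth sketching is marking regions of a program with NVTX ranges so they appear on the Nsight Systems timeline. This assumes the nvtx Python package; it is not taken from the course exercises.

```python
# NVTX range annotations; profile the script with something like:
#   nsys profile -o report python app.py
# and the named ranges appear on the Nsight Systems timeline.
import numpy as np
import nvtx

@nvtx.annotate("prepare_data", color="blue")
def prepare_data(n):
    return np.random.rand(n).astype(np.float32)

@nvtx.annotate("compute", color="green")
def compute(x):
    return np.sqrt(x) * 2.0

if __name__ == "__main__":
    with nvtx.annotate("whole_run", color="purple"):
        data = prepare_data(1 << 20)
        result = compute(data)
    print(result[:3])
```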
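
For slide 10, a sketch of an HPC Container Maker (HPCCM) recipe. HPCCM recipes are short Python scripts evaluated by the hpccm command, which provides Stage0 and the building blocks used below; the base-image tag and version numbers are assumptions, not course content.

```python
# HPCCM recipe sketch (recipe.py). Stage0, baseimage, gnu, and openmpi are
# supplied by the hpccm tool when it evaluates this file; the image tag and
# Open MPI version are illustrative placeholders.
Stage0 += baseimage(image='nvcr.io/nvidia/cuda:10.1-devel-ubuntu18.04')
Stage0 += gnu()                     # GNU compiler toolchain
Stage0 += openmpi(version='4.0.0')  # Open MPI built on top of the toolchain

# Turn the recipe into a container spec, then build it:
#   hpccm --recipe recipe.py --format docker      > Dockerfile
#   hpccm --recipe recipe.py --format singularity > Singularity.def
```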
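
Finally, for slide 12, a sketch of the Numba-plus-CuPy combination: CuPy owns the GPU arrays and a Numba-compiled CUDA kernel is launched directly on them through the CUDA array interface. Kernel and variable names are invented for the example.

```python
# Assumes cupy and numba are installed and a CUDA-capable GPU is present.
import cupy as cp
from numba import cuda

@cuda.jit
def saxpy(out, a, x, y):
    i = cuda.grid(1)            # absolute thread index
    if i < x.size:
        out[i] = a * x[i] + y[i]

n = 1 << 20
x = cp.arange(n, dtype=cp.float32)    # CuPy arrays live in GPU memory
y = cp.ones(n, dtype=cp.float32)
out = cp.empty_like(x)

threads = 128
blocks = (n + threads - 1) // threads
saxpy[blocks, threads](out, cp.float32(2.0), x, y)  # Numba kernel on CuPy arrays

print(float(out[:3].sum()))  # follow-up CuPy operations stay on the GPU
```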
