
Using Docker for GPU-accelerated Applications by Felix Abecassis and Jonathan Calmels

In addition to being used for visualization, the highly parallel architecture of GPUs also makes them a natural fit for accelerating data-parallel and throughput-oriented computations such as machine learning or numerical simulations. When GPU applications are deployed inside data centers, they suffer from the same packaging issues as CPU applications, aggravated by a strong need for reproducible performance results.
The Docker ecosystem is mostly CPU-centric and aims to be hardware-agnostic. This is not the case for GPU applications, since specialized hardware and a specific kernel device driver are required. We will show how we reconciled these seemingly opposed requirements to enable the containerization and execution of GPU applications with Docker.

Using Docker for GPU-accelerated Applications by Felix Abecassis and Jonathan Calmels

  1. 1. Using Docker for GPU-accelerated applications. Felix Abecassis, Systems Software Engineer, NVIDIA; Jonathan Calmels, Systems Software Engineer, NVIDIA
  2. 2. Agenda: GPU Computing (CUDA, Ecosystem, Applications); NVIDIA Docker (Challenges, Our solution); Demos (GPU isolation, Machine Learning, Remote deployment)
  3. 3. GPU Computing nvidia.com/object/gpu-accelerated-computing.html
  4. 4. Heterogeneous Computing: the CPU is optimized for serial tasks, the GPU is optimized for parallel tasks
  5. 5. CUDA C++ Programming (host-side launch sketch after the transcript)
        // Vector sum in C
        void vector_add(int n, const float* a, const float* b, float* c) {
            for (int idx = 0; idx < n; ++idx)
                c[idx] = a[idx] + b[idx];
        }
        // Vector sum in CUDA
        __global__ void vector_add(int n, const float* a, const float* b, float* c) {
            int idx = blockIdx.x * blockDim.x + threadIdx.x;
            if (idx < n)
                c[idx] = a[idx] + b[idx];
        }
  6. 6. Ecosystem: Libraries (AmgX, cuBLAS), Compiler Directives, Programming Languages, x86
  7. 7. NVIDIA SDKs
  8. 8. Applications: Deep Learning. Internet & Cloud: Image Classification, Speech Recognition, Language Translation, Language Processing, Sentiment Analysis, Recommendation. Media & Entertainment: Video Captioning, Video Search, Real-Time Translation. Autonomous Machines: Pedestrian Detection, Lane Tracking, Traffic Sign Recognition. Security & Defense: Face Detection, Video Surveillance, Satellite Imagery. Medicine & Biology: Cancer Cell Detection, Diabetic Grading, Drug Discovery.
  9. 9. GPU-Accelerated Deep Learning: Watson, Theano, MatConvNet, TensorFlow, CNTK, Torch, Caffe, Chainer
  10. 10. NVIDIA Docker github.com/NVIDIA/nvidia-docker
  11. 11. Challenges: a typical cluster
  12. 12. Packaging driver files? (a portable alternative is sketched after the transcript)
        FROM ubuntu:14.04
        RUN apt-get update && apt-get install --no-install-recommends -y gcc make libc-dev wget
        RUN wget http://us.download.nvidia.com/XFree86/Linux-x86_64/361.42/NVIDIA-Linux-x86_64-361.42.run
        RUN sh NVIDIA-Linux-x86_64-361.42.run --silent --no-kernel-module
        Never install the driver in the Dockerfile, not portable!
  13. 13. Bringing GPU support to Docker (usage sketch after the transcript)
  14. 14. Internals (equivalent docker command sketched after the transcript)
  15. 15. DockerHub images (example pulls after the transcript)
  16. 16. GPU Applications Workflow: Research/Develop, Test/Package, Deploy. Container-based applications on a GPU-accelerated data center: Video Transcoding, Image Processing, Deep Learning, HPC, Visualization
  17. 17. Demos: interactive, ask questions! (GPU isolation sketch after the transcript)
  18. 18. Thank you!
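
For the CUDA kernel on slide 5, here is a minimal host-side launch sketch. It is illustrative only: the block size, the helper name run_vector_add, and the memory management are not taken from the slides, and the kernel is assumed to live in the same source file.

    // Host code that allocates device memory, copies data, and launches vector_add
    #include <cuda_runtime.h>

    void run_vector_add(int n, const float* a, const float* b, float* c) {
        float *d_a, *d_b, *d_c;
        size_t bytes = n * sizeof(float);
        cudaMalloc(&d_a, bytes);                              // device buffers
        cudaMalloc(&d_b, bytes);
        cudaMalloc(&d_c, bytes);
        cudaMemcpy(d_a, a, bytes, cudaMemcpyHostToDevice);    // copy inputs to the GPU
        cudaMemcpy(d_b, b, bytes, cudaMemcpyHostToDevice);
        int threads = 256;                                    // threads per block
        int blocks = (n + threads - 1) / threads;             // enough blocks to cover n elements
        vector_add<<<blocks, threads>>>(n, d_a, d_b, d_c);    // kernel launch
        cudaMemcpy(c, d_c, bytes, cudaMemcpyDeviceToHost);    // copy the result back
        cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    }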
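
For slide 12, a sketch of the portable alternative: keep the image driver-free, build against the CUDA toolkit only, and let nvidia-docker provide the host's driver at run time. The base-image tag and the vector_add.cu file are illustrative assumptions, not taken from the slides.

    # No driver files in the image; the host's driver libraries are mounted
    # by nvidia-docker at run time, so the same image works on hosts with
    # different driver versions.
    FROM nvidia/cuda:7.5-devel
    COPY vector_add.cu /src/
    RUN nvcc -O2 -o /usr/local/bin/vector_add /src/vector_add.cu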
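
For slides 13 and 14, a usage sketch with the nvidia-docker 1.0 wrapper from github.com/NVIDIA/nvidia-docker, followed by a rough approximation of the plain docker command it sets up. The device nodes and the driver-volume name vary per host; the 361.42 version simply matches the driver shown on slide 12 and is an assumption here.

    # The wrapper adds the NVIDIA devices and the driver volume, then calls docker
    nvidia-docker run --rm nvidia/cuda nvidia-smi

    # Roughly equivalent plain docker invocation
    docker run --rm \
        --device=/dev/nvidiactl --device=/dev/nvidia-uvm --device=/dev/nvidia0 \
        --volume-driver=nvidia-docker \
        --volume=nvidia_driver_361.42:/usr/local/nvidia:ro \
        nvidia/cuda nvidia-smi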
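
For slide 15, example pulls from Docker Hub. The tags and the nvidia/digits image are given as examples of the images published alongside nvidia-docker at the time, not an exhaustive or authoritative list.

    docker pull nvidia/cuda                 # CUDA toolkit image
    docker pull nvidia/cuda:7.5-runtime     # smaller runtime-only variant
    docker pull nvidia/digits               # framework image built on top of nvidia/cuda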
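
For the GPU isolation demo listed in the agenda and on slide 17, a sketch using the NV_GPU environment variable understood by nvidia-docker 1.0; the device index is illustrative.

    # Expose only GPU 0 to the container; nvidia-smi inside reports a single device
    NV_GPU=0 nvidia-docker run --rm nvidia/cuda nvidia-smi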
