OpenSUSE Conference 2019 - Building GPU aware containers


How to build a container based on openSUSE Leap, GPU-enabled, to install and run machine learning frameworks leveraging Jupyter notebooks.


  1. SUSE Developer Program for Data Scientists. Marco Varlese, Developers Program Architect (marco.varlese@suse.com); Alessandro Festa, Sr. Product Manager, Accelerators & Artificial Intelligence (alessandro.festa@suse.com, @bringyourownid)
  2. In this session… You will learn… Why a Dev Program for Data Scientists… Containers, GPUs, tricks and… …how about some juggling?
  3. So what about the Dev Program…
  4. Why a GPU-aware container
     • Technical needs:
       • Machine Learning and Deep Learning need high computational power.
       • It is not only about GPUs, but that is where the market is right now (see next slide): roughly 90% of users/customers.
       • Machine Learning frameworks in a container are the way to go: simple to use for a non-technical person, easy to deploy, easy to "transport" (from on-prem to cloud and back).
     • Challenges:
       • NVIDIA drivers are not open source, so they cannot be shipped with Leap/Tumbleweed (no OBS); we (as a community) need to find a solution to make users' lives easier.
       • nvidia-docker (from NVIDIA) and CUDA are required.
       • Docker images for Machine Learning frameworks are HUGE (over 3 GB).
     Wait wait… what are you talking about? NVIDIA what?
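One way around the "drivers cannot ship via OBS" challenge is to add NVIDIA's own openSUSE repository on the host. A hedged sketch (the repository URL follows NVIDIA's published layout for Leap 15.x; the exact driver package to pick depends on your GPU and release):

zypper addrepo --refresh https://download.nvidia.com/opensuse/leap/15.1 NVIDIA
zypper --gpg-auto-import-keys refresh
# list the driver packages in that repository and pick the one matching your GPU (e.g. the G05 series)
zypper se -r NVIDIA nvidia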
  5. You are here
  6. Mandatory requirements: "Make sure you have installed the NVIDIA driver and a supported version of Docker for your distribution"
     • GNU/Linux x86_64 with kernel version > 3.10
     • Docker >= 1.12
     • NVIDIA GPU with architecture > Fermi (2.1)
     • NVIDIA drivers ~= 361.93 (untested on older versions)
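A minimal sketch of how to verify these prerequisites on the host (standard commands; output formats vary by distribution):

uname -r          # kernel must be newer than 3.10
docker --version  # Docker must be >= 1.12
nvidia-smi        # confirms the NVIDIA driver is loaded and reports the GPU model and driver version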
  7. CUDA toolkit version | Driver version          | GPU architecture
     6.5                  | >= 340.29               | >= 2.0 (Fermi)
     7.0                  | >= 346.46               | >= 2.0 (Fermi)
     7.5                  | >= 352.39               | >= 2.0 (Fermi)
     8.0                  | == 361.93 or >= 375.51  | == 6.0 (P100)
     8.0                  | >= 367.48               | >= 2.0 (Fermi)
     9.0                  | >= 384.81               | >= 3.0 (Kepler)
     9.1                  | >= 387.26               | >= 3.0 (Kepler)
     9.2                  | >= 396.26               | >= 3.0 (Kepler)
     10.0                 | >= 384.111, < 385.00    | Tesla GPUs
     10.0                 | >= 410.48               | >= 3.0 (Kepler)
     10.1                 | >= 384.111, < 385.00    | Tesla GPUs
     10.1                 | >= 410.72, < 411.00     | Tesla GPUs
     10.1                 | >= 418.39               | >= 3.0 (Kepler)
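To find which row of the table applies to a given host, the installed driver version and GPU name can be queried directly:

nvidia-smi --query-gpu=driver_version,name --format=csv,noheader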
  8. Where to start: NVIDIA container matrix: https://docs.nvidia.com/deeplearning/dgx/support-matrix/index.html#framework-matrix-2019
  9. Where to start: NVIDIA GitLab: https://gitlab.com/nvidia
  10. Where to start: NVIDIA on Docker Hub: https://hub.docker.com/r/nvidia/cuda/ CUDA images come in three flavors:
      • base: starting from CUDA 9.0, contains the bare minimum (libcudart) to deploy a pre-built CUDA application. Use this image if you want to manually select which CUDA packages you want to install.
      • runtime: extends the base image by adding all the shared libraries from the CUDA toolkit. Use this image if you have a pre-built application using multiple CUDA libraries.
      • devel: extends the runtime image by adding the compiler toolchain, the debugging tools, the headers and the static libraries. Use this image to compile a CUDA application from sources.
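As an illustration, pulling one image of each flavor and running a quick smoke test might look like this (the 10.1 tags are examples; pick the tag that matches your driver, see the table on slide 7):

docker pull nvidia/cuda:10.1-base
docker pull nvidia/cuda:10.1-runtime
docker pull nvidia/cuda:10.1-devel

# the base flavor is enough to run nvidia-smi against the host driver
docker run --runtime=nvidia --rm nvidia/cuda:10.1-base nvidia-smi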
  11. Challenges (recap)
      • The HOST requires nvidia-docker v2 installed (GitHub pull request waiting to be merged: https://github.com/NVIDIA/nvidia-docker/pull/790); we are working on it (thanks to Darren Davis, our TAM, for pushing NVIDIA!).
      • cuDNN and CUDA require license acceptance by the user, so they cannot easily be delivered as a SUSE package; Partner Hub to the rescue! In containers they may be installed silently using an explicit variable (e.g. -e ACCEPT_EULA=Y).
      • Some dependencies are missing in SLE but not in openSUSE when installing CUDA directly from the NVIDIA repo; as an alternative we may use the CUDA script.
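For example, accepting the license non-interactively at container start could look like this; the image name is hypothetical, and ACCEPT_EULA is the variable named on the slide (it is honored by the image's own install script, not by Docker itself):

docker run --runtime=nvidia --rm -e ACCEPT_EULA=Y my-leap-ml-image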
  12. Both packages seem optional to me. Do we need the samples? Maybe. Do we need the X11 driver in a container? I would say it depends… Both are published as openSUSE packages.
  13. Result: NVIDIA variables (mandatory); steps only needed if you run the NVIDIA CUDA script (optional?); the actual install steps (these are for a TensorFlow base).
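A minimal sketch of what such a build could look like, assuming an openSUSE Leap base image and the standard nvidia-docker v2 environment variables; package choices, versions and the image name are illustrative, and the CUDA/cuDNN script step mentioned on the slide is omitted:

cat > Dockerfile <<'EOF'
FROM opensuse/leap:15.1

# NVIDIA variables (mandatory for the nvidia-docker v2 runtime)
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility

# Actual install steps (illustrative TensorFlow base; CUDA/cuDNN installation not shown)
RUN zypper --non-interactive install python3 python3-pip && \
    pip3 install tensorflow-gpu jupyter
EOF

docker build -t leap-ml-gpu .
docker run --runtime=nvidia --rm -p 8888:8888 leap-ml-gpu \
    jupyter notebook --ip=0.0.0.0 --no-browser --allow-root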
  14. DEMO TIME
  15. But the container is not enough… You're a Data Scientist, not a SysAdmin/DevOps.
  16. AI use cases (for openSUSE): can customers do it alone?
      • Data Scientist:
        • Run an experiment with different coefficients and summarize the results
        • Work "local" first
        • Create a "template" and re-apply it to a production-ready environment
      • Machine Learning Engineer:
        • Write code based on dataset samples
        • Work either "local" or "remote" connected
        • Need to re-test (QA) code on a different environment
  17. A simple Data Scientist playground (architecture diagram): openSUSE Leap or Kubic, deployed as containers/VMs, on-prem or in the cloud. Workflow: launch notebook, choose use case/ML framework, use the playground. The Data Scientist chooses but does not see the complexity.
  18. DEMO TIME
  19. So to recap… and to learn something new…
