PyTorch - an ecosystem for deep learning with Soumith Chintala (Facebook AI)

Keynote from Soumith Chintala at Spark + AI Summit Europe

  1. PyTorch: an ecosystem for deep learning (Soumith Chintala, Facebook AI)
  2. What is PyTorch? An ndarray library with GPU support, an automatic differentiation engine, and a gradient-based optimization package; a NumPy alternative for deep learning and reinforcement learning, plus utilities (data loading, etc.)
  3. ndarray library: np.ndarray <-> torch.Tensor; 200+ operations, similar to NumPy; very fast acceleration on NVIDIA GPUs
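A minimal sketch of the NumPy-like Tensor API the slide describes, assuming `torch` and `numpy` are installed:

```python
import numpy as np
import torch

# Tensors are created much like NumPy arrays
a = torch.ones(2, 3)                                   # analogous to np.ones((2, 3))
b = torch.arange(6, dtype=torch.float32).reshape(2, 3)

# Familiar elementwise and reduction operations
c = a + b
total = c.sum().item()   # reduce to a plain Python float
```

The same expressions written with `np.ones`, `np.arange`, and `.sum()` produce the same values, which is what makes PyTorch usable as a drop-in NumPy alternative.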
  4. ndarray library: NumPy and PyTorch compared side by side
  5. ndarray / Tensor library
  9. NumPy bridge
  10. NumPy bridge: zero memory-copy, very efficient
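A small sketch of the zero-copy bridge mentioned above, assuming `torch` and `numpy` are installed. `torch.from_numpy` and `Tensor.numpy()` share one underlying buffer, so no data is copied:

```python
import numpy as np
import torch

arr = np.zeros(3, dtype=np.float32)
t = torch.from_numpy(arr)   # shares memory with arr; no copy is made

t[0] = 7.0                  # mutating the tensor also mutates the array

back = t.numpy()            # also zero-copy: same buffer again
```

Because the memory is shared, writes through either view are visible through the other, which is what makes the bridge "very efficient".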
  13. Seamless GPU Tensors
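A hedged sketch of the "seamless" part: the same code runs on GPU or CPU by choosing a `torch.device` once, falling back to CPU when no GPU is present (so this also runs on a machine without CUDA):

```python
import torch

# Pick a GPU when available; otherwise everything below runs on CPU unchanged.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.randn(4, 4, device=device)
y = (x @ x).cpu()   # matmul on the chosen device; result copied back to CPU
```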
  14. Neural Networks
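The neural-network slides are code-only in the deck; a minimal sketch of the `torch.nn` idiom they illustrate (a hypothetical two-layer MNIST-style classifier, chosen here for illustration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    """A small feed-forward classifier: 784 inputs -> 10 class log-probabilities."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        return F.log_softmax(self.fc2(x), dim=1)

model = Net()
out = model(torch.randn(32, 784))   # forward pass on a batch of 32 fake inputs
```

Subclassing `nn.Module` and defining `forward` is the standard pattern; autograd builds the backward pass automatically.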
  17. Optimization package: SGD, Adagrad, RMSProp, LBFGS, etc.
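One gradient step with `torch.optim`, sketched on a toy least-squares problem (the problem itself is made up for illustration):

```python
import torch

# Minimize (w - 3)^2 by plain SGD; the minimum is at w == 3.
w = torch.zeros(1, requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)

loss = (w - 3.0).pow(2).sum()
opt.zero_grad()     # clear any stale gradients
loss.backward()     # d(loss)/dw == -6 at w == 0
opt.step()          # w <- w - lr * grad == 0.6
```

Swapping `SGD` for `Adagrad`, `RMSprop`, or `LBFGS` changes only the constructor line; the zero_grad / backward / step loop stays the same.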
  18. Distributed PyTorch: MPI-style distributed communication; broadcast Tensors to other nodes; reduce Tensors among nodes (for example, sum gradients across all nodes)
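A sketch of the MPI-style collectives in `torch.distributed`. To stay runnable on one machine, this sets up a single-process "cluster" (gloo backend, `world_size=1`); in a real job each rank would run the same code and `all_reduce` would sum across nodes:

```python
import os
import torch
import torch.distributed as dist

# Rendezvous settings for a one-process group; a real launcher sets these per rank.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

t = torch.ones(3)
dist.all_reduce(t, op=dist.ReduceOp.SUM)  # sums t across all ranks (here: just one)

dist.destroy_process_group()
```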
  19. Distributed Data Parallel:

      for epoch in range(max_epochs):
          for data, target in training_data:
              optimizer.zero_grad()
              output = model(data)
              loss = F.nll_loss(output, target)
              loss.backward()
              optimizer.step()
  20. Distributed Data Parallel (one added line: wrap the model before the training loop):

      model = nn.parallel.DistributedDataParallel(model)
      for epoch in range(max_epochs):
          for data, target in training_data:
              optimizer.zero_grad()
              output = model(data)
              loss = F.nll_loss(output, target)
              loss.backward()
              optimizer.step()
  21. PyTorch 1.0 distributed training performance: speedup chart for ResNet-101 on NVIDIA V100 GPUs, scaling from 1 node (8 GPUs) to 8 nodes (64 GPUs), comparing 100 Gbit TCP, 4x 100 Gbit InfiniBand, and ideal speedup
  22. Use via Databricks MLflow: mlflow.pytorch saves and loads models. More resources: https://docs.databricks.com/spark/latest/mllib/mlflow-pytorch.html and https://www.mlflow.org/docs/latest/models.html
  23. Ecosystem: use the entire Python ecosystem at will
  24. Ecosystem: use the entire Python ecosystem at will, including SciPy, scikit-learn, etc.
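A sketch of what "use the entire Python ecosystem" means in practice, assuming `scikit-learn` is installed alongside `torch`: tensors convert to NumPy arrays, so PyTorch outputs feed directly into other libraries (the synthetic data here is made up for illustration):

```python
import torch
from sklearn.linear_model import LogisticRegression

# Fake features and labels produced "in PyTorch"
features = torch.randn(100, 5)
labels = (features[:, 0] > 0).long()   # linearly separable toy task

# Hand the tensors to scikit-learn via .numpy()
clf = LogisticRegression().fit(features.numpy(), labels.numpy())
acc = clf.score(features.numpy(), labels.numpy())
```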
  26. Ecosystem: a shared model zoo
  27. Ecosystem: probabilistic programming with Pyro (http://pyro.ai/) and ProbTorch (github.com/probtorch/probtorch)
  28. Ecosystem: Gaussian processes with GPyTorch (https://github.com/cornellius-gp/gpytorch)
  29. Ecosystem: machine translation with OpenNMT-py (https://github.com/OpenNMT/OpenNMT-py) and fairseq-py (https://github.com/facebookresearch/fairseq-py)
  30. Ecosystem: AllenNLP (http://allennlp.org/)
  31. Ecosystem: AllenNLP offers state-of-the-art models for comprehension, Q&A, and various other NLP tasks (http://allennlp.org/)
  34. fast.ai 1.0: a high-level library on PyTorch (http://docs.fast.ai), built by Jeremy Howard, Rachel Thomas, and many community members; an online course accompanies the library. Read more at http://www.fast.ai/2018/10/02/fastai-ai/
  38. fast.ai 1.0: state-of-the-art models in a few lines; fine-tune on your own data. Near state-of-the-art image classifiers:

      data = data_from_imagefolder(Path('data/dogscats'),
                                   ds_tfms=get_transforms(),
                                   tfms=imagenet_norm, size=224)
      learn = ConvLearner(data, tvm.resnet34, metrics=accuracy)
      learn.fit_one_cycle(6)
      learn.unfreeze()
      learn.fit_one_cycle(4, slice(1e-5, 3e-4))
  41. fast.ai 1.0: models and transforms for tabular data; state-of-the-art models in a few lines; fine-tune on your own data
  42. https://pytorch.org With ❤ from
    Oct. 13, 2018
