
Apache MXNet AI


This presentation gives an overview of the Apache MXNet AI project. It explains Apache MXNet in terms of its architecture, ecosystem, supported languages and the generic problems that the architecture attempts to solve.

Links for further information and connecting

http://www.amazon.com/Michael-Frampton/e/B00NIQDOOM/

https://nz.linkedin.com/pub/mike-frampton/20/630/385

https://open-source-systems.blogspot.com/



  1. What Is Apache MXNet? ● A deep learning framework ● Open source, Apache 2.0 licensed ● Supports distributed GPU cluster training and deployment of deep neural networks ● Supports a variety of language bindings ● Supports hybridize for increased speed and optimization (see the sketch below) ● Supports near linear scaling on GPU / host clusters ● Provides support for the Horovod framework
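A minimal sketch of the hybridize feature mentioned above, assuming the Gluon API with arbitrary layer sizes and input shape (not taken from the slides):

```python
import mxnet as mx
from mxnet import gluon, nd

# Build a small HybridSequential network (sizes chosen only for illustration).
net = gluon.nn.HybridSequential()
net.add(gluon.nn.Dense(64, activation='relu'),
        gluon.nn.Dense(10))
net.initialize()

x = nd.random.uniform(shape=(32, 100))   # dummy batch of 32 samples
y_imperative = net(x)                    # runs imperatively, op by op

# hybridize() caches the network as a symbolic graph, which MXNet can
# then optimize for speed and memory.
net.hybridize()
y_hybrid = net(x)                        # first call triggers graph caching
print(y_hybrid.shape)
```

After hybridize(), subsequent forward passes reuse the cached graph, which is where the speed and optimization benefit on the slide comes from.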
  2. MXNet Language Bindings ● MXNet has a Python-based API ● MXNet also supports the following language bindings – Scala – Julia – Clojure – Java – C++ – R – Perl
  3. MXNet Related Terms ● Horovod – a distributed deep learning framework from Uber ● MMS – MXNet Model Server ● DGL – Deep Graph Library ● ONNX – Open Neural Network Exchange ● Hyperparameter – a parameter whose value is used to control the learning process ● D2l.ai – a Jupyter notebook based deep learning book for MXNet ● KVStore – key-value store interface used by MXNet ● DMLC – Distributed (Deep) Machine Learning Community (GitHub)
  4. MXNet Eco System ● Coach RL – a Python reinforcement learning framework ● Deep Graph Library (DGL) – a Python package for deep learning on graphs ● GluonFR – a community driven toolkit for face detection and recognition ● InsightFace – a face detection and recognition repository ● Keras-MXNet – an MXNet back end for the high level Keras API ● MXBoard – logging APIs for TensorBoard visualisation ● MXFusion – a modular deep probabilistic programming library ● MXNet Model Server – a flexible tool for serving models exported from MXNet ● Optuna – a hyperparameter optimization framework ● Sockeye – a sequence to sequence framework for neural translation
  5. MXNet Eco System ● TensorLY – a high level API for tensor methods ● TVM – an open deep learning stack for GPUs, CPUs etc. ● Xfer – a library for the transfer of knowledge in deep nets ● GluonCV – a computer vision toolkit with a rich model zoo ● GluonNLP – deep learning models for natural language processing ● GluonTS – a toolkit for probabilistic time series modelling
  6. MXNet User Community
  7. MXNet Architecture
  8. MXNet Architecture ● Runtime Dependency Engine – Schedules and executes operations according to their read/write dependencies ● Storage Allocator – Efficiently allocates and recycles memory blocks on the host (CPU) and devices (GPUs) ● Resource Manager – Manages global resources, such as the random number generator and temporal space ● NDArray – Dynamic, asynchronous n-dimensional arrays (see the sketch below)
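To make the NDArray and dependency engine bullets concrete, here is a minimal sketch, assuming arbitrary shapes chosen for illustration; NDArray operations are issued asynchronously and only block when the result is actually needed:

```python
import mxnet as mx

a = mx.nd.ones((1000, 1000))
b = mx.nd.ones((1000, 1000))

# These calls return immediately; the runtime dependency engine queues
# the operations and tracks their read/write dependencies.
c = mx.nd.dot(a, b)
d = c + 1

# Blocking calls such as wait_to_read() or asnumpy() force the queued
# work to finish before the data is read.
d.wait_to_read()
print(d[0, 0].asscalar())
```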
  9. MXNet Architecture ● Symbolic Execution – A static symbolic graph executor, which provides efficient symbolic graph execution and optimization ● Operator – Operators that define static forward and gradient (backprop) calculations ● SimpleOp – Operators that extend NDArray operators and symbolic operators in a unified fashion ● Symbol Construction – Provides a way to construct a computation graph (net configuration), as sketched below
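A rough sketch of symbol construction; the layer names and sizes here are illustrative assumptions, not taken from the slides. The symbolic API declares the computation graph before any data flows through it:

```python
import mxnet as mx

# Declare the graph symbolically; nothing is executed yet.
data = mx.sym.Variable('data')
fc1 = mx.sym.FullyConnected(data=data, num_hidden=64, name='fc1')
act1 = mx.sym.Activation(data=fc1, act_type='relu', name='relu1')
fc2 = mx.sym.FullyConnected(data=act1, num_hidden=10, name='fc2')
net = mx.sym.SoftmaxOutput(data=fc2, name='softmax')

# The symbol only describes the net configuration; inspect its inputs.
print(net.list_arguments())
```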
  10. MXNet Architecture ● KVStore – Key-value store interface for efficient parameter synchronization (see the sketch below) ● Data Loading (IO) – Efficient distributed data loading and augmentation
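A minimal single-machine sketch of the KVStore interface; the key (3) and the shape are arbitrary assumptions. Values pushed under a key are aggregated by the store and can be pulled back by any worker or device:

```python
import mxnet as mx

kv = mx.kv.create('local')          # single-machine key-value store

shape = (2, 3)
kv.init(3, mx.nd.ones(shape))       # initialise key 3 with ones

kv.push(3, mx.nd.ones(shape) * 8)   # push an update for key 3

out = mx.nd.zeros(shape)
kv.pull(3, out=out)                 # pull the current value back out
print(out.asnumpy())
```

In distributed training the same push/pull pattern synchronizes parameters across hosts, which is the efficient parameter synchronization the slide refers to.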
  11. MXNet Data Loading ● For large data sets, data loading is optimized in MXNet ● Data format – Uses dmlc-core's binary RecordIO implementation ● Data loading – IO cost is reduced by using the threaded iterator provided by dmlc-core ● Interface design – MXNet data iterators can be written in just a few lines of Python (see the sketch below)
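As an illustration of the last bullet, a minimal data iterator sketch using the built-in NDArrayIter; the array shapes, labels and batch size are assumptions made up for the example:

```python
import mxnet as mx
import numpy as np

# Toy in-memory data set: 100 samples with 20 features each.
data = np.random.uniform(size=(100, 20))
label = np.random.randint(0, 2, size=(100,))

# NDArrayIter is one of MXNet's ready-made iterators; custom iterators
# can be written by subclassing mx.io.DataIter.
train_iter = mx.io.NDArrayIter(data, label, batch_size=10, shuffle=True)

for batch in train_iter:
    print(batch.data[0].shape, batch.label[0].shape)
    break
```

For large image data sets, mx.io.ImageRecordIter plays the same role but streams from the packed RecordIO files mentioned on the slide.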
  12. MXNet Dependency Engine ● Helps to parallelize computation across devices ● Helps to synchronize computation when multi-threading is introduced ● A runtime dependency schedule graph is created ● The graph is then used to – Optimize processing – Optimize memory use – Aid parallelism when using GPU / CPU clusters (see the sketch below) ● For deep learning, memory usage during training is greater than during prediction
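A rough sketch of how the dependency engine overlaps independent work across devices. It assumes, purely for illustration, that a GPU may or may not be present, and falls back to the CPU if not:

```python
import mxnet as mx

# Pick two contexts; use a GPU if one is available, otherwise the CPU.
ctx_a = mx.gpu(0) if mx.context.num_gpus() > 0 else mx.cpu(0)
ctx_b = mx.cpu(0)

a = mx.nd.ones((2000, 2000), ctx=ctx_a)
b = mx.nd.ones((2000, 2000), ctx=ctx_b)

# Both products are queued immediately; since they share no read/write
# dependency, the engine is free to run them in parallel on their devices.
c = mx.nd.dot(a, a)
d = mx.nd.dot(b, b)

mx.nd.waitall()   # block until all queued work has finished
print(c.context, d.context)
```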
  13. MXNet Forward vs Backward Graph
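The forward versus backward graph idea can be sketched with MXNet's autograd package; the toy function y = 2x² below is an arbitrary example. The forward pass is recorded as a graph, and backward() traverses that graph in reverse to produce gradients:

```python
import mxnet as mx
from mxnet import autograd

x = mx.nd.array([1.0, 2.0, 3.0])
x.attach_grad()                 # allocate space for the gradient of x

with autograd.record():         # record the forward graph
    y = 2 * x * x

y.backward()                    # execute the backward graph
print(x.grad.asnumpy())         # dy/dx = 4x -> [ 4.  8. 12.]
```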
  14. Available Books ● See "Big Data Made Easy" – Apress, Jan 2015 ● See "Mastering Apache Spark" – Packt, Oct 2015 ● See "Complete Guide to Open Source Big Data Stack" – Apress, Jan 2018 ● Find the author on Amazon – www.amazon.com/Michael-Frampton/e/B00NIQDOOM/ ● Connect on LinkedIn – www.linkedin.com/in/mike-frampton-38563020
  15. Connect ● Feel free to connect on LinkedIn – www.linkedin.com/in/mike-frampton-38563020 ● See my open source blog at – open-source-systems.blogspot.com/ ● I am always interested in – New technology – Opportunities – Technology based issues – Big data integration
