DL4J at Workday Meetup


David Kale and Ruben Fiszel from Skymind talk about deep learning for the JVM and the enterprise using Deeplearning4j (DL4J). Deep learning (nouveau neural nets) has sparked a renaissance in empirical machine learning, with breakthroughs in computer vision, speech recognition, and natural language processing. However, many popular deep learning frameworks are targeted at researchers and poorly suited to enterprise settings built on Java-centric big data ecosystems. DL4J bridges the gap, bringing high-performance numerical linear algebra libraries and state-of-the-art deep learning functionality to the JVM.


1. DL4J: Deep Learning for the JVM and Enterprise
   David C. Kale, Ruben Fiszel (Skymind)
   Workday Data Science Meetup, August 10, 2016
2. Who are we?
   • Deeplearning4j: open source deep learning on the JVM
   • Skymind: deep learning for enterprise
     • fighting the good fight vs. the Python deep learning mafia
     • founded by Adam Gibson; CEO Chris Nicholson
   • Dave Kale: developer, Skymind (Scala API)
     • also PhD student, USC; research: deep learning for healthcare
   • Ruben Fiszel: intern, Skymind (reinforcement learning, RL4J)
     • also MS student, EPFL
3. Outline
   • Overview of deep learning
   • Tour of DL4J
   • Scaling up DL4J
   • DL4J versus…
   • Preview of DL4J Scala API
   • Preview of RL4J
4. What is Deep Learning?
   • Compositions of (deterministic) differentiable functions, some parameterized
     • compute transformations of data, eventually emitting output
     • can have multiple paths
   • The architecture is end-to-end differentiable w.r.t. its parameters (the w's)
   • Training:
     • define targets and a loss function
     • apply gradient methods: use the chain rule to get component-wise updates
   [Diagram: a computational graph in which f1(x1; w1) and f4(x2; w4) transform two inputs x1 and x2 into z1 and z4; f2(z1) produces z2; f3([z2, z4]; w3) merges the paths and emits y, which is scored against target t by Loss(y, t).]
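To make the chain-rule training step concrete, here is a minimal, dependency-free Java sketch (illustrative only, not from the deck): it differentiates the composition loss(ReLU(w*x + b)) by hand and takes one gradient step, which is exactly the bookkeeping frameworks like DL4J automate.

```java
// Hand-rolled chain rule for the composition loss(relu(w*x + b)).
public class ChainRuleDemo {
    public static void main(String[] args) {
        double x = 2.0, t = 1.0;   // input and training target
        double w = 0.5, b = 0.1;   // learnable parameters

        // Forward pass: compute and cache each intermediate value.
        double z = w * x + b;
        double h = Math.max(z, 0.0);            // ReLU nonlinearity
        double loss = 0.5 * (h - t) * (h - t);  // squared-error loss

        // Backward pass: multiply local derivatives along the graph.
        double dLdh = h - t;                    // d loss / d h
        double dhdz = (z > 0.0) ? 1.0 : 0.0;    // ReLU derivative
        double dLdz = dLdh * dhdz;
        double dLdw = dLdz * x;                 // z = w*x + b  =>  dz/dw = x
        double dLdb = dLdz;                     //              =>  dz/db = 1

        // One gradient step on each parameter.
        double lr = 0.1;
        w -= lr * dLdw;
        b -= lr * dLdb;
        System.out.printf("loss=%.4f, new w=%.4f, new b=%.4f%n", loss, w, b);
    }
}
```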
5. Example: multilayer perceptron
   • Classic "neural net" architecture: a powerful nonlinear function approximator
   • Zero or more fully connected ("dense") layers of "neurons"
     • example neuron: h = f(Wx + b) for some nonlinearity f (e.g., ReLU(a) = max(a, 0))
   • Predicts y from fixed-size, not-too-large x with no structure
     • classify digits in MNIST (digits are generally centered and upright)
     • model risk of mortality in patients with pneumonia
   • Special case: logistic regression (zero hidden layers)
   http://deeplearning4j.org/mnist-for-beginners
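A hedged sketch of such an MLP for MNIST in DL4J; builder names follow the 2016-era API (which has since evolved), and all hyperparameters are placeholders:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction;

// 784 MNIST pixels -> one hidden dense layer -> 10-way softmax.
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .seed(123)                 // reproducibility
        .learningRate(0.006)       // placeholder hyperparameter
        .list()
        .layer(0, new DenseLayer.Builder()
                .nIn(784).nOut(256).activation("relu").build())
        .layer(1, new OutputLayer.Builder(LossFunction.NEGATIVELOGLIKELIHOOD)
                .nIn(256).nOut(10).activation("softmax").build())
        .pretrain(false).backprop(true)
        .build();

MultiLayerNetwork model = new MultiLayerNetwork(conf);
model.init();
// model.fit(mnistTrain);  // mnistTrain: a DataSetIterator over MNIST
```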
6. Variation on the MLP: autoencoder
   • "Unsupervised" training: no separate target y
   • Learns to accurately reconstruct x from a succinct latent representation z
   • Probabilistic generative variants (e.g., deep belief nets) can generate novel x's by first sampling z from a prior probability distribution p(z)
   http://deeplearning4j.org/deepautoencoder
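A hedged sketch of a plain dense autoencoder in the same era API (the linked DL4J tutorial uses a deeper, RBM-based stack; this simplified all-dense variant is an assumption):

```java
// Symmetric dense encoder/decoder: 784 -> 250 -> 10 (latent z) -> 250 -> 784.
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .list()
        .layer(0, new DenseLayer.Builder().nIn(784).nOut(250).build())  // encode
        .layer(1, new DenseLayer.Builder().nIn(250).nOut(10).build())   // latent z
        .layer(2, new DenseLayer.Builder().nIn(10).nOut(250).build())   // decode
        .layer(3, new OutputLayer.Builder(LossFunction.MSE)             // reconstruct x
                .nIn(250).nOut(784).activation("sigmoid").build())
        .build();
// Train with the input as its own target, e.g. model.fit(new DataSet(x, x));
```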
7. Example: convolutional (neural) networks
   • Convolution layers "filter" x to extract features
     • filters exploit (spatially) local regularities while preserving spatial relationships
   • Subsampling (pooling) layers combine local information and reduce resolution
     • pooling gives translational invariance (i.e., the classifier is robust to shifts in x)
   • Predicts y from x with local structure (e.g., images, short time series)
     • 2D: classify images of, e.g., cats; a cat may appear in different locations
     • 1D: diagnose patients from lab time series; symptoms may occur at different times
   • Special case: fully convolutional network with no MLP at the "top" (a filter for variable-sized x's)
   http://deeplearning4j.org/convolutionalnets
   [Figure: a convolutional net shares the same parameters across locations, i.e., convolutions with learned kernels; adapted from M.A. Ranzato's CVPR 2012 tutorial, pt. 3; see also http://deeplearning.net/tutorial/lenet.html]
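The two building blocks in DL4J terms, as a hedged sketch (2016-era builder names; kernel and stride sizes are placeholders):

```java
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;
import org.deeplearning4j.nn.conf.layers.SubsamplingLayer;

// Convolution: 20 learned 5x5 filters over a single-channel (grayscale) input.
ConvolutionLayer conv = new ConvolutionLayer.Builder(5, 5)
        .nIn(1).nOut(20)
        .stride(1, 1)
        .build();

// Max pooling: combine local activations and halve the spatial resolution.
SubsamplingLayer pool = new SubsamplingLayer.Builder(SubsamplingLayer.PoolingType.MAX)
        .kernelSize(2, 2).stride(2, 2)
        .build();
```

Stacking a few such conv/pool pairs under a small MLP yields the LeNet-style classifier shown later in the deck.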
8. Example: recurrent neural networks
   • Recurrent connections between hidden units: h_{t+1} = f(Wx_t + Vh_t)
   • Recurrence gives the neural net a form of memory for capturing long-term dependencies
   • More elaborate RNNs (e.g., LSTMs) learn when and what to remember or forget
   • Predict y from sequential x (natural language, video, time series)
   • Among the most flexible and powerful learning algorithms available
   • Can also be among the most challenging to train
   http://deeplearning4j.org/recurrentnetwork
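A hedged sketch of a sequence classifier in the era's DL4J API (GravesLSTM was the LSTM implementation at the time; all sizes are placeholders):

```java
import org.deeplearning4j.nn.conf.layers.GravesLSTM;
import org.deeplearning4j.nn.conf.layers.RnnOutputLayer;

// One LSTM layer over 100-dimensional input vectors, with a softmax
// emitted at each time step.
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .list()
        .layer(0, new GravesLSTM.Builder()
                .nIn(100).nOut(200).activation("tanh").build())
        .layer(1, new RnnOutputLayer.Builder(LossFunction.MCXENT)  // cross-entropy
                .nIn(200).nOut(10).activation("softmax").build())
        .build();
```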
9. RNNs: flexible input-to-output modeling
   • Diagnose patients from temporal data (Lipton & Kale, ICLR 2016)
   • Predict the next word or character (language modeling)
   • Generate a beer review from a category and score (Strata NY talk)
   • Translate from English to French (machine translation)
   http://karpathy.github.io/2015/05/21/rnn-effectiveness/
10. Let's get crazy with architectures
   • How about automatically captioning videos?
   • Recall: we are just composing functions that transform inputs
   • Compose ConvNets with RNNs (see the sketch below)
   • You can do this with DL4J today! (Venugopalan et al., NAACL 2015)
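As a hedged illustration of that composition (not the talk's actual captioning model): assume per-frame features have already been extracted by a ConvNet; DL4J's ComputationGraph can then decode the feature sequence into caption tokens. Sizes are placeholders, and a full end-to-end model would also need input preprocessors between the convolutional and recurrent stages.

```java
import org.deeplearning4j.nn.conf.ComputationGraphConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.GravesLSTM;
import org.deeplearning4j.nn.conf.layers.RnnOutputLayer;
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction;

// LSTM decoder over a sequence of (hypothetical) 512-d per-frame ConvNet
// features; 5000 is a placeholder vocabulary size.
ComputationGraphConfiguration conf = new NeuralNetConfiguration.Builder()
        .graphBuilder()
        .addInputs("frameFeatures")
        .addLayer("lstm", new GravesLSTM.Builder()
                .nIn(512).nOut(256).activation("tanh").build(), "frameFeatures")
        .addLayer("caption", new RnnOutputLayer.Builder(LossFunction.MCXENT)
                .nIn(256).nOut(5000).activation("softmax").build(), "lstm")
        .setOutputs("caption")
        .build();

ComputationGraph net = new ComputationGraph(conf);
net.init();
```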
11. Machine learning in the deep learning era
   • Architecture design + hyperparameter tuning replace iterative feature engineering
   • Easier to transfer "knowledge" across problems
     • direct: adapt a generic image classifier into, e.g., a tumor classifier
     • indirect: analogies across problems point to architectures
   • Often better able to leverage Big Data:
     • start with a high-capacity neural net
     • add regularization and tuning
   • None of the following is true:
     • your Big Data problems will all be solved magically
     • the machines are going to take over
     • the Singularity is right around the corner
12. DL4J architecture for image captioning
   [Architecture diagram: convolutional layers feed a dense layer and an LSTM that emits a caption (example target: "Shawn marries Maya's Mom. Mr. Feeny officiates."; example prediction: "Shawn marries Mr. Feeny. Some lady is there."), trained by backpropagation. Components are labeled with their classes across the stack: DL4J MultiLayerNetwork, ConvolutionLayer, DenseLayer, GravesLSTM, RnnOutputLayer, and OptimizationAlgorithm; DataVec RecordReader; ND4J LossFunction.]
13. DL4J ecosystem for scalable DL
   • Arbiter
     • platform-agnostic model evaluation
     • includes randomized grid search
   • Spark API
     • wraps the core DL4J classes; designing and configuring the model architecture is identical
     • currently provides data parallelism
     • scales to massive datasets; accelerated, distributed training
     • DataVec is compatible with Spark RDDs
   • Core
     • efficient numpy-like numerical framework (ND4J); see the sketch below
     • ND4J backends for CUDA, ATLAS, MKL, OpenBLAS
     • multi-GPU support
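A small taste of ND4J's numpy-like API (a hedged sketch; shapes and values are placeholders):

```java
import java.util.Arrays;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;
import org.nd4j.linalg.ops.transforms.Transforms;

// Dense linear algebra on the JVM; the backend (OpenBLAS, MKL, CUDA, ...)
// is selected by the ND4J artifact on the classpath, not by this code.
INDArray x = Nd4j.rand(3, 4);   // 3x4 matrix of uniform random values
INDArray w = Nd4j.rand(4, 2);
INDArray b = Nd4j.zeros(1, 2);

INDArray h = x.mmul(w).addRowVector(b);  // a dense layer: xW + b
INDArray a = Transforms.relu(h);         // elementwise ReLU
System.out.println(Arrays.toString(a.shape()));  // [3, 2]
```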
14. Scalable DL with the Spark API
   • Uses the Downpour SGD model from Dean et al. (NIPS 2012)
   • Data parallelism:
     • training data is sharded across workers
     • each worker holds a complete model and trains in parallel on disjoint minibatches
   • Parameter averaging:
     • the master stores the "canonical" model parameters
     • workers send parameter updates (gradients) to the master
     • workers periodically ask the master for updated parameters
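A hedged sketch of the Spark wrapper (class names follow the 2016-era dl4j-spark module; all numeric settings are placeholders):

```java
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.deeplearning4j.spark.api.TrainingMaster;
import org.deeplearning4j.spark.impl.multilayer.SparkDl4jMultiLayer;
import org.deeplearning4j.spark.impl.paramavg.ParameterAveragingTrainingMaster;
import org.nd4j.linalg.dataset.DataSet;

// `sc` is a JavaSparkContext and `conf` the same MultiLayerConfiguration
// used for local training.
TrainingMaster tm = new ParameterAveragingTrainingMaster.Builder(32) // minibatch per worker
        .averagingFrequency(5)          // average parameters every 5 minibatches
        .workerPrefetchNumBatches(2)    // async data loading on workers
        .build();

SparkDl4jMultiLayer sparkNet = new SparkDl4jMultiLayer(sc, conf, tm);
// JavaRDD<DataSet> trainData = ...;   // training data, sharded by Spark
// sparkNet.fit(trainData);
```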
15. Example: LeNet image classifier (LeNet example on GitHub)
16. Example: train LeNet on a multi-GPU server (multi-GPU example on GitHub)
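For single-box multi-GPU training, DL4J of this era provided a ParallelWrapper; a hedged sketch (worker counts and frequencies are placeholders):

```java
import org.deeplearning4j.parallelism.ParallelWrapper;

// `model` is an initialized MultiLayerNetwork (e.g., the LeNet above); each
// worker trains a replica on one GPU and parameters are averaged periodically,
// mirroring the Spark parameter-averaging scheme on a single machine.
ParallelWrapper wrapper = new ParallelWrapper.Builder(model)
        .prefetchBuffer(24)       // async minibatch prefetch per worker
        .workers(4)               // e.g., one worker per GPU
        .averagingFrequency(3)    // average parameters every 3 minibatches
        .build();

// wrapper.fit(trainIter);        // trainIter: a DataSetIterator
```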
17. Example: distributed training of LeNet on Spark (Spark LeNet example on GitHub)
18. DL4J versus…
   • For comparisons of frameworks, see:
     • the DL4J comparison page
     • the Karpathy lecture
     • a zillion billion other blog posts and articles
19. DL4J versus… my two cents
   • Using the Java Big Data ecosystem (Hadoop, Spark, etc.): DL4J
   • Want robust data preprocessing tools/pipelines (esp. for natural language, images, video): DL4J
   • Custom layers, loss functions, etc.: Theano/TF + keras/lasagne
     • a grad student trying to publish NIPS papers
     • trying to win a Kaggle competition with an OpenAI model from NIPS (keras)
     • prototyping an idea before implementing gradients by hand in DL4J
   • Using published CV models from the Caffe model zoo: Caffe
   • A Python shop that doesn't mind being hostage to Google Cloud: TF
   • Good news: this is a false choice, like most things (see the Scala API)
20. DL4J Scala API preview
   • A Scala API for DL4J that emulates the keras user experience
   • Goal: reduce friction when moving between keras and DL4J
     • make it easy to mimic keras architectures
     • load keras-trained models using a common model format (coming soon)
21. DL4J Scala API preview: side-by-side code comparison of keras and the DL4J Scala API
22. Thank you!
   • DL4J: http://deeplearning4j.org/
   • Skymind: https://skymind.io/
   • Dave: dave@skymind.io • Twitter: @davekale • http://www-scf.usc.edu/~dkale
   • MLHC Conference: http://mucmd.org
   • Ruben: ruben.fiszel@epfl.ch • http://rubenfiszel.github.io/
   • Gibson & Patterson. Deep Learning: A Practitioner's Approach. O'Reilly, Q2 2016.