Ersatz meetup - DeepLearning4j Demo



These slides accompanied a demo of Deeplearning4j at a meetup that also covered distributed clustering and explanations of several deep-learning concepts.

Deep learning is useful for detecting anomalies such as fraud, spam and money laundering; identifying similarities to augment search and text analytics; predicting customer lifetime value and churn; and recognizing faces and voices.

Deeplearning4j is a scalable deep-learning framework that runs on Hadoop and other big-data infrastructure. It includes both a distributed deep-learning framework and a conventional one, so it also runs in a single thread. Training can take place across a cluster, which lets it process massive amounts of data: nets are trained in parallel via iterative reduce. The framework is usable from Java, Scala and Clojure. It is built for data input and neural-net training at scale, and its output should be highly accurate predictive models.

The framework's neural nets include restricted Boltzmann machines, deep-belief networks, deep autoencoders, convolutional nets and recursive neural tensor networks.



  1. Deep Learning: Machine Perception and Its Applications. Adam Gibson // zipfian academy
  2. DL is a subset of AI
     - Deep Learning is a subset of Machine Learning.
     - Machine Learning is a subset of Artificial Intelligence.
     - AI is nothing more than a collection of algorithms that repeatedly optimize themselves.
     - Deep learning is pattern recognition, a way for machines to classify what they perceive.
  3. Deep learning’s algorithms
     - Deep-learning algorithms are called neural nets. They are mathematical models.
     - They mirror the neurons of the human brain.
     - In the brain, sets of neurons learn to recognize certain patterns or phenomena, like faces, birdcalls or grammatical sequences.
     - These models have names like: Restricted Boltzmann Machine, Deep-Belief Net, Convolutional Net, Stacked Denoising Autoencoder, Recursive Neural Tensor Network.
  4. What DL can handle
     - Deep learning understands numbers, so anything that can be converted to numbers is fair game:
     - Digital media. Anything you can see or hear. DL can analyze sights, sounds and text.
     - Sensor output. DL can work with data about temperature, pressure, motion and chemical composition.
     - Time-series data. DL handles prices and their movement over time; e.g. the stock market, real estate, weather and economic indicators.
  5. What can you do with it?
     - Recommendation engines: DL can identify patterns of human behavior and predict what you will want to buy.
     - Anomaly detection: DL can identify signals that indicate bad outcomes. It can point out fraud in e-commerce, tumors in X-rays, and loan applicants likely to default.
     - Signal processing: DL has predictive capacity. It can tell you what to expect, whether that’s customer lifetime value, how much inventory to stock, or whether the market is on the verge of a flash crash.
  6. Facial recognition
     - Faces can be represented by a collection of images.
     - Those images have persistent patterns of pixels.
     - Those pixel patterns are known as features; i.e. highly granular facial features.
     - Deep-learning nets learn to identify features in data, and use them to classify faces as faces and to label them by name; e.g. John or Sarah.
     - Nets train themselves by reconstructing faces from features again and again, and measuring their work against a benchmark.
  7. Facial reconstructions…
  8. How did it do that?
     - Deep-learning networks learn from the data you feed them.
     - Initial data is known as the training set, and you know what it’s made of.
     - The net learns the faces of the training set by trying to reconstruct them, again and again.
     - Reconstruction is a process of finding which facial features are indicative of larger forms.
     - When a net can rebuild the training set, it is ready to work with unlabeled data.
  9. No really, how did it do that?
     - Nets measure the difference between what they produce and a benchmark you set.
     - They try to minimize that difference.
     - They do that by altering their own parameters (the way they treat the data) and testing how that affects their results.
     - The measure they minimize is known as a “loss function.”
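The loop this slide describes (produce an output, measure the loss against the benchmark, alter a parameter, measure again) can be sketched in a few lines. This is a toy illustration in plain Python with made-up numbers, not Deeplearning4j code:

```python
# Minimal sketch of "altering parameters to minimize a loss":
# gradient descent on a squared-error loss for a single weight.

def loss(w, x, target):
    """Squared difference between the net's output (here just w * x)
    and the benchmark (target)."""
    return (w * x - target) ** 2

def gradient(w, x, target):
    """Derivative of the loss with respect to the weight w."""
    return 2 * x * (w * x - target)

w = 0.0                  # start with an arbitrary parameter
x, target = 2.0, 6.0     # input and benchmark; the ideal weight is 3.0
for _ in range(100):
    w -= 0.05 * gradient(w, x, target)   # step against the gradient

print(round(w, 3))       # converges toward 3.0
```

Real nets do the same thing with millions of weights at once, using backpropagation to get all the gradients.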
  10. Learning looks like this.
  11. What are faces for?
     - Facebook uses facial recognition to make itself stickier, and to know more about us.
     - Government agencies use facial recognition to secure national borders.
     - Video-game makers use facial recognition to construct more realistic worlds.
     - Stores use it to identify customers and track behavior.
  12. Sentiment Analysis & Text
     - Sentiment analysis is a form of Natural-Language Processing.
     - With it, software classifies the affective content of sentences: their emotional tone, bias and intensity.
     - Are they positive or negative about the subject in question?
     - This can be very useful in ranking movies, books, media and just about anything humans consume. Including politicians.
  13. Who cares what they say?
     - By reading sentiment, you read many things.
     - Corporations can measure customer satisfaction.
     - Governments can monitor popular unrest.
     - Event organizers can track audience engagement.
     - Employers can measure job-applicant fit.
     - Celebrities can gauge fame and track scandal.
  14. A Neural-Net Taxonomy
     - Recurrent neural net
     - Restricted Boltzmann machine (RBM)
     - Deep-belief network (DBN): a stack of RBMs
     - Deep autoencoder: two DBNs
     - Denoising autoencoder (yay, noise!)
     - Convolutional net (ConvNet)
     - Recursive neural tensor network (RNTN)
  15. Restricted Boltzmann Machine (RBM)
     - Two layers of neuron-like nodes.
     - The first layer is the visible, or input, layer.
     - The second is the hidden layer, which identifies features in the input.
     - This simple network is symmetrically connected.
     - “Restricted” means there are no visible-visible or hidden-hidden connections; i.e. all connections happen *between* layers.
  16. Deep-belief net (DBN)
     - A deep-belief net is a stack of RBMs.
     - Each RBM’s hidden layer becomes the next RBM’s visible/input layer.
     - In this manner, a DBN learns more and more complex features.
     - A machine-vision example: 1) pixels are input; 2) H1 learns an edge or line; 3) H2 learns a corner or set of lines; 4) H3 learns two groups of lines forming an object, maybe a face.
     - The final layer of a DBN classifies feature groups, sorting them into buckets: e.g. sunset, elephant, flower.
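The stacking can be sketched directly: feed the input through one layer, then hand that layer's activations to the next. All sizes and weights below are made up for illustration; they are not a trained model.

```python
# A DBN as a stack: each layer's hidden activations become the next input.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights):
    """One layer: weights[i][j] connects input i to hidden unit j."""
    n_hidden = len(weights[0])
    return [sigmoid(sum(x * weights[i][j] for i, x in enumerate(inputs)))
            for j in range(n_hidden)]

# Three stacked layers: 4 "pixels" -> 3 "edges" -> 2 "corners" -> 1 "object".
stack = [
    [[0.2, -0.1, 0.5], [0.4, 0.3, -0.2], [-0.3, 0.6, 0.1], [0.1, 0.1, 0.4]],
    [[0.7, -0.4], [0.2, 0.5], [-0.1, 0.3]],
    [[0.9], [-0.6]],
]

activations = [1.0, 0.0, 1.0, 1.0]   # raw input (e.g. pixels)
for weights in stack:
    activations = layer(activations, weights)  # hidden becomes next input

print(activations)   # the final, most abstract feature
```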
  17. Deep autoencoder
     - A deep autoencoder consists of two DBNs.
     - The first DBN *encodes* the data into a vector of 10-30 numbers. This is pre-training.
     - The second DBN decodes the data into its original state.
     - Backprop happens solely on the second DBN.
     - This is the fine-tuning stage, and it’s carried out with reconstruction entropy.
     - Deep autoencoders will reduce any document or image to a highly compact vector.
     - Those vectors are useful in search, QA and information retrieval.
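The encode/decode round trip can be illustrated with toy linear maps. Real deep autoencoders use stacked nonlinear layers and learned weights; the matrices below are hand-picked so the reconstruction is exact for this particular input:

```python
# Squeeze 4 numbers through a 2-number "code", then rebuild the original.

def matvec(M, v):
    """Multiply matrix M by vector v."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# Encoder: 4 inputs -> 2-number code; decoder: 2 -> 4.
encode_W = [[0.5, 0.5, 0.0, 0.0],
            [0.0, 0.0, 0.5, 0.5]]
decode_W = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]

x = [1.0, 1.0, 0.0, 0.0]
code = matvec(encode_W, x)              # compact representation: [1.0, 0.0]
reconstruction = matvec(decode_W, code)

error = sum((a - b) ** 2 for a, b in zip(x, reconstruction))
print(code, error)   # error is 0.0 here: a perfect round trip
```

The compact `code` vector is what gets compared in search and information retrieval: similar inputs end up with nearby codes.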
  18. Denoising autoencoder
     - Autoencoders are useful for dimensionality reduction.
     - The risk they run is learning the identity function of the input.
     - Dropout is one way to address that risk. Noise is another.
     - Noise is the stochastic, or random, corruption of the input.
     - The machine then learns features despite the noise. It “denoises” the input.
     - A stacked denoising autoencoder is exactly what you’d think: a stack of them.
     - Good for unsupervised pre-training, which initializes the weights.
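The corruption step is the simple part: mask random entries of the input, then train the net to reproduce the clean version from the damaged one. A sketch of just the masking, with a hypothetical corruption probability and a fixed seed for reproducibility:

```python
# Stochastic corruption of an input vector for a denoising autoencoder.
import random

def corrupt(x, p, rng):
    """Zero out each entry independently with probability p."""
    return [0.0 if rng.random() < p else v for v in x]

rng = random.Random(42)   # fixed seed so the sketch is reproducible
x = [1.0] * 10            # a clean input
noisy = corrupt(x, 0.3, rng)
print(noisy)              # the net would be trained to map noisy -> x
```

Because the identity function cannot reproduce the masked entries, the net is forced to learn features that predict each entry from its context.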
  19. Convolutional net
     - ConvNets are a type of RBM. The difference is that they’re asymmetric.
     - In an RBM, each node in the visible layer connects to each node in the hidden layer.
     - In a ConvNet, each node connects to the node straight ahead of it, and to the two others immediately to its right and left.
     - This means that ConvNets learn data like images in patches.
     - Each piece learned is then woven together into the whole.
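That patch connectivity can be sketched as a one-dimensional convolution: one small shared filter slides along the input, and each output unit sees only three neighbouring inputs rather than the whole layer. The filter below is a hand-picked edge detector, not a learned one.

```python
# Each output connects only to a width-3 patch of the input.

def conv1d(signal, kernel):
    """Valid 1-D convolution (strictly, cross-correlation)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

edge_filter = [-1.0, 0.0, 1.0]                 # responds to rising edges
signal = [0.0, 0.0, 1.0, 1.0, 1.0, 0.0]
print(conv1d(signal, edge_filter))             # [1.0, 1.0, 0.0, -1.0]
```

The strong responses mark where the signal steps up (positive) and down (negative); image ConvNets do the same thing with small 2-D patches.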
  20. Recursive neural tensor net (RNTN)
     - Recursive nets are top-down, hierarchical nets, rather than feed-forward like DBNs.
     - RNTNs handle sequence-based classification: windows of several events, entire scenes rather than single images.
     - The features themselves are vectors.
     - A tensor is a multi-dimensional matrix, or multiple matrices of the same size.
  21. RNTNs & Scene Composition
  22. RNTNs & Sentence Parsing