Slides from a talk I gave at the Machine Learning Tokyo meetup group on 20190318.
More info here: https://www.meetup.com/Machine-Learning-Tokyo/events/259467268/
Feel free to reach out if you ever need to build a computer vision system or need data labelled to train machine learning models :)
www.numberboost.com
Machine Learning Tokyo - Deep Neural Networks for Video - NumberBoost
1. Deep Neural Networks for VIDEO
Neither Proprietary nor Confidential – Please Distribute ;)
Alex Conway
alex @ numberboost.com
@ALXCNWY
MACHINE LEARNING TOKYO
MEETUP 2019.03.18
44. 1. Neural Network Crash Course
2. Convolutional Neural Networks
3. Recurrent Neural Networks
4. Object Detection
5. Show and tell
6. Edge Computing & Federated ML
7. SEBENZ.AI
45. Great Free Resources
Jeremy Howard & Rachel Thomas’ fast.ai course
http://course.fast.ai
Richard Socher’s class on NLP (great RNN resource)
http://web.stanford.edu/class/cs224n/
Andrej Karpathy’s class on computer vision (great CNN resource)
http://cs231n.github.io
Keras docs
https://keras.io/
47. What is a neuron?
• 3 inputs [x1,x2,x3]
• 3 weights [w1,w2,w3]
• 1 bias term b
• Element-wise multiply and sum: inputs & weights
• Add bias term
• Apply activation function f
• Output of neuron is just a number (like the inputs!)
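The steps on this slide can be sketched in a few lines of numpy (the input, weight, and bias values and the choice of tanh as the activation are illustrative):

```python
import numpy as np

def neuron(x, w, b, f=np.tanh):
    """Single neuron: element-wise multiply-and-sum of inputs and
    weights, add the bias term, then apply the activation function f."""
    return f(np.dot(x, w) + b)

x = np.array([0.5, -1.0, 2.0])   # 3 inputs [x1, x2, x3]
w = np.array([0.1, 0.4, -0.2])   # 3 weights [w1, w2, w3]
b = 0.3                          # 1 bias term
print(neuron(x, w, b))           # a single number, like the inputs
```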
52. What is a (Deep) Neural Network?
Inputs → hidden layer 1 → hidden layer 2 → hidden layer 3 → outputs
Note: outputs of one layer are inputs into the next layer
This (non-convolutional) architecture is called a “multilayer perceptron”
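A minimal numpy sketch of that architecture — three hidden layers feeding into each other, with each layer’s outputs becoming the next layer’s inputs (the layer sizes, random weights, and ReLU activation are all illustrative):

```python
import numpy as np

def layer(x, W, b):
    # One fully-connected layer: affine transform + ReLU nonlinearity
    return np.maximum(0.0, x @ W + b)

rng = np.random.default_rng(1)
x = rng.normal(size=4)                          # inputs
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # hidden layer 1
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)   # hidden layer 2
W3, b3 = rng.normal(size=(8, 8)), np.zeros(8)   # hidden layer 3
Wo, bo = rng.normal(size=(8, 3)), np.zeros(3)   # output layer

# Outputs of one layer are inputs into the next layer
h = layer(layer(layer(x, W1, b1), W2, b2), W3, b3)
outputs = h @ Wo + bo
print(outputs.shape)  # (3,)
```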
59. How does a neural network learn?
New weight = Old weight − (Learning rate × Gradient of error with respect to weight)
The gradient answers: “how much does the error increase when we increase this weight?”
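The update rule on this slide is one step of gradient descent; the numbers below are illustrative:

```python
# One gradient-descent step for a single weight, as on the slide:
# new_weight = old_weight - learning_rate * d(error)/d(weight)
def update(old_weight, learning_rate, grad):
    return old_weight - learning_rate * grad

# A positive gradient (increasing the weight increases the error)
# pushes the weight down; a negative gradient pushes it up.
print(update(0.8, 0.1, 2.5))   # -> 0.55
print(update(1.0, 0.1, -1.0))  # -> 1.1
```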
62. For much more detail on neural networks, see:
1. Michael Nielsen’s Neural Networks & Deep Learning free online book
http://neuralnetworksanddeeplearning.com/chap1.html
2. Andrej Karpathy’s CS231n notes
http://cs231n.github.io
63. Why is ML Succeeding?
1. Algorithms
2. Compute
3. Data
110. Fine-tuning A CNN To Solve A New Problem
• Load pre-trained VGG network
• Remove final dense & softmax layers that predict 1000 ImageNet classes
• Replace with new dense & softmax layer to predict 8 categories
96.3% accuracy in under 2 minutes for
classifying products into categories
(WITH ONLY 3467 TRAINING IMAGES!!1!)
Fine-tuning is awesome!
Insert obligatory brain analogy
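The fine-tuning recipe on this slide might look like the following Keras sketch. The head sizes, optimizer, and layer names are illustrative, and `weights=None` stands in for the pre-trained ImageNet weights (in practice you would pass `weights="imagenet"`) just to keep the snippet download-free:

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Model

# Load the VGG convolutional base; include_top=False removes the
# final dense & softmax layers that predict the 1000 ImageNet classes.
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))

# Freeze the pre-trained convolutional layers
for layer in base.layers:
    layer.trainable = False

# Replace the head with a new dense & softmax layer for 8 categories
x = Flatten()(base.output)
x = Dense(256, activation="relu")(x)
out = Dense(8, activation="softmax")(x)

model = Model(inputs=base.input, outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, ...) on the ~3.5k labelled images
```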
113. Visual Similarity
• Chop off VGG dense layers and keep bottleneck features
• Add single dense layer with 256 activations
• “Predict” to get activation for each image
• Compute nearest neighbours in the space of these activations
https://memeburn.com/2017/06/spree-image-search/
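The last step — nearest neighbours in the space of the activations — can be sketched in plain numpy. The random vectors below are a hypothetical stand-in for the 256-d bottleneck activations you would get from calling `predict` on the truncated VGG network, and cosine similarity is one reasonable choice of distance:

```python
import numpy as np

# Stand-in for one 256-d bottleneck activation per catalogue image
rng = np.random.default_rng(0)
activations = rng.normal(size=(1000, 256))

def nearest_neighbours(query_vec, activations, k=5):
    """Indices of the k most visually similar images (cosine similarity)."""
    a = activations / np.linalg.norm(activations, axis=1, keepdims=True)
    q = query_vec / np.linalg.norm(query_vec)
    sims = a @ q                      # cosine similarity to every image
    return np.argsort(-sims)[:k]      # top-k, most similar first

query = activations[42]
print(nearest_neighbours(query, activations))  # image 42 ranks itself first
```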