2. ABOUT THE AUTHOR
• Data Scientist @ Vodafone
• Published author of a machine learning book
(https://www.amazon.in/Machine-Learning-Cookbook-Python-Analytics/dp/9389898005)
• Core contributor to TensorFlow
• Worked with Arizona State University and NASA on a drone for Mars
• Guest lecturer at multiple top-ranking colleges in India
• Multiple publications and patent applications (both individual and through companies)
5. WHAT IS ARTIFICIAL INTELLIGENCE?
• Artificial Intelligence is the simulation of human intelligence
processes by machines, especially computer systems
• Some popular applications of AI are:
• Expert systems
• Natural Language Processing (text processing)
• Speech Recognition (audio → text)
• Machine/Computer Vision
• Machine Learning (ML) is a subset of AI
8. WHAT IS MACHINE LEARNING?
Early neural networks actually predate much of classical Machine Learning,
but for decades ML methods remained more popular because their performance
was far higher than that of early deep models.
Machine learning and statistics are closely related fields
in terms of methods, but distinct in their principal goal:
statistics draws population inferences from a sample, while
machine learning finds generalizable predictive patterns.
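The phrase "generalizable predictive patterns" can be made concrete with a toy sketch (a hypothetical illustration, not a production pipeline): fit a one-feature linear model to a small sample in pure Python, then use the learned parameters to predict an input the model never saw.

```python
# Fit y ≈ w*x + b by closed-form least squares, then predict an unseen x.

def fit_linear(xs, ys):
    """Ordinary least-squares fit for a single feature."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    w = cov / var              # slope: how y moves with x
    b = mean_y - w * mean_x    # intercept
    return w, b

# Training sample generated from y = 2x + 1 (noise-free, for clarity)
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
w, b = fit_linear(xs, ys)

print(w, b)            # learned parameters: 2.0, 1.0
print(w * 10.0 + b)    # prediction for the unseen input x = 10
```

The point of the example is the last line: the fitted pattern generalizes to inputs outside the training sample, which is the ML goal, as opposed to making an inference about the sampled population.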
11. WHAT IS A NEURON?
• Neurons (also called neurones or nerve cells)
are the fundamental units of the brain and
nervous system, the cells responsible for
receiving sensory input from the external world,
for sending motor commands to our muscles,
and for transforming and relaying the electrical
signals at every step in between. More than that,
their interactions define who we are as people.
• Neurons in deep learning models are nodes
through which data and computations flow.
Neurons work like this: they receive one or more
input signals. These input signals can come
either from the raw data set or from neurons
positioned in a previous layer of the neural net.
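The artificial neuron described above can be sketched in a few lines: a weighted sum of the input signals plus a bias, squashed by an activation function. This is a minimal illustration; the weights, bias, and sigmoid activation here are arbitrary choices, not taken from any particular network.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of its input signals
    plus a bias, passed through a sigmoid activation."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid squashes z into (0, 1)

# Two input signals, e.g. from the raw data or a previous layer
out = neuron([0.5, -1.0], [0.8, 0.2], bias=0.1)
print(out)   # a single output signal in (0, 1)
```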
13. WHAT IS DEEP LEARNING?
• Deep learning is part of a broader family of machine learning
methods based on artificial neural networks with representation
learning.
• Learning can be supervised, semi-supervised or unsupervised.
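The "representation learning" idea can be sketched by stacking two such layers: the hidden layer computes an intermediate representation of the raw input, and the output layer builds on that representation rather than on the raw features. The weights below are illustrative placeholders, and ReLU is just one common activation choice.

```python
def layer(inputs, weights, biases):
    """Fully connected layer with ReLU activation: each output neuron
    computes a weighted sum of all inputs plus its own bias."""
    return [max(0.0, sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [1.0, 2.0]                                        # raw input features
h = layer(x, [[0.5, -0.3], [0.1, 0.4]], [0.0, 0.1])   # learned hidden representation
y = layer(h, [[1.0, -1.0]], [0.0])                    # output built on that representation
print(h, y)
```

In a real deep network the weights are learned from data (supervised, semi-supervised, or unsupervised, as noted above) rather than fixed by hand.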
14. 1. Perceptron (Rosenblatt, 1958, 1962)
2. Adaptive linear element (Widrow and Hoff, 1960)
3. Neocognitron (Fukushima, 1980)
4. Early back-propagation network (Rumelhart et al., 1986b)
5. Recurrent neural network for speech recognition (Robinson and Fallside, 1991)
6. Multilayer perceptron for speech recognition (Bengio et al., 1991)
7. Mean field sigmoid belief network (Saul et al., 1996)
8. LeNet-5 (LeCun et al., 1998b)
9. Echo state network (Jaeger and Haas, 2004)
10. Deep belief network (Hinton et al., 2006)
11. GPU-accelerated convolutional network (Chellapilla et al., 2006)
12. Deep Boltzmann machine (Salakhutdinov and Hinton, 2009a)
13. GPU-accelerated deep belief network (Raina et al., 2009)
14. Unsupervised convolutional network (Jarrett et al., 2009)
15. GPU-accelerated multilayer perceptron (Ciresan et al., 2010)
16. OMP-1 network (Coates and Ng, 2011)
17. Distributed autoencoder (Le et al., 2012)
18. Multi-GPU convolutional network (Krizhevsky et al., 2012)
19. COTS HPC unsupervised convolutional network (Coates et al., 2013)
20. GoogLeNet (Szegedy et al., 2014a)
18. Learning to Learn Better
• Generalization
• Transfer Learning
• One-shot Learning
Vision and Image Modelling
• Image recognition
• Visual Question Answering
• Video recognition
• Generating images
Written Language
• Reading Comprehension
• Language Modelling
• Conversation
• Translation
Spoken Language
• Speech recognition
• Music Information Retrieval
• Instrumental track recognition
Scientific and Technical Capabilities
• Solving constrained, well-specified technical problems
• Reading technical papers
• Solving real-world technical problems
• Generating computer programs from specifications
• Answering science exam questions
Game Playing
• Abstract strategy games
• Real-time video games
Safety and Security
• "Adversarial examples" and manipulation of classifiers
• Safety for reinforcement learning agents
• Automated hacking systems
• Pedestrian detection for self-driving vehicles
• Transparency, explainability & interpretability
• Fairness and debiasing
• Privacy problems
22. ADVERSARIAL EXAMPLES
The reason adversarial attacks can trick neural
networks is that they do not "see" the same way
we do. They do learn relationships in image data
and can come to conclusions similar to ours when
classifying, but their internal models are
different from ours.
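One common way to construct the adversarial examples described above is the fast gradient sign method (FGSM, Goodfellow et al.), which nudges every input feature a small step in the direction that increases the model's loss. The sketch below is a hypothetical illustration using a toy linear classifier, so the gradient can be written by hand; the weights and the step size eps are arbitrary.

```python
import math

# Toy linear "classifier": p(y=1 | x) = sigmoid(w·x + b)
w = [2.0, -1.5]
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(x, true_label, eps):
    """Fast gradient sign method: move each feature by eps in the
    direction that INCREASES the loss for the true label.
    For logistic loss, dL/dx_i = (p - y) * w_i."""
    p = predict(x)
    grad = [(p - true_label) * wi for wi in w]
    sign = lambda g: 1.0 if g > 0 else -1.0 if g < 0 else 0.0
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

x = [1.0, 0.5]                      # clean input, true label 1
print(predict(x))                   # model is fairly confident it's a "1"
x_adv = fgsm(x, 1.0, eps=0.5)
print(predict(x_adv))               # confidence collapses after the perturbation
```

The attack works precisely because the model's internal decision rule (here, a single hyperplane) differs from how a human would judge the same input, so a perturbation that is small per feature can push the input across the model's boundary.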
26. REFERENCES
• "Machine learning — Statistics", Wikipedia: https://en.wikipedia.org/wiki/Machine_learning#Statistics
• "What is a neuron?", Queensland Brain Institute, University of Queensland (uq.edu.au)
• Richard Nagyfi, "The differences between Artificial and Biological Neural Networks", Towards Data Science
• "Deep Learning Neural Networks Explained in Plain English", freeCodeCamp (freecodecamp.org)
Editor's Notes
Uber uses AI techniques for pricing and better routing
Email uses AI in two ways: to filter spam and to provide smart replies
Google Maps provides recommendations based on the user's earlier searches
Google Assistant uses AI for speech recognition and NLP
Portrait mode and face identification use AI techniques
Swiggy optimizes its delivery schedule using AI
Google's BERT algorithm improves search results
Amazon uses AI for route optimisation in parcel delivery
Bank fraud detection uses anomaly-detection techniques
Google News uses clustering techniques to group news items
AI powers recommendation engines based on user profiles, e.g. on Netflix and YouTube