SPECIAL TOPICS
BIG DATA
DEEP LEARNING
1205012030 BÜŞRA İÇÖZ
1205012013 ŞEYMA NUR KARAYAĞLI
1205012016 SEVİL BÜŞRA KANLITEPE
OUTLINES
● Introduction
● Relationship between Artificial Intelligence, Machine Learning and Deep Learning
● Deep Learning
● Artificial Neural Network
● GPU in Deep Learning
● Deep Learning in Big Data
● Applications
● Benefits and weaknesses
● Algorithms, Libraries and Tools
● Questions
INTRODUCTION
In the past 10 years, machine learning and Artificial Intelligence have shown
tremendous progress. The recent success can be attributed to:
• Explosion of data
• Cheap computing cost – CPUs and GPUs
• Improvement of machine learning models
Much of the current excitement concerns a subfield of machine learning called “deep learning”.
RELATIONSHIP BETWEEN ARTIFICIAL
INTELLIGENCE, MACHINE LEARNING AND DEEP
LEARNING
ML takes some of the core ideas of AI and
focuses them on solving real-world problems
with neural networks designed to mimic our
own decision-making. Deep Learning focuses
even more narrowly on a subset of ML tools and
techniques, and applies them to solving just
about any problem which requires “thought” –
human or artificial.
DEEP LEARNING
What is Deep Learning?
Deep Learning is a subfield of machine learning concerned with algorithms inspired
by the structure and function of the brain called artificial neural networks.
ARTIFICIAL NEURAL NETWORK
Neural Network
• Deep Learning is primarily about neural networks, where a network is
an interconnected web of nodes and edges.
• Neural nets were designed to perform complex tasks, such as the task
of placing objects into categories based on a few attributes.
• Neural nets are highly structured networks, and have three kinds of
layers - an input, an output, and so called hidden layers, which refer
to any layers between the input and the output layers.
• Each node (also called a neuron) in the hidden and output layers has
a classifier.
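The three-layer structure described above can be sketched in a few lines of Python. This is a hypothetical toy network with made-up sizes (3 input attributes, 4 hidden nodes, 2 output classes) and random, untrained weights — a sketch of the forward pass, not any particular model:

```python
import numpy as np

def sigmoid(x):
    # Each node squashes its weighted inputs into (0, 1) with a nonlinearity.
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical layer sizes: 3 input attributes -> 4 hidden nodes -> 2 output classes.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # edges from input layer to hidden layer
W2 = rng.normal(size=(4, 2))   # edges from hidden layer to output layer

def forward(x):
    hidden = sigmoid(x @ W1)   # hidden layer: each node acts as a small classifier
    return sigmoid(hidden @ W2)

scores = forward(np.array([0.5, -1.0, 2.0]))
predicted_class = int(np.argmax(scores))   # pick the output node with the highest score
```

Training would adjust `W1` and `W2` from data; here they are random purely to show how inputs flow through the layers.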
GPU IN DEEP LEARNING
Deep learning involves a huge number of matrix
multiplications and other operations that can
be massively parallelized, and thus sped up, on
GPUs.
A single GPU may have thousands of cores,
while a CPU usually has no more than a dozen.
Although individual GPU cores are slower than
CPU cores, their sheer number and faster
memory more than make up for it when the
operations can be parallelized. Sequential code
is still faster on CPUs.
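A quick way to feel why parallelizable matrix math matters: even on a CPU, replacing a strictly sequential Python triple loop with a vectorized (BLAS-backed) matrix product speeds the same computation up dramatically; GPUs push the same idea to thousands of cores. The matrix size below is arbitrary:

```python
import time
import numpy as np

n = 120
rng = np.random.default_rng(0)
A = rng.random((n, n))
B = rng.random((n, n))

def matmul_loops(A, B):
    # Naive sequential multiply: one scalar multiply-add at a time.
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += A[i, k] * B[k, j]
            C[i, j] = s
    return C

t0 = time.perf_counter()
C_slow = matmul_loops(A, B)
t_slow = time.perf_counter() - t0

t0 = time.perf_counter()
C_fast = A @ B   # vectorized: the library parallelizes the same arithmetic
t_fast = time.perf_counter() - t0
```

Both results are numerically identical; only the degree of parallelism differs. Frameworks such as TensorFlow and Theano dispatch the same kind of operation to GPU kernels.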
GPU IN DEEP LEARNING
What is more efficient than a GPU?
DEEP LEARNING IN BIG DATA
A key benefit of Deep Learning is its ability to analyze and learn from massive amounts of
unlabeled data, which makes it a valuable tool for Big Data analytics, where raw data is
largely unlabeled and uncategorized.
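As a minimal sketch of learning from unlabeled data, the toy linear autoencoder below (all data and sizes are made up for illustration) compresses 2-D points to one dimension and learns to reconstruct them by gradient descent — no labels are involved at any point:

```python
import numpy as np

rng = np.random.default_rng(2)
# Unlabeled data: 2-D points lying near a line, plus a little noise.
X = rng.normal(size=(100, 1)) @ np.array([[2.0, 1.0]]) + 0.05 * rng.normal(size=(100, 2))

# Linear autoencoder: encode 2-D -> 1-D, then decode 1-D -> 2-D.
W_enc = rng.normal(scale=0.1, size=(2, 1))
W_dec = rng.normal(scale=0.1, size=(1, 2))

def loss():
    # Mean squared reconstruction error.
    return float(np.mean((X @ W_enc @ W_dec - X) ** 2))

initial = loss()
lr = 0.01
for _ in range(500):
    R = X @ W_enc @ W_dec - X                 # reconstruction residual
    grad_dec = (X @ W_enc).T @ R / len(X)     # gradient w.r.t. decoder weights
    grad_enc = X.T @ (R @ W_dec.T) / len(X)   # gradient w.r.t. encoder weights
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
final = loss()
```

After training, reconstruction error has dropped: the network has discovered the dominant direction in the data purely from its structure, which is the essence of unsupervised feature learning.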
DEEP LEARNING IN BIG DATA
Web search/advertising Datacenter management Computer security
DEEP LEARNING IN BIG DATA
UNSUPERVISED (CLUSTERING)
• Data is not labeled, no prior knowledge
• Group points that are “close” to each other
• Identify structure or patterns in data
• Unknown number of classes
• Unsupervised learning
SUPERVISED (CLASSIFICATION)
• Labeled data points, based on a training set
• Want a “rule” that assigns labels to new points
• Known number of classes
• Used to classify future observations
• Supervised learning
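The contrast can be shown on toy 2-D data (all numbers here are illustrative): 2-means clustering groups unlabeled points that are close to each other, while a 1-nearest-neighbour rule assigns labels to new points based on a labeled training set.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two well-separated toy groups of 2-D points.
group_a = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(20, 2))
group_b = rng.normal(loc=[5.0, 5.0], scale=0.3, size=(20, 2))
points = np.vstack([group_a, group_b])

# UNSUPERVISED (clustering): 2-means -- no labels, just group "close" points.
centers = points[[0, -1]].copy()   # crude initialization: one point from each end
for _ in range(10):
    dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    assign = dists.argmin(axis=1)          # each point joins its nearest center
    for c in range(2):
        centers[c] = points[assign == c].mean(axis=0)

# SUPERVISED (classification): 1-nearest-neighbour on a labeled training set.
labels = np.array([0] * 20 + [1] * 20)
def classify(x):
    return int(labels[np.linalg.norm(points - x, axis=1).argmin()])

new_point = np.array([4.8, 5.2])   # close to group_b, so the rule labels it 1
```

The clustering step never sees `labels` and discovers two groups on its own (unknown number of classes in general; fixed at 2 here); the classifier needs `labels` up front and uses them to tag future observations.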
APPLICATIONS
Computer vision: Find coffee mug
APPLICATIONS
IMAGE: Face recognition and image captioning are examples of deep
learning applications.
APPLICATIONS
SPEECH: Speech recognition is another example.
APPLICATIONS
Bioinformatics:
Detecting mitosis in breast
cancer cells
Predicting the toxicity of new
drugs
Understanding gene
mutation to prevent disease
BENEFITS
Robust
• No need to design the features ahead of time – features are automatically
learned to be optimal for the task at hand
• Robustness to natural variations in the data is automatically learned
Generalizable
• The same neural net approach can be used for many different applications and data
types
Scalable
• Performance improves with more data, method is massively parallelizable
WEAKNESSES
• Deep Learning requires a large dataset, and hence a long training period.
• In terms of cost, Machine Learning methods such as SVMs and tree ensembles are very
easily deployed even by relative machine learning novices and can usually get you
reasonably good results.
• Deep learning methods tend to learn everything from scratch. It is often better to encode
prior knowledge about the structure of images (or audio, or text).
• The learned features are often difficult to interpret. Many vision features are also not
really human-understandable (e.g., concatenations/combinations of different features).
• It requires a good understanding of how to model multiple modalities with traditional
tools.
ALGORITHMS, LIBRARIES AND TOOLS
Platform
● Ersatz Labs - cloud-based deep learning platform [http://www.ersatz1.com/]
● H2O – deep learning framework that comes with R and Python interfaces
[http://www.h2o.ai/verticals/algos/deep-learning/]
Framework
● Caffe - deep learning framework made with expression, speed, and modularity in mind.
Developed by the Berkeley Vision and Learning Center (BVLC)
[http://caffe.berkeleyvision.org/]
● Torch - scientific computing framework with wide support for machine learning
algorithms that puts GPUs first. Based on Lua programming language [http://torch.ch/]
Library
● TensorFlow - open-source software library for numerical computation using data flow
graphs, from Google [https://www.tensorflow.org/]
● Theano - a Python library developed by Yoshua Bengio’s team
[http://deeplearning.net/software/theano/]
QUESTIONS
1) What is the relationship between Artificial Intelligence, Machine Learning and
Deep Learning?
2) Why do we use GPUs over CPUs in deep learning?
3) What are the common application areas of Deep Learning?
4) What are the disadvantages of Deep Learning?
5) If you were going to build a new project using deep learning, what would it be?
