How can we train with
few data
Dive in with examples, no math required!
davinnovation@gmail.com
For Research...
http://www.image-net.org/ : > 10M images (ImageNet)
http://cocodataset.org/#home : > 330K images (COCO)
https://research.google.com/youtube8m/ : > 8M videos (YouTube-8M)
In reality...
Where we are
Before doing something fancy on deep learning
1. Traditional Model
http://blog.kaggle.com/2017/01/05/your-year-on-kaggle-most-memorable-community-stats-from-2016/
Before doing something fancy on deep learning
1. Traditional Model
"SVM showed better predictive power than MDA, Logit, and CBR ... It not only reaches a level of predictive accuracy comparable to artificial neural networks, but also mitigates limitations often pointed out for neural networks, such as overfitting and local optima." (Ingoo Han, 2003)
http://www.aistudy.co.kr/pattern/support_vector_machine.htm
https://www.researchgate.net/post/Which_classifier_is_the_better_in_case_of_small_data_samples
Just run another model (SVM) first...
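"Just run SVM first" costs only a few lines. A minimal sketch, training a linear SVM (hinge loss, Pegasos-style sub-gradient descent) from scratch; the toy dataset, learning rate, and regularization strength are illustrative assumptions, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class data: 40 points per class, linearly separable.
X = np.vstack([rng.normal(-2.0, 0.5, (40, 2)),
               rng.normal(+2.0, 0.5, (40, 2))])
y = np.array([-1] * 40 + [+1] * 40)

# Linear SVM trained by sub-gradient descent on the hinge loss
# (Pegasos-style); lam is the L2 regularization strength.
w, b, lam, lr = np.zeros(2), 0.0, 0.01, 0.1
for epoch in range(200):
    for i in rng.permutation(len(X)):
        margin = y[i] * (X[i] @ w + b)
        if margin < 1:                       # point inside the margin
            w += lr * (y[i] * X[i] - lam * w)
            b += lr * y[i]
        else:
            w -= lr * lam * w

acc = (np.sign(X @ w + b) == y).mean()
print(acc)
```

In practice a library SVM (e.g. scikit-learn's SVC) with a kernel is the usual baseline; the point is only that it takes minutes to try.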
Before doing something fancy on deep learning
2. Data Augmentation
For Image
https://github.com/aleju/imgaug
For Audio
https://github.com/bmcfee/muda
For others
spend some money (collect or label more data)
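For images, even a numpy-only pipeline goes a long way before reaching for a library like imgaug. A minimal sketch (the flip probability, jitter range, and noise scale are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img: np.ndarray) -> np.ndarray:
    """Return a randomly perturbed copy of an HxWxC uint8 image.

    A minimal numpy-only sketch of common image augmentations
    (horizontal flip, brightness jitter, additive noise); libraries
    like imgaug offer far richer pipelines.
    """
    out = img.astype(np.float32)
    if rng.random() < 0.5:                    # random horizontal flip
        out = out[:, ::-1, :]
    out *= rng.uniform(0.8, 1.2)              # brightness jitter
    out += rng.normal(0.0, 5.0, out.shape)    # Gaussian pixel noise
    return np.clip(out, 0, 255).astype(np.uint8)

img = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)
batch = np.stack([augment(img) for _ in range(8)])  # 8 variants of one image
print(batch.shape)  # (8, 32, 32, 3)
```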
Wait... Why Deep Learning?
https://www.quora.com/Why-is-xgboost-given-so-much-less-attention-than-deep-learning-despite-its-ubiquity-in-winning-Kaggle-solutions
When you do have "enough" training data, and when somehow you manage to
find the matching magical deep architecture, deep learning blows away any other
method by a large margin.
Will you still do it?
How can we train with
few data
Dive in with examples, no math required!
ETRI 두스 2018, 2nd study session
davinnovation@gmail.com
In deep learning perspective
Assumption
1. The dataset is small
2. SVM (etc.) didn't work or didn't satisfy you || you want something cool
There is no guarantee that the methods introduced below
will beat the results described so far!!!
They are just other tools, not a magic wand
Approaches
if (data size is small) and not (satisfied with SVM):
    if (sufficient labels):
        Fine Tuning
    elif (few labels):
        N-shot Learning
    elif (no labels):
        Zero-shot Learning / Domain Adaptation
    elif (skewed labels):
        Anomaly Detection if special else Training Tricks!
    else:
        Hire part-timers (알바) for labeling!
else:
    if not (sufficient labels):
        semi-supervised learning, unsupervised learning
    else:
        JUST DO DEEP LEARNING!
Transfer
Learning
Uncertainty
Learning Method
Transfer Learning
== Knowledge transfer
Pan, Sinno Jialin, and Qiang Yang. "A survey on transfer learning." IEEE Transactions on knowledge and data
engineering 22.10 (2010): 1345-1359.
Before we dive into models...
Transfer Learning
== Knowledge transfer
Start from a model trained on [ImageNet, ...]
-> then TRAIN on YOUR DATA
Transfer Learning Applications
Learning from simulation (not directly related to the title, but...)
Transfer Learning Applications
Curriculum Learning / Learning a Different Domain (not directly related to the title, but...)
Transfer Learning
== Knowledge transfer
Transfer Types
Instance-transfer : re-weight a (source-data trained) model by target data
  == min(model(source)) -> min(model(target))
Feature-representation-transfer : find a good feature representation for source & target
  == min( model_feature(source).variation - model_feature(target).variation )
Parameter-transfer : discover shared parameters between source & target
  == 1/2 ( model_feature(source).weight + model_feature(target).weight )
Relational-knowledge-transfer : build a mapping of relational knowledge between source & target
From the model-weight perspective:
  learn source -> then learn target
  learn source & target at the same time
  learn some relational info
FINE TUNING
if (sufficient label)
if (data size is small) and not (satisfied SVM):
https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html
Fixed Feature Extractor
Fine-tuning
In deep learning perspective with small data
http://cs231n.github.io/transfer-learning/
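The "fixed feature extractor" option can be sketched without a framework: freeze a pretrained body and train only a new linear head on the small dataset. Here the body is faked as a frozen random projection (an assumption for illustration; in practice you would load a real pretrained network as in the Keras post above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend "pretrained body": a frozen random projection standing in
# for the convolutional layers of a real pretrained network.
W_body = rng.normal(size=(20, 32))
def features(x):                       # frozen: never updated below
    return np.maximum(x @ W_body, 0)   # ReLU features

# Small labeled dataset (two classes).
X = rng.normal(size=(100, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Train ONLY the new head (logistic regression) on frozen features.
F = features(X)
w, b = np.zeros(32), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))   # sigmoid
    grad = p - y                              # dLoss/dlogits
    w -= 0.1 * F.T @ grad / len(X)
    b -= 0.1 * grad.mean()

acc = (((F @ w + b) > 0) == (y > 0.5)).mean()
print(acc)
```

Because only the small head is trained, far fewer labels are needed than training the whole network from scratch.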
FINE TUNING
if (sufficient label)
if (data size is small) and not (satisfied SVM):
In deep learning perspective with small data
SC : Sparse Coding ( trained from scratch )
TL : Transfer Learning
CTL : Complete TL
PTL : Partial TL
MTL : Multi-task TL
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7480825 http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7064414
16220 images
FINE TUNING
if (sufficient label)
if (data size is small) and not (satisfied SVM):
https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html
https://arxiv.org/pdf/1311.2901v3.pdf
Why fine-tuning can work in deep learning:
each layer extracts features
In deep learning perspective with small data
FINE TUNING
if (sufficient label)
if (data size is small) and not (satisfied SVM):
https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html
https://arxiv.org/pdf/1411.1792.pdf
How much do we fine-tune?
In deep learning perspective with small data
Multi-task learning
if (sufficient label)
if (data size is small) and not (satisfied SVM):
In deep learning perspective with small data
https://openreview.net/pdf?id=S1PWi_lC-
70,000 / 70,000 / 70,000 samples (one dataset per task)
N-Shot
elif (few labels)
if (data size is small) and not (satisfied SVM):
In deep learning perspective with small data
N-Shot ( One shot is more familiar )
Human Deep Learning
N-Shot
elif (few labels)
if (data size is small) and not (satisfied SVM):
In deep learning perspective with small data
N-Shot ( One shot is more familiar )
a single picture book vs. 60,000 training images (MNIST)
( actually it is NOT a fair comparison! just for fun... )
N-Shot
elif (few labels)
if (data size is small) and not (satisfied SVM):
In deep learning perspective with small data
N-Shot ( One shot is more familiar ) Meta-Learning
Meta Learning
Learning to Learn
http://bair.berkeley.edu/blog/2017/07/18/learning-to-learn/
N-Shot
elif (few labels)
if (data size is small) and not (satisfied SVM):
In deep learning perspective with small data
N-Shot ( One shot is more familiar ) Memory model ( Neural Turing Machine )
RNN Memory
https://www.slideshare.net/ssuserafc864/one-shot-learning-deep-learning-meta-learn
N-Shot
elif (few labels)
if (data size is small) and not (satisfied SVM):
In deep learning perspective with small data
N-Shot ( One shot is more familiar ) Memory model ( Neural Turing Machine )
One-shot Learn with Meta
https://arxiv.org/pdf/1605.06065.pdf
Omniglot dataset : > 1,600 classes
-> 1,200 classes for training, 423 classes for testing ( downscaled to 20x20 )
+ rotation augmentation
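Meta-learning methods train on "episodes" that mimic the few-shot test setting: sample N classes, K support examples each, plus queries. A minimal episode sampler sketch (the pool, class count, and example ids are made up for illustration):

```python
import random

random.seed(0)

# Toy labeled pool: 10 classes, 30 examples each (string ids only).
pool = {c: [f"c{c}_x{i}" for i in range(30)] for c in range(10)}

def sample_episode(n_way=5, k_shot=1, n_query=5):
    """Sample one N-way K-shot episode (support set + query set)."""
    classes = random.sample(sorted(pool), n_way)
    support, query = [], []
    for label, c in enumerate(classes):
        ex = random.sample(pool[c], k_shot + n_query)
        support += [(x, label) for x in ex[:k_shot]]
        query += [(x, label) for x in ex[k_shot:]]
    return support, query

s, q = sample_episode()
print(len(s), len(q))   # 5 support examples (1 per class), 25 queries
```

Training on many such episodes is what lets the model "learn to learn" from a single example per class at test time.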
N-Shot
elif (few labels)
if (data size is small) and not (satisfied SVM):
In deep learning perspective with small data
N-Shot ( One shot is more familiar ) Metric Learning Perspective
Metric Learning : learns feature extraction + a feature manifold
https://drive.google.com/file/d/1kDedrnO4N2l9RATSXRS0FuAZqW1mHPWu/view
Metric Learning
N-Shot
elif (few labels)
if (data size is small) and not (satisfied SVM):
In deep learning perspective with small data
N-Shot ( One shot is more familiar ) Metric Learning Perspective
https://drive.google.com/file/d/1kDedrnO4N2l9RATSXRS0FuAZqW1mHPWu/view
Metric Learning
N-Shot
elif (few labels)
if (data size is small) and not (satisfied SVM):
In deep learning perspective with small data
N-Shot ( One shot is more familiar ) Metric Learning Perspective
https://drive.google.com/file/d/1kDedrnO4N2l9RATSXRS0FuAZqW1mHPWu/view
One-Shot Learn with metric
miniImageNet : 60,000 color images of size 84 × 84, 100 classes
The model does NOT learn the classes
it LEARNS a metric
https://arxiv.org/pdf/1606.04080.pdf
N-Shot
elif (few labels)
if (data size is small) and not (satisfied SVM):
In deep learning perspective with small data
N-Shot ( One shot is more familiar ) Metric Learning Perspective
https://drive.google.com/file/d/1kDedrnO4N2l9RATSXRS0FuAZqW1mHPWu/view
One-Shot Learn with metric
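At test time, one-shot classification with a learned metric reduces to nearest-neighbor matching in the embedding space. A minimal sketch, where the "embedding" is a frozen random projection standing in for a metric-trained network (an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in embedding: frozen random projection + L2 normalization.
# In a real system this would be a network trained with a metric
# loss (e.g. the matching-network objective).
W = rng.normal(size=(16, 8))
def embed(x):
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# One support example per class (one-shot), 3 classes.
prototypes = rng.normal(size=(3, 16))
support = embed(prototypes)            # (3, 8): one embedding per class

# Classify a query by the most similar support embedding (cosine).
query = embed(prototypes[1] + 0.05 * rng.normal(size=16))
sims = support @ query
print(int(np.argmax(sims)))            # matched to class 1
```

No class weights are ever learned; adding a new class just means adding one new support embedding.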
Zero-Shot
elif (no labels)
if (data size is small) and not (satisfied SVM):
In deep learning perspective with small data
Zero-shot Metric Learning Perspective
https://drive.google.com/file/d/1kDedrnO4N2l9RATSXRS0FuAZqW1mHPWu/view
Zero-shot Learn with metric
Zero-Shot
elif (no labels)
if (data size is small) and not (satisfied SVM):
In deep learning perspective with small data
Zero-shot Metric Learning Perspective
https://drive.google.com/file/d/1kDedrnO4N2l9RATSXRS0FuAZqW1mHPWu/view
Zero-shot Learn with metric
Domain Adaptation
elif (no labels)
if (data size is small) and not (satisfied SVM):
In deep learning perspective with small data
No Domain Adaptation
Domain Adaptation
elif (no labels)
if (data size is small) and not (satisfied SVM):
In deep learning perspective with small data
Class = Backpack
Domains: Amazon / DSLR / Webcam / Caltech
-> Training
Domain Adaptation
elif (no labels)
if (data size is small) and not (satisfied SVM):
In deep learning perspective with small data
Domain-Adversarial Neural Network
Maintain the classifier's performance
(Classifier)
while also considering the source || target feature distributions:
confuse the model so it cannot tell whether a feature
came from Source or Target
(GAN-style adversarial training)
Domain Adaptation
elif (no labels)
if (data size is small) and not (satisfied SVM):
In deep learning perspective with small data
Domain-Adversarial Neural Network
Domain Adaptation
elif (no labels)
if (data size is small) and not (satisfied SVM):
In deep learning perspective with small data
Domain Separation Networks
image reconstruction loss
difference from the shared encoder; classification loss
a similarity term makes the two (source/target) shared representations alike
Domain Adaptation
elif (no labels)
if (data size is small) and not (satisfied SVM):
In deep learning perspective with small data
Domain Separation Networks
Source-only : training with only source data
Target-only : training with only target data; testing on target data
SVHN / GTSRB / MNIST
Anomaly Detection (Novelty Detection)
elif (skewed labels) and (special case)
if (data size is small) and not (satisfied SVM):
In deep learning perspective with small data
https://www.youtube.com/watch?v=hHHmWmJG9Rw
It looks like zero-shot learning
Anomaly Detection (Novelty Detection)
elif (skewed labels) and (special case)
if (data size is small) and not (satisfied SVM):
In deep learning perspective with small data
https://www.datascience.com/blog/python-anomaly-detection
Gaussian Distribution -> Check Uncertainty!!!
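The Gaussian approach fits a normal distribution to "normal" data only and flags low-likelihood points. A minimal 1-D sketch (the threshold k=4 is an arbitrary choice for illustration; in practice it is tuned on validation data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fit a Gaussian to "normal" training data only -- no anomaly
# labels are needed, which is why this suits heavily skewed data.
normal = rng.normal(10.0, 1.0, 500)
mu, sigma = normal.mean(), normal.std()

def is_anomaly(x, k=4.0):
    """Flag points more than k standard deviations from the mean."""
    return abs(x - mu) > k * sigma

print(is_anomaly(10.3), is_anomaly(25.0))  # False True
```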
Anomaly Detection (Novelty Detection)
elif (skewed labels) and (special case)
if (data size is small) and not (satisfied SVM):
In deep learning perspective with small data
safe visual navigation via deep learning https://www.slideshare.net/samchoi7/modeling-uncertainty-in-deep-learning
Training Skills
elif (skewed labels)
if (data size is small) and not (satisfied SVM):
Update model weights with class balance
https://arxiv.org/pdf/1710.05381.pdf : imbalance class effect
Stratified Sampling / Bootstrapping... / K-Fold...
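One of the simplest tricks for skewed labels is to weight the loss inversely to class frequency. A minimal sketch of the usual "balanced" formula, n_samples / (n_classes * class_count) (the 90/10 split is a made-up example):

```python
from collections import Counter

# Skewed labels: 90 of class 0, 10 of class 1.
labels = [0] * 90 + [1] * 10

counts = Counter(labels)
n, k = len(labels), len(counts)

# "Balanced" class weights: rare classes get proportionally larger
# weight in the loss, so the model cannot ignore them.
weights = {c: n / (k * cnt) for c, cnt in counts.items()}
print(weights[0], weights[1])  # 0.5555555555555556 5.0
```

These weights are what frameworks accept as per-class loss weights (e.g. a class_weight argument); stratified sampling attacks the same imbalance from the data side instead.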
Training Skills
elif (skewed labels)
if (data size is small) and not (satisfied SVM):
Don't judge by accuracy alone
Or...
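Why not accuracy? On skewed labels, a classifier that always predicts the majority class already scores 90% on a 90/10 split. Precision, recall, and F1 expose this; a minimal sketch:

```python
# Skewed ground truth: 90 negatives, 10 positives.
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 100          # "always predict majority" classifier

acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))

precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
f1 = (2 * precision * recall / (precision + recall)
      if precision + recall else 0.0)

print(acc, recall, f1)   # 0.9 0.0 0.0 -- high accuracy, useless model
```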
Reference
• https://medium.com/nanonets/nanonets-how-to-use-deep-learning-when-you-have-limited-data-f68c0b512cab
• https://medium.com/@ShaliniAnanda1/an-open-letter-to-yann-lecun-22b244fc0a5a
• http://ruder.io/transfer-learning/
• ........
• hard to reference everything
Above Things...
• Transfer Learning
• http://ruder.io/transfer-learning/
• One-Shot/Zero-Shot Learning
• http://bair.berkeley.edu/blog/2017/07/18/learning-to-learn/
• Uncertainty Deep Learning
• https://www.slideshare.net/samchoi7/modeling-uncertainty-in-deep-learning
• Why does Transfer Learning reduce the required data?
• https://medium.com/nanonets/nanonets-how-to-use-deep-learning-when-you-have-limited-data-f68c0b512cab
一比一原版(Deakin毕业证书)迪肯大学毕业证如何办理
 
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ...
 

How can we train with few data

  • 1. How can we train with few data? Dive in with examples, no math required! davinnovation@gmail.com
  • 2. For research... http://www.image-net.org/ (>10M images) http://cocodataset.org/#home (>330K images) https://research.google.com/youtube8m/ (>8M videos)
  • 5. Before doing something fancy on deep learning: 1. Traditional Model. http://blog.kaggle.com/2017/01/05/your-year-on-kaggle-most-memorable-community-stats-from-2016/
  • 6. Before doing something fancy on deep learning: 1. Traditional Model. "SVM showed superior predictive power compared to MDA, Logit, and CBR ... it not only delivers predictive accuracy on par with artificial neural networks, but also mitigates the limitations attributed to neural networks, such as overfitting and local optima." (Han In-goo, 2003) http://www.aistudy.co.kr/pattern/support_vector_machine.htm https://www.researchgate.net/post/Which_classifier_is_the_better_in_case_of_small_data_samples Bottom line: just try another model (SVM) first...
  • 7. Before doing something fancy on deep learning: 2. Data Augmentation. For images: https://github.com/aleju/imgaug For audio: https://github.com/bmcfee/muda For everything else: spend some money.
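As a concrete sketch of what such libraries do, here is a minimal hand-rolled augmenter in plain NumPy. The flip-and-shift transforms are toy choices for illustration, not imgaug's actual API:

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal augmenter: random horizontal flip plus a small random
# shift with wrap-around (libraries like imgaug bundle many more).
def augment(img):
    if rng.random() < 0.5:                    # horizontal flip
        img = img[:, ::-1]
    dy, dx = rng.integers(-2, 3, size=2)      # shift in [-2, 2]
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

img = np.arange(64).reshape(8, 8)             # one toy "image"
batch = np.stack([augment(img) for _ in range(4)])  # 4 variants from 1 image
print(batch.shape)
```

Each call yields a slightly different view of the same image, which is the whole point: more effective training samples from the same labeled data.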
  • 8. Wait... Why Deep Learning? https://www.quora.com/Why-is-xgboost-given-so-much-less-attention-than-deep-learning-despite-its-ubiquity-in-winning-Kaggle-solutions "When you do have 'enough' training data, and when somehow you manage to find the matching magical deep architecture, deep learning blows away any other method by a large margin." Will you still do it?
  • 10. How can we train with few data? Dive in with examples, no math required! ETRI 두스 2018 _ 2nd study session. davinnovation@gmail.com In the deep learning perspective
  • 11. Assumptions: 1. The dataset is small 2. SVM (and friends) neither worked nor satisfied you, or you just want something cool
  • 12. There is no guarantee that the methods introduced from here on will beat the results described above!!! They are just other tools. Not a magic wand.
  • 13. Approaches
    if (data size is small) and not (satisfied with SVM):
        if (sufficient labels): Fine-Tuning
        elif (few labels): N-shot Learning
        elif (no labels): Zero-shot Learning / Domain Adaptation
        elif (skewed labels): Anomaly Detection if special else Training Tricks!
        else: hire part-timers for labeling!
    else:
        if not (sufficient labels): semi-supervised / unsupervised learning
        else: JUST DO DEEP LEARNING!
  • 14. Approaches: the same flowchart as the previous slide, with each branch annotated by family: Transfer Learning, Uncertainty, or Learning Method.
  • 15. Transfer Learning == Knowledge transfer Pan, Sinno Jialin, and Qiang Yang. "A survey on transfer learning." IEEE Transactions on knowledge and data engineering 22.10 (2010): 1345-1359. Before we dive into models...
  • 16. Transfer Learning == Knowledge transfer. The source can be a model trained on [ImageNet, ...]; then TRAIN on YOUR DATA.
  • 17. Transfer Learning applications: learning from simulation. This is not related to the title, but...
  • 18. Transfer Learning applications: curriculum learning / learning a different domain. This is not related to the title, but...
  • 20. Transfer Learning == Knowledge transfer. Transfer types:
    Instance-transfer: re-weight the (source-data-trained) model with target data == min(model(source)) -> min(model(target))
    Feature-representation transfer: find a good feature representation for both source & target == min(model_feature(source).variation - model_feature(target).variation)
    Parameter-transfer: discover parameters shared between source & target == 1/2 (model_feature(source).weight + model_feature(target).weight)
    Relational-knowledge transfer: build a mapping of relational knowledge between source & target
    (figure labels: learn source -> learn target; learn source & target at the same time; model-weight perspective; learn some relational info)
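To make the instance-transfer row concrete, here is a toy sketch (my own construction, not from the survey): re-weight each source sample by the density ratio p_target(x)/p_source(x), assumed known here, before fitting on source labels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Source domain: x ~ N(0, 1); target domain: x ~ N(1, 0.5).
# Same labeling function y = 2x + noise in both domains.
x_src = rng.normal(0.0, 1.0, 500)
y_src = 2.0 * x_src + rng.normal(0.0, 0.1, 500)

def gauss(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Instance-transfer: weight source samples by p_target(x) / p_source(x),
# so source points that look like target points count more in the loss.
w = gauss(x_src, 1.0, 0.5) / gauss(x_src, 0.0, 1.0)

# Weighted least squares for y = a*x (closed form).
a = np.sum(w * x_src * y_src) / np.sum(w * x_src ** 2)
print(round(a, 2))
```

In practice the density ratio is estimated (e.g. with a domain classifier) rather than known; the mechanism of re-weighting stays the same.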
  • 21. FINE-TUNING, if (sufficient labels), if (data size is small) and not (satisfied with SVM). In the deep learning perspective with small data: Fixed Feature Extractor vs. Fine-tuning. https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html http://cs231n.github.io/transfer-learning/
  • 22. FINE TUNING if (sufficient label) if (data size is small) and not (satisfied SVM): In deep learning perspective with small data SC : Sparse Coding ( Scratch ) TF : Transfer Learning CTL : Complete TL PTL : Partial TL MTL : Multi-task TL http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7480825 http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7064414 16220 images
  • 23. FINE-TUNING, if (sufficient labels), if (data size is small) and not (satisfied with SVM). Why fine-tuning can work in deep learning: each layer can extract features. https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html https://arxiv.org/pdf/1311.2901v3.pdf
  • 24. FINE-TUNING, if (sufficient labels), if (data size is small) and not (satisfied with SVM). How much do we fine-tune? https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html https://arxiv.org/pdf/1411.1792.pdf
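A minimal sketch of the "fixed feature extractor" option. A frozen random projection stands in for a pretrained backbone (an assumption for illustration; in practice you would freeze, e.g., ImageNet-trained conv layers), and only a new linear head is trained:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a pretrained backbone: a frozen projection that is
# never updated during training ("fixed feature extractor").
W_backbone = rng.normal(size=(32, 8))
def backbone(x):
    return np.maximum(x @ W_backbone, 0.0)     # ReLU features

# Small labeled target dataset: two linearly separable blobs.
x0 = rng.normal(-1.0, 0.3, size=(50, 32))
x1 = rng.normal(+1.0, 0.3, size=(50, 32))
X = np.vstack([x0, x1])
y = np.array([0] * 50 + [1] * 50)

# Train ONLY the new head (logistic regression) on frozen features.
F = backbone(X)
w = np.zeros(8); b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))     # sigmoid
    g = p - y                                  # dLoss/dlogit
    w -= 0.1 * F.T @ g / len(y)
    b -= 0.1 * g.mean()

acc = ((1.0 / (1.0 + np.exp(-(F @ w + b))) > 0.5) == y).mean()
print(acc)
```

The backbone's weights never change; only the tiny head sees gradients, which is exactly why this works with little data.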
  • 25. Multi-task learning, if (sufficient labels), if (data size is small) and not (satisfied with SVM). In the deep learning perspective with small data. https://openreview.net/pdf?id=S1PWi_lC-
  • 26. N-Shot, elif (few labels), if (data size is small) and not (satisfied with SVM). N-shot learning (one-shot is the more familiar term): human vs. deep learning.
  • 27. N-Shot, elif (few labels): a human learns from a single picture book; deep learning needs 60,000 training samples (MNIST). (Actually it's not that simple, but just for fun...)
  • 28. N-Shot, elif (few labels): Meta-Learning (Learning to Learn). http://bair.berkeley.edu/blog/2017/07/18/learning-to-learn/
  • 29. N-Shot, elif (few labels): memory model (Neural Turing Machine), RNN + memory. https://www.slideshare.net/ssuserafc864/one-shot-learning-deep-learning-meta-learn
  • 30. N-Shot, elif (few labels): memory model (Neural Turing Machine), one-shot learning with meta-learning. https://arxiv.org/pdf/1605.06065.pdf Omniglot dataset: >1600 classes, split into 1200 training classes and 423 test classes (downscaled to 20x20), plus rotation augmentation.
  • 31. N-Shot, elif (few labels): the metric-learning perspective. Metric learning learns feature extraction plus a manifold over the features. https://drive.google.com/file/d/1kDedrnO4N2l9RATSXRS0FuAZqW1mHPWu/view
  • 32. N-Shot elif (few labels) if (data size is small) and not (satisfied SVM): In deep learning perspective with small data N-Shot ( One shot is more familiar ) Metric Learning Perspective https://drive.google.com/file/d/1kDedrnO4N2l9RATSXRS0FuAZqW1mHPWu/view Metric Learning
  • 33. N-Shot, elif (few labels): one-shot learning with a metric. 60,000 color images of size 84 x 84 with 100 classes. The model does NOT learn the classes; it LEARNS the metric. https://drive.google.com/file/d/1kDedrnO4N2l9RATSXRS0FuAZqW1mHPWu/view https://arxiv.org/pdf/1606.04080.pdf
  • 34. N-Shot elif (few labels) if (data size is small) and not (satisfied SVM): In deep learning perspective with small data N-Shot ( One shot is more familiar ) Metric Learning Perspective https://drive.google.com/file/d/1kDedrnO4N2l9RATSXRS0FuAZqW1mHPWu/view One-Shot Learn with metric
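A toy sketch of the metric-learning idea: classify a query by similarity to one labeled support example per class in an embedding space. The embedding here is an untrained random projection, purely a stand-in for the learned metric network in the papers above:

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for a trained metric network: fixed random projection
# followed by L2 normalization (the real embedding is learned).
W = rng.normal(size=(16, 4))
def embed(x):
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# One-shot episode: one "support" example per novel class, the
# classes being well-separated blobs with means -2, 0, +2...
support = np.stack([rng.normal(c, 0.2, 16) for c in (-2.0, 0.0, 2.0)])
# ...and a query drawn from class index 2 (mean +2).
query = rng.normal(2.0, 0.2, 16)

# Classify by cosine similarity in embedding space: compare,
# don't retrain (the matching-networks idea).
sims = embed(support) @ embed(query)
print(int(np.argmax(sims)))
```

Nothing class-specific is ever trained at test time; one example per class is enough because all the work is in the (assumed pretrained) embedding.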
  • 35. Zero-Shot elif (no labels) if (data size is small) and not (satisfied SVM): In deep learning perspective with small data Zero-shot Metric Learning Perspective https://drive.google.com/file/d/1kDedrnO4N2l9RATSXRS0FuAZqW1mHPWu/view Zero-shot Learn with metric
  • 36. Zero-Shot elif (no labels) if (data size is small) and not (satisfied SVM): In deep learning perspective with small data Zero-shot Metric Learning Perspective https://drive.google.com/file/d/1kDedrnO4N2l9RATSXRS0FuAZqW1mHPWu/view Zero-shot Learn with metric
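One common way to make zero-shot concrete (an illustrative sketch, not the slides' specific method) is a shared attribute space: class descriptions come from side information, and an unseen class is recognized by matching predicted attributes to the nearest class description — no training images of that class needed:

```python
import numpy as np

# Class descriptions from side information; the attribute names
# here ([stripes, four_legs, fins]) are made up for illustration.
class_attrs = {
    "zebra": np.array([1.0, 1.0, 0.0]),
    "cat":   np.array([0.0, 1.0, 0.0]),
    "whale": np.array([0.0, 0.0, 1.0]),
}

def classify(pred_attrs):
    # Nearest class attribute vector in Euclidean distance.
    return min(class_attrs,
               key=lambda c: np.linalg.norm(class_attrs[c] - pred_attrs))

# Noisy attribute prediction (in a real system, produced by a model
# trained on OTHER classes) for an image of an unseen class:
pred = np.array([0.9, 0.8, 0.1])
print(classify(pred))    # "zebra"
```

The attribute predictor is trained once on seen classes; new classes join the label space just by supplying a description vector.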
  • 37. Domain Adaptation elif (no labels) if (data size is small) and not (satisfied SVM): In deep learning perspective with small data No Domain Adaptation
  • 38. Domain Adaptation elif (no labels) if (data size is small) and not (satisfied SVM): In deep learning perspective with small data Class = Backpack Amazon DSLR Webcam Caltech  Training
  • 39. Domain Adaptation, elif (no labels), if (data size is small) and not (satisfied with SVM). Domain-Adversarial Neural Network: keep the classifier's performance (classifier) while also accounting for the source/target feature distributions, interfering so the model cannot tell whether a feature came from Source or Target (GAN-style).
  • 40. Domain Adaptation elif (no labels) if (data size is small) and not (satisfied SVM): In deep learning perspective with small data Domain-Adversarial Neural Network
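The core DANN trick is the gradient reversal layer; a minimal sketch of just that mechanism (not the full network from the paper):

```python
# Gradient Reversal Layer: identity on the forward pass, gradient
# scaled by -lambda on the backward pass, so the shared feature
# extractor is trained to *maximize* the domain-classification loss
# while the domain classifier tries to minimize it.
class GradReverse:
    def __init__(self, lam):
        self.lam = lam
    def forward(self, x):        # features pass through unchanged
        return x
    def backward(self, grad):    # gradient flips sign, scaled by lambda
        return -self.lam * grad

grl = GradReverse(lam=1.0)
# If d(domain_loss)/d(feature) arriving from the domain classifier is
# +0.5, the feature extractor receives the reversed gradient:
print(grl.backward(0.5))   # -0.5
```

That single sign flip is what drives the features toward being domain-indistinguishable while the label classifier keeps training normally.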
  • 41. Domain Adaptation, elif (no labels): Domain Separation Networks. Figure labels: image reconstruction; difference from the shared encoder; two classification losses; the two "difference" components are made similar.
  • 42. Domain Adaptation, elif (no labels): Domain Separation Networks. Source-only: training with only source data. Target-only: training with only target data. Testing is on target data. Benchmarks: SVHN, GTSRB, MNIST.
  • 43. Anomaly Detection (Novelty Detection), elif (skewed labels) and (special case), if (data size is small) and not (satisfied with SVM). https://www.youtube.com/watch?v=hHHmWmJG9Rw It looks like zero-shot learning.
  • 44. Anomaly Detection (Novelty Detection) elif (skewed labels) and (special case) if (data size is small) and not (satisfied SVM): In deep learning perspective with small data https://www.datascience.com/blog/python- anomaly-detection Gaussian Distribution -> Check Uncertainty!!!
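The Gaussian-density idea from the slide, sketched end to end: fit a Gaussian to "normal" data only, then flag points whose (log-)density falls below a threshold as anomalies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fit a 2-D Gaussian to normal data only (no anomaly labels needed).
normal = rng.normal(0.0, 1.0, size=(1000, 2))
mu = normal.mean(axis=0)
cov = np.cov(normal, rowvar=False)
inv, det = np.linalg.inv(cov), np.linalg.det(cov)

def log_density(x):
    d = x - mu
    return -0.5 * (d @ inv @ d) - 0.5 * np.log((2 * np.pi) ** 2 * det)

# Threshold chosen by eye for this toy; in practice tune it on a
# validation set with a few known anomalies, or pick a quantile.
threshold = -6.0
print(log_density(np.array([0.0, 0.0])) > threshold)   # typical point
print(log_density(np.array([5.0, 5.0])) > threshold)   # anomaly
```

This is why the skewed-label case is workable: the model never needs examples of the rare class, only a density estimate of the common one.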
  • 45. Anomaly Detection (Novelty Detection) elif (skewed labels) and (special case) if (data size is small) and not (satisfied SVM): In deep learning perspective with small data safe visual navigation via deep learning https://www.slideshare.net/samchoi7/modeling-uncertainty-in-deep-learning
  • 46. Training tricks, elif (skewed labels), if (data size is small) and not (satisfied with SVM): update model weights with class balance. https://arxiv.org/pdf/1710.05381.pdf (the effect of class imbalance) Stratified sampling / bootstrapping... / k-fold...
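One simple balancing trick: inverse-frequency class weights, the same heuristic as scikit-learn's "balanced" mode (n_samples / (n_classes * count)), sketched here:

```python
import numpy as np

# With skewed labels, weight each class inversely to its frequency so
# minority-class errors count more in the loss.
y = np.array([0] * 90 + [1] * 10)          # 90/10 imbalance

classes, counts = np.unique(y, return_counts=True)
weights = len(y) / (len(classes) * counts)
print(dict(zip(classes.tolist(), weights.round(2).tolist())))
```

These per-class weights are then fed to the loss (most frameworks accept them directly, e.g. as class or sample weights), so the rare class is not drowned out.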
  • 47. Training tricks, elif (skewed labels): don't rely on accuracy alone. Or...
  • 49. Above Things... • Transfer Learning • http://ruder.io/transfer-learning/ • One-Shot/Zero-Shot Learning • http://bair.berkeley.edu/blog/2017/07/18/learning-to-learn/ • Uncertainty Deep Learning • https://www.slideshare.net/samchoi7/modeling-uncertainty- in-deep-learning • Why Transfer Learning reduces require data? • https://medium.com/nanonets/nanonets-how-to-use-deep- learning-when-you-have-limited-data-f68c0b512cab