Once-for-All: Train One Network and
Specialize it for Efficient Deployment
[ICLR 2020]
2022. 03. 20. (Sun)
Presented by: 김동현
w/ Fundamental Team: 김채현, 박종익, 양현모, 이근배, 이재윤, 송헌
1
Contents
● Problem and Approach
● Key Challenge
● How to Train Once-for-all Network
● How to Deploy Once-for-all Network
● Evaluations
● Discussions
● Conclusion
2
Contents
● Problem and Approach
● Key Challenge
● How to Train Once-for-all Network
● How to Deploy Once-for-all Network
● Evaluations
● Discussions
● Conclusion
3
Main Problem to Solve
● There are various hardware platforms to deploy DNN models.
○ A survey reports 23.14 billion IoT devices as of 2018.
○ The devices have different resource constraints, so it is impractical to deploy the same model to all of them.
● The optimal neural network architecture varies by deployment environments
(e.g., #arithmetic units, application requirements).
4
Main Problem to Solve
● It is computationally prohibitive to find the optimal architecture for every environment by training a separate model for each.
● How, then, can we cost-efficiently find a specialized model for each platform?
5
(Figure: example deployment scenario with target latency = 20 ms)
Suggested Approach
● Train a Once-for-All (OFA) network that can serve various environments without additional training.
○ Various scales of sub-networks (about 10¹⁹) are available from one OFA network.
○ Each hardware platform can select the specialized model for its requirements (e.g., latency).
6
Key Challenges for Once-for-All Network
Requirements
1. The sub-network architecture should be part of the largest network.
2. Sub-networks should share parameters with larger networks.
3. The optimal model architecture for a given hardware platform should be easy to find.
7
Key Challenges for Once-for-All Network
Requirements
1. The sub-network architecture should be part of the largest network.
2. Sub-networks should share parameters with larger networks.
3. The optimal model architecture for a given hardware platform should be easy to find.
Challenges
1. How to design the sub-network architecture space based on an OFA network.
2. How to let sub-networks share parameters with larger networks.
3. How to select the optimal model for the hardware (in terms of latency and
accuracy).
8
Contents
● Problem and Approach
● Key Challenge
● How to Train Once-for-all Network: Challenges #1, #2
● How to Deploy Once-for-all Network: Challenge #3
● Evaluations
● Discussions
● Conclusion
9
Q&A
10
● Assumption: Follow the common practice of CNN models (e.g., ResNet).
○ A model consists of groups of layers (i.e., units).
● Architecture Search Space
○ # Layers (L): the depth of each unit is chosen from {2, 3, 4}
○ # Channels (C): the expansion ratio of each layer is chosen from {3, 4, 6}
○ Kernel Size (Ks): chosen from {3, 5, 7}
○ Input Dimension: ranges from 128 to 224 with a stride
● Num available sub-networks: ((3 × 3)² + (3 × 3)³ + (3 × 3)⁴)⁵ ≈ 10¹⁹ (see the calculation sketch below)
Training OFA Network - Network Architecture
(Figure: a stack of units L1, L2, L3, each with elastic depth, channel width C, and kernel size Ks)
11
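To make the size of the search space concrete, here is a minimal sketch of the calculation above. It assumes 5 units (implied by the exponent 5 in the formula), per-unit depths {2, 3, 4}, and 3 kernel sizes × 3 expansion ratios per layer; the input resolution is not counted.

```python
# Sketch: count the sub-networks in the search space described on this slide.
kernel_choices = 3                     # kernel size in {3, 5, 7}
expand_choices = 3                     # expansion ratio in {3, 4, 6}
depth_choices = [2, 3, 4]              # layers per unit
num_units = 5                          # assumed number of units

per_layer = kernel_choices * expand_choices              # 9 configurations per layer
per_unit = sum(per_layer ** d for d in depth_choices)    # 9^2 + 9^3 + 9^4 = 7371
total = per_unit ** num_units                            # 7371^5 ≈ 2 x 10^19
print(f"{total:.1e}")                                    # about 10^19, as on the slide
```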
How sub-networks share parameters:
● Elastic Kernel Size
○ Naively reusing the center weights of the larger kernel can hurt performance.
○ When shrinking the kernel size, the center weights pass through a transform matrix (see the sketch below):
■ Each layer holds its own transformation parameters for the elastic kernels.
● 25×25 parameters for 7×7 → 5×5.
● 9×9 parameters for 5×5 → 3×3.
● E.g., 5×5 kernel = (center of 7×7) × transform matrix
Training OFA Network - Sharing Parameters
12
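A minimal PyTorch-style sketch of this idea; the module and parameter names (e.g., `transform_5x5`) are illustrative assumptions, not the authors' identifiers. The center of the 7×7 kernel is flattened, multiplied by a learned 25×25 matrix, and reshaped into a 5×5 kernel; the same is repeated for 5×5 → 3×3.

```python
import torch
import torch.nn as nn

class ElasticKernelConv(nn.Module):
    """Sketch: smaller kernels are derived from the center of the full 7x7
    kernel via learned transformation matrices (one set per layer)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.weight_7x7 = nn.Parameter(torch.randn(out_ch, in_ch, 7, 7))
        self.transform_5x5 = nn.Parameter(torch.eye(25))  # 25x25, assumed name
        self.transform_3x3 = nn.Parameter(torch.eye(9))   # 9x9, assumed name

    def get_kernel(self, ks):
        w = self.weight_7x7
        if ks >= 7:
            return w
        # 7x7 -> 5x5: take the center, flatten, transform, reshape.
        center5 = w[:, :, 1:6, 1:6].reshape(*w.shape[:2], 25)
        w5 = (center5 @ self.transform_5x5).reshape(*w.shape[:2], 5, 5)
        if ks == 5:
            return w5
        # 5x5 -> 3x3: repeat the same procedure on the derived 5x5 kernel.
        center3 = w5[:, :, 1:4, 1:4].reshape(*w.shape[:2], 9)
        return (center3 @ self.transform_3x3).reshape(*w.shape[:2], 3, 3)

    def forward(self, x, ks=7):
        k = self.get_kernel(ks)
        return nn.functional.conv2d(x, k, padding=ks // 2)
```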
How sub-networks share parameters:
● Elastic Depth (= #Layers)
○ When a unit has L layers, a depth-D sub-network keeps and shares the first D layers (see the sketch below).
○ This is simpler than selecting an arbitrary subset of D layers out of L.
Training OFA Network - Sharing Parameters
(Figure: a unit with L layers; only the first D are kept)
13
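A minimal sketch of this depth sharing; the `ElasticUnit` wrapper is illustrative, not the authors' module. A unit stores its maximum L layers, and a depth-D sub-network simply runs the first D of them, so those weights are shared with every deeper setting.

```python
import torch.nn as nn

class ElasticUnit(nn.Module):
    """Sketch: a unit with L layers; a sub-network of depth D runs only the
    first D layers, sharing their weights with deeper configurations."""
    def __init__(self, layers):  # layers: list of nn.Module, len = L (max depth)
        super().__init__()
        self.layers = nn.ModuleList(layers)

    def forward(self, x, depth=None):
        depth = depth if depth is not None else len(self.layers)
        for layer in self.layers[:depth]:  # the last (L - depth) layers are skipped
            x = layer(x)
        return x
```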
How sub-networks share parameters:
● Elastic Width (= #Channels)
○ For a given expansion ratio, channels are selected via a channel-sorting procedure (see the sketch below):
1. Calculate the L1 norm of each channel’s weights.
2. Sort the channels by L1 norm.
3. Choose the top-K channels.
Training OFA Network - Sharing Parameters
(Figure: channels sorted by L1 norm; the top-K are kept)
14
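A minimal sketch of the channel-sorting step; the helper name is hypothetical. It ranks the output channels of a convolution weight by the L1 norm of their weights and returns the indices of the top-K most important channels.

```python
import torch

def select_top_channels(weight: torch.Tensor, k: int) -> torch.Tensor:
    """Sketch: rank output channels of a conv weight (out_ch, in_ch, kh, kw)
    by the L1 norm of their weights and return the indices of the top-k."""
    importance = weight.abs().sum(dim=(1, 2, 3))       # L1 norm per output channel
    order = torch.argsort(importance, descending=True)
    return order[:k]                                   # indices of the kept channels

# Usage sketch: narrow a layer by slicing its weight (and the next layer's
# input channels) with the returned indices.
# idx = select_top_channels(conv.weight.data, k)
# small_weight = conv.weight.data[idx]
```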
Progressive Shrinking
1. Train the full model (i.e., the maximum value for each configuration).
● The trained full-size model is then used as the teacher for knowledge distillation.
● Note: the full model != the best model
Training OFA Network - Training Process
Note1: Input image size is randomly chosen for each training batch
15
Progressive Shrinking
1. Train the full model (i.e., the maximum value for each configuration).
2. Sample sub-networks with varying kernel sizes and fine-tune (see the sketch below).
a. At each step, sample one sub-network with a different kernel-size setting.
b. Calculate the loss: Loss = full-model (teacher) loss × KD_ratio + sub-network loss.
c. Update the weights (updating the sub-network’s weights also updates the shared full-model weights).
Training OFA Network - Training Process
Note1: Input image size is randomly chosen for each training batch
16
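A minimal sketch of one progressive-shrinking fine-tuning step with the distillation loss described above. The helpers `sample_subnet_config` and `set_active_subnet` are hypothetical names, and the soft-label KL term is one common way to realize the "full model loss" in the formula; the paper's exact implementation may differ.

```python
import torch
import torch.nn.functional as F

def train_step(ofa_net, teacher, images, labels, optimizer, kd_ratio=1.0):
    """Sketch: fine-tune a randomly sampled sub-network while distilling
    from the frozen full model."""
    config = ofa_net.sample_subnet_config()   # hypothetical: random kernel sizes, etc.
    ofa_net.set_active_subnet(**config)       # hypothetical: activate the sampled sub-net

    logits = ofa_net(images)                  # forward pass through the sub-network
    with torch.no_grad():
        teacher_logits = teacher(images)      # the frozen full model acts as the teacher

    subnet_loss = F.cross_entropy(logits, labels)
    kd_loss = F.kl_div(F.log_softmax(logits, dim=1),
                       F.softmax(teacher_logits, dim=1),
                       reduction="batchmean")
    loss = kd_ratio * kd_loss + subnet_loss   # "full-model loss * KD_ratio + sub-net loss"

    optimizer.zero_grad()
    loss.backward()                           # gradients update the shared OFA weights
    optimizer.step()
    return loss.item()
```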
Progressive Shrinking
1. Train the full model (i.e., the maximum value for each configuration).
2. Sample sub-networks with varying kernel sizes and fine-tune.
3. Sample sub-networks with varying depths and fine-tune.
4. Sample sub-networks with varying channel expansion ratios and fine-tune.
Training OFA Network - Training Process
Note2: Refer to Appendix B of the paper for implementation details of progressive shrinking
Note1: Input image size is randomly chosen for each training batch
17
Deploying Specialized Model w/ OFA Network
Problem:
● Derive the specialized sub-network for a given deployment scenario (e.g., a latency constraint).
Solution:
● Train an accuracy predictor (a 3-layer feed-forward network).
○ f(architecture, input image size) => accuracy
○ Randomly sample 16K sub-networks and measure their accuracy on 10K validation images.
● Latency lookup table (details in the ProxylessNAS paper).
○ On each hardware platform, build a latency lookup table.
● Conduct an evolutionary search leveraging the above information (see the sketch below).
○ Mutate known sub-networks and predict each candidate’s accuracy and latency.
○ Add a mutated sub-network to the child pool if it satisfies the latency constraint.
18
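A minimal sketch of the constraint-aware evolutionary search described on this slide. `accuracy_predictor`, `latency_table.predict`, `sample_arch`, and `mutate_arch` are assumed stand-ins for the trained predictor, the lookup table, and architecture sampling/mutation; the population sizes are illustrative, not the paper's settings.

```python
import random

def evolutionary_search(sample_arch, mutate_arch, accuracy_predictor,
                        latency_table, latency_limit,
                        population=100, generations=30, mutate_ratio=0.5):
    """Sketch: search sub-network configs under a latency constraint,
    using the predictor and lookup table instead of real measurements."""
    # Seed the population with random architectures that meet the constraint.
    pool = []
    while len(pool) < population:
        arch = sample_arch()
        if latency_table.predict(arch) <= latency_limit:     # lookup-table latency
            pool.append((accuracy_predictor(arch), arch))

    for _ in range(generations):
        pool.sort(key=lambda pair: pair[0], reverse=True)
        parents = pool[: population // 4]                    # keep the best quarter
        children = []
        while len(children) < population - len(parents):
            _, parent = random.choice(parents)
            child = mutate_arch(parent, mutate_ratio)        # mutate kernel/depth/width
            if latency_table.predict(child) <= latency_limit:
                children.append((accuracy_predictor(child), child))
        pool = parents + children

    return max(pool, key=lambda pair: pair[0])[1]            # best predicted-accuracy arch
```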
Q&A
19
Evaluation
● ImageNet Dataset
● Eval on Various Hardware Platforms:
○ Samsung S7 Edge, Note8, Note10, Google Pixel1, Pixel2, LG G8, NVIDIA 1080Ti, V100
GPUs, Jetson TX2, Intel Xeon CPU, Xilinx ZU9EG, and ZU3EG FPGAs
● Please refer to the paper for the detailed training configurations.
20
Evaluation
Performance of sub-networks on ImageNet
● Top-1 accuracy at 224×224 resolution.
● Higher performance is achieved through progressive shrinking (PS).
○ 74.8% top-1 accuracy (D=4, W=3, K=3), which is on par with MobileNetV3-Large.
○ Without PS, the same architecture achieves 71.5%, which is 3.3% lower.
21
(Figure annotation: the same architecture is extracted from the full model trained without PS)
Evaluation
Reduced Design Cost
● Reports a comparison between OFA and hardware-aware NAS methods.
○ NAS: the design cost grows linearly with the number of deployment scenarios (N).
○ The total CO2 emission of OFA is:
■ 16× lower than ProxylessNAS
■ 19× lower than FBNet
■ 1,300× lower than MnasNet
22
Evaluation
OFA under Different Computational Resource Constraints
● Better accuracy under the same constraints:
○ (Left): MACs, (Right): latency.
○ Achieves higher accuracy while requiring less computation.
○ Outperforms “OFA - Train from scratch,” where the same architectures are trained from scratch without pre-training.
23
Discussions
● Would the same approach work for other models and tasks (e.g., Transformers, NLP)?
● The architecture search space is limited to certain model families.
○ E.g., how would the method apply to models such as HRNet?
24
Conclusion
● A Once-for-All (OFA) network allows training one large model and deploying various sub-networks without additional training.
● OFA proposes the progressive shrinking algorithm for sharing parameters among sub-networks, which greatly reduces the design cost.
● The paper shows that OFA achieves higher performance on the ImageNet dataset.
● With a trained OFA network, optimal sub-networks can be found for various deployment environments.
25
Q&A
26