Deep Learning Paper Implementation
From Scratch – Part 1
PyTorch KR DEVCON 2019
Jaewon Lee
(visionNoob)
covering joint work with:
Martin Hwang, Chanhee Jeong
PyTorch KR Tutorial Competition 2018 – runner-up presentation
Martin Hwang (dhhwang89@gmail.com)
Korea Electronics Technology Institute (KETI)
Bio
visionNoob (insurgent92@gmail.com)
Kangwon National University
Chanhee Jeong (chjeong530@gmail.com)
ModuLabs
Team DeepBaksu_Vision
A group of unemployed young people studying deep learning
Object Detection for ALL
PyTorch KR Tutorial Competition 2018
-> “DeepBakSu Vision”
https://github.com/PyTorchKR/Tutorial-Competition-2018
An event where participants explain the core ideas of a machine learning / deep learning paper, and write a tutorial that implements it concisely in PyTorch code.
How do you study deep learning?
Goodfellow, Ian, et al. Deep learning. Vol. 1. Cambridge: MIT press, 2016.
I want to learn deep learning. How should I start?
Read books
Take courses
Stanford CS231n - https://youtu.be/h7iBpEHGVNc
딥러닝 관련 온라인 동영상 강의 모음 (Vision & A.I. study) https://v-ais.github.io/study/2018/09/27/Data01/
PyTorch Zero To All - https://youtu.be/SKq-pmkekTk
http://www.quickmeme.com/Andrew-ng
https://arxiv.org/abs/1703.06870
Read papers
Participate in competitions:
Hackathon
Competition
Challenges
There are many ways, but…
Today, I'd like to talk about:
Implementing Deep Learning Papers from Scratch with PyTorch
(slides will be available online)
Part 1 – What It Means to Understand a Paper
What does it truly mean to understand deep learning (models)?
There is a difference between
knowing the path and walking the path
Understanding theoretically vs. understanding empirically
Understanding theoretically, understanding empirically:
you have to understand the theory well, and be able to implement it, too.
The alternative?
Implementing Deep Learning Papers from Scratch with PyTorch
Project: 모두를 위한 Object Detection (Object Detection for Everyone)
Implement deep-learning-based object detection models from scratch in PyTorch,
and share that experience with everyone!
Github: https://github.com/DeepBaksuVision
Gitbook: https://deepbaksuvision.github.io/Modu_ObjectDetection/
https://towardsdatascience.com/is-google-tensorflow-object-detection-api-the-easiest-way-to-implement-image-recognition-a8bd1f500ea0
We chose object detection.
LIU, Li, et al. Deep learning for generic object detection: A survey. arXiv preprint arXiv:1809.02165, 2018.
https://github.com/hoya012/deep_learning_object_detection
There are really a lot of them.
Implementing a deep learning model from scratch is really hard.
It is painful.
Bugs, bugs, and more bugs.
Things the paper doesn't tell you
The width/height term of the YOLO loss (note the square roots over w and h, which are easy to miss when implementing):

$$\lambda_{coord} \sum_{i=0}^{S^2} \sum_{j=0}^{B} \mathbb{1}_{ij}^{obj} \left[ \left( \sqrt{w_i} - \sqrt{\hat{w}_i} \right)^2 + \left( \sqrt{h_i} - \sqrt{\hat{h}_i} \right)^2 \right]$$
Simple Sanity Check
for Common Mistakes in PyTorch
*37 Reasons why your Neural Network is not working
https://blog.slavv.com/37-reasons-why-your-neural-network-is-not-working-4020854bd607
https://twitter.com/karpathy/status/1013244313327681536
1) you didn't try to overfit a single batch first
2) you forgot to toggle train/eval mode for the net
3) you forgot to .zero_grad() (in pytorch) before .backward()
4) you passed softmaxed outputs to a loss that expects raw logits.
5) you didn't use bias=False for your Linear/Conv2d layers when using BatchNorm,
or conversely forgot to include it for the output layer.
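Check 1) in particular catches most bugs early: a healthy training setup should drive the loss on a single fixed batch close to zero. A minimal sketch of that test, using a hypothetical toy model and data (not the talk's detection model):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(8, 4)            # one single, fixed batch
y = x.sum(dim=1, keepdim=True)   # a target the model can fit exactly

model = nn.Linear(4, 1)
model.train()                    # 2) set train/eval mode explicitly
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

first_loss = None
for step in range(300):
    optimizer.zero_grad()        # 3) zero gradients before backward
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    if first_loss is None:
        first_loss = loss.item()

print(first_loss, loss.item())   # the loss should collapse toward zero
```

If the loss does not collapse on one batch, there is no point training on the full dataset yet.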
A note of caution:
it is also perfectly fine to take your time with the paper and implement it slowly :D
Is this the right answer?
Implementing Deep Learning Papers from Scratch with PyTorch
In the new year, how about we all implement deep learning papers from scratch, together?
Appendix
Useful tips
Common Utils
 Weights & Biases (https://www.wandb.com/)
 imgaug (https://github.com/aleju/imgaug)
 convert2Yolo (https://github.com/ssaru/convert2Yolo)
 Handling checkpoints – git hash
 Torch Summary (https://github.com/sksq96/pytorch-summary)
convert2Yolo
Convert object detection annotations to the YOLO (Darknet) format.
Supported datasets: COCO, VOC, UDACITY, KITTI 2D
https://github.com/ssaru/convert2Yolo
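At its core, this kind of conversion is simple coordinate arithmetic: VOC-style corner coordinates in pixels become a center/size box normalized by the image dimensions. A minimal sketch (the `voc_to_yolo` helper is hypothetical, not the library's actual API):

```python
# Convert a VOC-style box (xmin, ymin, xmax, ymax, in pixels) to the YOLO
# Darknet format (cx, cy, w, h, each normalized to [0, 1]).
def voc_to_yolo(box, img_w, img_h):
    xmin, ymin, xmax, ymax = box
    cx = (xmin + xmax) / 2.0 / img_w
    cy = (ymin + ymax) / 2.0 / img_h
    w = (xmax - xmin) / img_w
    h = (ymax - ymin) / img_h
    return cx, cy, w, h

print(voc_to_yolo((100, 200, 300, 400), 640, 480))  # -> (0.3125, 0.625, 0.3125, 0.4166...)
```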
Weights & Biases – Visualization toolkit
It's seriously good…
https://www.wandb.com/
import argparse
import wandb

def main():
    wandb.init()

    # Training settings
    parser = argparse.ArgumentParser()
    parser.add_argument('--batch-size', type=int, default=64)
    parser.add_argument('--epochs', type=int, default=10)
    args = parser.parse_args()
    wandb.config.update(args)

    # This magic line lets us save the PyTorch model and track all of the
    # gradients and, optionally, the parameters
    wandb.watch(model)

def train(args, model, device, train_loader, optimizer, epoch):
    for batch_idx, (data, target) in enumerate(train_loader):
        # ... (omitted) ...

        # Log the images and metrics
        wandb.log({"Examples": example_images,
                   "Test Accuracy": 100. * correct / len(test_loader.dataset),
                   "Test Loss": test_loss})

Usage: it's incredibly simple.
https://docs.wandb.com/docs/frameworks/pytorch-example.html
Git hash code for handling checkpoints

ckpt_92bfbcb_ep00001_lo5.3657_lr0.001.pth.tar
      └ git hash code

import git
repo = git.Repo(search_parent_directories=True)
sha = repo.head.object.hexsha
short_sha = repo.git.rev_parse(sha, short=7)
print(short_sha)
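Embedding the short hash in the checkpoint filename is then just string formatting. A hypothetical helper reproducing the filename pattern shown above:

```python
# Build a checkpoint filename that embeds the git short hash, epoch,
# loss, and learning rate, e.g. ckpt_92bfbcb_ep00001_lo5.3657_lr0.001.pth.tar
def checkpoint_name(short_sha, epoch, loss, lr):
    return f"ckpt_{short_sha}_ep{epoch:05d}_lo{loss:.4f}_lr{lr}.pth.tar"

print(checkpoint_name("92bfbcb", 1, 5.3657, 0.001))
# -> ckpt_92bfbcb_ep00001_lo5.3657_lr0.001.pth.tar
```

With the hash in the name, any checkpoint can be traced back to the exact commit that produced it.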
Data augmentation
Imgaug : https://github.com/aleju/imgaug-doc
Keypoint
Bounding Box
Segmentation Mask
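What imgaug automates is keeping these annotations consistent with the transformed image. As an illustration of that bookkeeping (plain Python, hypothetical helper, not the imgaug API), horizontally flipping a bounding box means mirroring its x coordinates:

```python
# Horizontally flip a bounding box (xmin, ymin, xmax, ymax) in an image
# of width img_w: the x coordinates are mirrored, and min/max swap roles.
def hflip_bbox(box, img_w):
    xmin, ymin, xmax, ymax = box
    return (img_w - xmax, ymin, img_w - xmin, ymax)

print(hflip_bbox((10, 20, 30, 40), 100))  # -> (70, 20, 90, 40)
```

Forgetting this step (flipping the image but not the boxes) is a classic silent augmentation bug.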
Torch summary
https://github.com/sksq96/pytorch-summary
from torchsummary import summary
summary(your_model, input_size=(channels, H, W))
Usage:
• pip install torchsummary, or
• git clone https://github.com/sksq96/pytorch-summary
Keras style model summary!!
summary(model, (1, 28, 28))
>>
----------------------------------------------------------------
Layer(type) Output Shape Param #
================================================================
Conv2d-1 [-1, 10, 24, 24] 260
Conv2d-2 [-1, 20, 8, 8] 5,020
Dropout2d-3 [-1, 20, 8, 8] 0
Linear-4 [-1, 50] 16,050
Linear-5 [-1, 10] 510
================================================================
Total params: 21,840
Trainable params: 21,840
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.00
Forward/backward pass size (MB): 0.06
Params size (MB): 0.08
Estimated Total Size (MB): 0.15
----------------------------------------------------------------
One last thing..!
Happy New Year 2019 :D
In 2019, together with PyTorch..!
May your deep learning model converge well,
together with the optimizer, learning rate, batch size,
and all the other hyperparameters you have chosen XD
Be happy!
Inspired by https://twitter.com/reza_zadeh?lang=ko