
A Taste of Machine Learning at a Small Startup


  1. Let's do the Fourth Industrial Revolution too.
  2. What should we do …
  3. What should we do … and what do we actually know how to do …
  4. Publicly available material (as of Jan 2017) • CNN (Inception V3, LeNet, VGG …) • RNN (LSTM) • GAN
  9. classification
  12. ? x o = visible = x x 30%
  13. CNN … multi-layer perceptron … tensorflow …
  14. Don't re-invent the wheel • a CNN with 4 layers (conv → conv → fc → fc) • published CNNs reach 96% (Inception V3, VGG …) • Inception V3: 48 layers
  15. Don't re-invent the wheel • the 4-layer CNN: 70% • classes a, b: 20% • … • Inception V3
  16. • “transfer learning …” • “… 3 …” • “VGG net …” • “…”
  17. Transfer Learning
  21. Transfer Learning: retrain the softmax layer (https://www.tensorflow.org/tutorials/image_retraining)
  22. Transfer Learning: 1. tensorflow 2. … 3. bazel-bin/tensorflow/examples/image_retraining/retrain --image_dir ~/flower_photos 4. …
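The retrain script above only fits a new final classifier on features from the frozen network. As an illustration of that idea (not the script itself), here is a minimal sketch that trains a softmax layer on fake "bottleneck" features; the feature generator, dimensions, and class count are all made-up stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for cached Inception V3 "bottleneck" features: the
# real retrain script runs each image through the frozen network once and
# stores the activation vector feeding the final layer.
def bottleneck_features(n, dim=16, n_classes=3):
    """Fake cached features: one well-separated Gaussian blob per class."""
    y = rng.integers(0, n_classes, size=n)
    centers = rng.normal(size=(n_classes, dim)) * 3.0
    X = centers[y] + rng.normal(size=(n, dim))
    return X, y

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def retrain_top_layer(X, y, n_classes, lr=0.01, epochs=300):
    """Transfer learning's core move: train ONLY a new softmax layer,
    leaving the (here imaginary) convolutional stack untouched."""
    n, dim = X.shape
    W = np.zeros((dim, n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]               # one-hot labels
    for _ in range(epochs):
        P = softmax(X @ W + b)             # forward pass on frozen features
        G = (P - Y) / n                    # averaged cross-entropy gradient
        W -= lr * X.T @ G
        b -= lr * G.sum(axis=0)
    return W, b

X, y = bottleneck_features(300)
W, b = retrain_top_layer(X, y, n_classes=3)
acc = float((np.argmax(X @ W + b, axis=1) == y).mean())
```

Because the heavy feature extractor is fixed, only the small `W`, `b` pair is learned, which is why retraining finishes in minutes instead of weeks.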
  25. • 80% • with hyper-parameter tuning, 85% • learning rate • learning epochs • batch size • random noise
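The 80% → 85% jump came from sweeping knobs like these. A minimal grid-search sketch over three of the listed hyper-parameters; the accuracy function is a fake stand-in for "retrain and evaluate", and every number is invented:

```python
import itertools

# Three of the knobs from the slide; candidate values are invented.
grid = {
    "learning_rate": [0.01, 0.05, 0.1],
    "epochs": [100, 200],
    "batch_size": [32, 64],
}

def fake_validation_accuracy(cfg):
    """Stand-in for 'retrain the model and measure validation accuracy';
    deterministically peaks at lr=0.05, epochs=200, batch_size=32."""
    score = 0.80
    score += 0.03 if cfg["learning_rate"] == 0.05 else 0.0
    score += 0.01 if cfg["epochs"] == 200 else 0.0
    score += 0.01 if cfg["batch_size"] == 32 else 0.0
    return score

best_cfg, best_acc = None, -1.0
for values in itertools.product(*grid.values()):
    cfg = dict(zip(grid.keys(), values))   # one full configuration
    acc = fake_validation_accuracy(cfg)
    if acc > best_acc:
        best_cfg, best_acc = cfg, acc
```

Exhaustive search like this is only practical because the grid is tiny; each real evaluation is a full retraining run.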
  26. • • 5 / /a/b • •
  27. • 92% • 2 • 40 (cpu)
  29. regression
  30. • scores 1, 2, 3, 4, 5 • 7 • ( ) • fun predict(image): score
  31. tensorflow transfer learning
  32. Attempt 1 • treat scores 1, 2, 3, 4, 5 as Inception V3 classes • noise • → FAIL
  34. Lesson from attempt 1: outliers cause over-fitting; the classification boundary is drawn too tight
  35. Attempt 2 • drop the end-to-end neural net for old-fashioned methods • Inception V3 bottleneck features (before the softmax FC) • 2048 dims → PCA → 700 (90%) • outlier-robust Huber normalization • classification with an SVM
  37. Attempt 2 • ( , ) • 5 • 2 • 5 40% • 2 = 2
  38. Attempt 2 (bottleneck features → PCA → SVM): FAIL
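The dimensionality-reduction step of attempt 2 (2048 → 700 via PCA) can be sketched with plain SVD. The data below is random and the sizes are shrunk, so everything except the PCA mechanics is an assumption:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fake feature matrix: 200 samples x 32 dims, standing in for the 2048-dim
# Inception V3 bottleneck vectors (sizes shrunk so the sketch runs instantly).
X = rng.normal(size=(200, 32)) @ rng.normal(size=(32, 32))

def pca_reduce(X, n_components):
    """Project X onto its top principal components via plain SVD."""
    Xc = X - X.mean(axis=0)                          # PCA requires centered data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    # Fraction of total variance the kept components explain.
    explained = float((S[:n_components] ** 2).sum() / (S ** 2).sum())
    return Xc @ Vt[:n_components].T, explained

Z, frac = pca_reduce(X, n_components=8)   # e.g. 32 dims -> 8
```

The slide's "90%" reads as exactly this `explained` fraction: the 700 kept components retain most of the variance of the 2048-dim features.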
  39. Lesson from attempt 2: the extreme scores (1, 5) behave differently from the middle ones (2, 3, 4)
  40. Attempt 3: regression • Huber norm, PCA • regressor: sklearn robust regression
  42. Attempt 3 (sklearn robust regression): FAIL
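The Huber loss behind both the "Huber norm" and sklearn's robust regression (its `HuberRegressor`) is simple to state: quadratic for small residuals, linear for large ones, so a single outlier cannot dominate the fit the way it does under squared error. A self-contained sketch:

```python
def huber_loss(residual, delta=1.0):
    """Huber loss: quadratic near zero, linear in the tails.
    delta controls where the quadratic region hands over to the linear one."""
    a = abs(residual)
    if a <= delta:
        return 0.5 * a * a
    return delta * (a - 0.5 * delta)

# Compare against squared error on an outlier-sized residual:
sq = 0.5 * 10.0 ** 2          # squared error explodes on the outlier
hu = huber_loss(10.0)         # Huber grows only linearly
```

Under squared error a residual of 10 contributes 50 to the loss; under Huber it contributes 9.5, so the regressor is not dragged toward the outlier.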
  43. Lesson from attempt 3 • under-fitting • predictions collapse toward 3 (regression toward the mean) • 700 dims are hard to fit • (train accuracy) 1.8
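Why an under-fitting regressor drifts toward 3: under squared error, the best constant prediction is the label mean, so a model too weak to use its inputs collapses to it. A tiny demonstration with a hypothetical score distribution:

```python
# Hypothetical label distribution centered on 3 (not the real data).
scores = [1, 2, 3, 3, 3, 4, 5, 3, 2, 4]

def mse(pred, ys):
    """Mean squared error of one constant prediction against all labels."""
    return sum((pred - y) ** 2 for y in ys) / len(ys)

mean = sum(scores) / len(scores)
# The mean beats every other constant guess under squared error:
losses = {c: mse(c, scores) for c in [1.0, 2.0, mean, 4.0, 5.0]}
best = min(losses, key=losses.get)
```

So a low training accuracy plus mid-range predictions is the signature of under-fitting rather than a data problem.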
  44. Attempt 4 • DNN regression • Huber loss function • class imbalance: bins [1, 2), [2, 3), [3, 4), [4, 5), 1500 each • 20 • under-fitting
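The bucketing step described above (bin scores into [1, 2) … [4, 5) and draw 1500 per bin) might look like the following; the score distribution here is simulated, not the real data:

```python
import random

random.seed(0)

# Hypothetical imbalanced score distribution: most images score near 3.
scores = [random.triangular(1.0, 5.0, 3.0) for _ in range(5000)]

BINS = [(1.0, 2.0), (2.0, 3.0), (3.0, 4.0), (4.0, 5.0)]
PER_BIN = 1500

def rebalance(samples, bins=BINS, per_bin=PER_BIN):
    """Bucket samples by score range and resample every bucket to the same
    size: rare bins are oversampled, common ones undersampled."""
    balanced = []
    for lo, hi in bins:
        bucket = [s for s in samples if lo <= s < hi]
        balanced.extend(random.choices(bucket, k=per_bin))  # with replacement
    return balanced

data = rebalance(scores)
```

After rebalancing, each score range contributes equally to training, so the loss no longer rewards predicting the over-represented middle scores.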
  46. AWS AMI
  47. AWS Spot 82%
  48. AWS ap-northeast-2c • AMI → spot instance → instance snapshot
  50. Attempt 4 • regression • DNN regression • Huber loss function • under-fitting: FAIL
  51. Lesson from attempt 4 • 0.4 • over-fitting
  52. Attempt 5: data augmentation + batch-norm against over-fitting
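Batch-norm, one of attempt 5's two ingredients, standardizes each feature over the mini-batch before a learned scale/shift. A forward-pass sketch (single batch, no running statistics, numbers invented):

```python
import numpy as np

rng = np.random.default_rng(2)

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Batch normalization forward pass: standardize each feature over the
    batch dimension, then apply a learnable scale (gamma) and shift (beta)."""
    mu = x.mean(axis=0)                    # per-feature batch mean
    var = x.var(axis=0)                    # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # standardized activations
    return gamma * x_hat + beta

# A batch of activations that drifted to a large mean and scale:
x = rng.normal(loc=50.0, scale=10.0, size=(64, 8))
y = batch_norm(x)
```

Whatever scale the incoming activations drift to, the layer's output stays near zero mean and unit variance, which both stabilizes training and acts as a mild regularizer.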
  55. Attempt 5 • 0.7 • CNN
  58. “ raw data”
