
Gatsby kaken-2017-pfn okanohara

AI in the Real World: Automobile, Robotics, Bio/Healthcare and Art Creation


  1. AI in the Real World: Automobile, Robotics, Bio/Healthcare and Art Creation. Daisuke Okanohara, Preferred Networks, hillbig@preferred.jp. May 11, 2017, Gatsby-Kaken Joint Workshop.
  2. Preferred Networks (PFN)
     • "Make everything intelligent and collaborative"
     • Founded: March 2014 (founders: Toru Nishikawa (CEO), Daisuke Okanohara (EVP))
     • Offices: Tokyo, San Mateo
     • Employees: ~80 (doubling every year)
     • Investors: FANUC, Toyota, NTT
  3. Preferred Networks' positioning in AI: Industrial IoT. [Diagram: market map along consumer/industrial and cloud/device axes; PFN targets the industrial edge side: factory robots, automotive, healthcare, smart city, Industry 4.0.]
  4. Automobile
  5. Robotics
  6. Anomaly Detection
  7. Example: FANUC reducer anomaly detection [presented at iREX 2015]. Anomaly detection using deep generative models. [Figure: actual sensor data from reducers; one trace labeled normal ("no anomaly"), one in which anomalies were found.]
  8. Deep-learning-based methods can predict a failure much earlier than existing methods; we heavily use deep generative models to detect anomalies. [Plot: anomaly score vs. elapsed time, with a detection threshold. The deep-learning-based method detects the coming robot failure 40 days in advance; the existing method detects it only about 15 days before, just prior to the failure.]
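The deck does not say which generative model powers the FANUC detector, so the following is only a minimal sketch of one standard deep-generative approach: train an autoencoder on normal sensor windows and use reconstruction error as the anomaly score. All names and sizes are illustrative, written against the Chainer 1.x API featured later in this deck.

```python
# Illustrative sketch (not PFN's actual model): autoencoder-based
# anomaly scoring for fixed-length sensor windows.
import numpy as np
import chainer
import chainer.functions as F
import chainer.links as L

class AE(chainer.Chain):
    def __init__(self, n_in=128, n_hidden=16):
        super(AE, self).__init__(
            enc=L.Linear(n_in, n_hidden),   # compress a sensor window
            dec=L.Linear(n_hidden, n_in),   # reconstruct it
        )
    def __call__(self, x):
        return self.dec(F.relu(self.enc(x)))

def anomaly_score(model, window):
    # Higher reconstruction error = less like the normal data the
    # autoencoder was trained on; compare against a threshold.
    x = window.astype(np.float32)[None, :]
    return float(F.mean_squared_error(model(x), x).data)
```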
  9. Life Science
  10. The National Cancer Center in Japan and Preferred Networks start collaborative research in deep learning
  11. Accuracy of breast cancer diagnosis: mammography (SOTA) 80%, liquid biopsy (SOTA) 90%, liquid biopsy with deep learning 99%.
  12. Art Creator
  13. Random sampling of images using a GAN [2015]
  14. PaintsChainer (#PaintsChainer)
     • GAN training; U-Net + super-resolution
     • Released Jan. 2017; has already painted about one million line drawings
     • A much cooler new version will be released soon
     http://free-illustrations.gatag.net/2014/01/10/220000.html
  15. PaintsChainer • Tweet from @munashihc
  16. Technologies
  17. Chainer: flexible deep learning framework
     • https://github.com/pfnet/chainer
     • 113 contributors
     • 2,473 stars & 639 forks
     • 8,804 commits
     • Active development and releases: v1.0.0 (June 2015) to v1.23.0 (May 2017)
     Original developer: Seiya Tokui
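What "flexible" means in practice is Chainer's define-by-run style: the computation graph is recorded while ordinary Python executes, so control flow can depend on data. A minimal sketch, assuming the Chainer 1.x API (v1.23 era) named on the slide:

```python
# Minimal define-by-run example in Chainer 1.x style.
import numpy as np
import chainer
import chainer.functions as F
import chainer.links as L

class MLP(chainer.Chain):
    def __init__(self):
        super(MLP, self).__init__(
            l1=L.Linear(4, 10),    # input -> hidden
            l2=L.Linear(10, 3),    # hidden -> output logits
        )
    def __call__(self, x):
        h = F.relu(self.l1(x))    # the graph is built during this forward pass
        return self.l2(h)

model = MLP()
x = np.random.rand(8, 4).astype(np.float32)       # toy batch
t = np.random.randint(0, 3, size=8).astype(np.int32)  # toy labels
loss = F.softmax_cross_entropy(model(x), t)
model.cleargrads()
loss.backward()     # gradients follow the graph recorded at run time
```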
  18. ChainerRL: deep reinforcement learning library [2016]
     • Implements various SOTA deep RL algorithms
     • Users can quickly try Atari 2600 and OpenAI Gym tasks; see the sketch below
     Developer: Yasuhisa Fujita
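A hedged sketch of that workflow, loosely following the ChainerRL quickstart of the era; the constructor arguments and hyperparameters here are illustrative assumptions, not PFN's settings:

```python
# Sketch: training a DQN agent on an OpenAI Gym task with ChainerRL.
import gym
import numpy as np
import chainer
import chainerrl

env = gym.make('CartPole-v0')
obs_size = env.observation_space.shape[0]
n_actions = env.action_space.n

# Fully connected Q-function over discrete actions.
q_func = chainerrl.q_functions.FCStateQFunctionWithDiscreteAction(
    obs_size, n_actions, n_hidden_channels=50, n_hidden_layers=2)
opt = chainer.optimizers.Adam(eps=1e-2)
opt.setup(q_func)

agent = chainerrl.agents.DQN(
    q_func, opt,
    replay_buffer=chainerrl.replay_buffer.ReplayBuffer(capacity=10 ** 5),
    gamma=0.99,
    explorer=chainerrl.explorers.ConstantEpsilonGreedy(
        epsilon=0.1, random_action_func=env.action_space.sample),
    replay_start_size=500, update_interval=1, target_update_interval=100,
    phi=lambda x: x.astype(np.float32, copy=False))  # cast observations

for episode in range(200):
    obs, reward, done = env.reset(), 0.0, False
    while not done:
        action = agent.act_and_train(obs, reward)   # epsilon-greedy + learning
        obs, reward, done, _ = env.step(action)
    agent.stop_episode_and_train(obs, reward, done)
```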
  19. To process this huge amount of data, we need to apply parallel computing to deep learning
  20. ChainerMN: scalable training of deep learning models. ChainerMN developer: Takuya Akiba
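A hedged sketch of ChainerMN's synchronous data-parallel pattern, following its published examples; `MLP` refers to the model sketched under slide 17, and `load_dataset` is a hypothetical helper standing in for any Chainer dataset loader:

```python
# Sketch: data-parallel training with ChainerMN.
# Launch with e.g. `mpiexec -n 8 python train.py`; one process per GPU.
import chainer
import chainermn

comm = chainermn.create_communicator()   # wraps MPI
device = comm.intra_rank                 # GPU id within this node

model = MLP()                            # any Chainer model (slide 17)
chainer.cuda.get_device_from_id(device).use()
model.to_gpu()

# Wrapping the optimizer turns every update into an all-reduce of
# gradients, so all workers hold identical parameters after each step.
optimizer = chainermn.create_multi_node_optimizer(
    chainer.optimizers.Adam(), comm)
optimizer.setup(model)

# Each worker trains on its own shard of the dataset.
train = chainermn.scatter_dataset(load_dataset(), comm)  # load_dataset: hypothetical
```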
  21. [Chart] Scaling results for CNTK, MXNet, TensorFlow, and Chainer
  22. [Chart] Validation accuracy vs. number of GPUs
  23. Future AI needs 100 Exa to 1 Zeta flops. Machine-generated data is much bigger than human-generated data. [Chart: estimated compute demand per application on a 10 PF to 100 EF axis (P: Peta, E: Exa, F: Flops):
     – Speech recognition: 10P+ Flops (5,000 hours of speech, 0.1 million hours of generated speech [Baidu 2015])
     – Life science: 100P to 1E Flops (10M SNPs per person; 100 PF for 1 million people, 1 EF for 100 million)
     – Image/video recognition: 10P (image) to 10E (video) Flops (100 million images)
     – Robotics/drones: 1E to 100E Flops (1 TB/device/year, 1 million to 100 million devices)
     – Autonomous driving: 1E to 100E Flops (1 TB/car/day, 10 to 1,000 cars, 100 days)]
     These estimates are based on the rule of thumb that finishing training on 1 GB of data within one day requires 1 TFlops.
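A quick back-of-envelope check of that rule of thumb against the autonomous-driving row (1 TB/car/day, 10 to 1,000 cars, 100 days), which indeed lands on the 1E to 100E Flops range shown in the chart:

```python
# Rule of thumb from the slide: training on 1 GB within 1 day ~ 1 TFlops.
GB = 1
TB = 1000 * GB
for cars in (10, 1000):
    data_gb = 1 * TB * cars * 100   # total data: 1 TB/car/day x cars x 100 days
    tflops = data_gb                # 1 GB of data per day ~ 1 TFlops
    print(f"{cars} cars -> {tflops:.0e} TFlops = {tflops / 1e6:.0f} EFlops")
# 10 cars   -> 1e+06 TFlops = 1 EFlops
# 1000 cars -> 1e+08 TFlops = 100 EFlops
```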
  24. Computing infrastructure
     • PFN's current infrastructure
       – >1,000 GPUs, ~10 PFlops, connected by InfiniBand, as of 2Q 2017
       – Still not enough for current R&D demand: unsupervised learning, learning from video, RL
     • We are developing a new chip specialized for DL ops
       – A super power-efficient chip enabling ~1 Peta DL ops per chip
       – We plan to build a cluster capable of 1 Exa DL ops by 2019
     • Since the brain has ~1 Zeta Flops*1, we need still more resources
       – We expect to have such a cluster by 2034
       – This is optimistic, but we expect several new technologies to emerge
     *1 http://timdettmers.com/2015/07/27/brain-vs-deep-learning-singularity/
  25. Semi-supervised learning: Virtual Adversarial Training (VAT) [arXiv:1704.03976]
     • SOTA semi-supervised learning on CIFAR-10 and SVHN*
     Takeru Miyato
     * Experimental results including CIFAR-10 and SVHN are in preparation for submission.
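A hedged sketch of the VAT regularizer from arXiv:1704.03976, using a single power iteration to find the perturbation that most changes the model's prediction. It assumes a Chainer 2-style API and flat (N, D) inputs; `forward` is any model producing class logits:

```python
# Sketch of the virtual adversarial training (VAT) loss.
import numpy as np
import chainer
import chainer.functions as F

def cross_entropy(p, logits):
    # -sum p*log q has the same gradient w.r.t. the perturbation as KL(p||q)
    return F.sum(-p * F.log_softmax(logits)) / p.shape[0]

def vat_loss(forward, x, xi=1e-6, eps=2.0):
    with chainer.no_backprop_mode():
        p = F.softmax(forward(x)).data            # fixed "virtual label"
    # One power iteration: start from a random unit direction d.
    d = np.random.randn(*x.shape).astype(np.float32)
    d /= np.linalg.norm(d, axis=1, keepdims=True) + 1e-12
    d = chainer.Variable(xi * d)
    cross_entropy(p, forward(x + d)).backward()   # gradient flows into d
    g = d.grad                                    # steepest-change direction
    r_adv = eps * g / (np.linalg.norm(g, axis=1, keepdims=True) + 1e-12)
    # Penalize how far the prediction moves under the adversarial perturbation.
    # (In real code, clear the model's grads before the supervised backward.)
    return cross_entropy(p, forward(x + r_adv))
```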
  26. IMSAT (VAT) [Hu and Miyato 2017]
     • IMSAT: VAT + an information-maximization criterion for unsupervised discrete coding
     • SOTA on unsupervised clustering and hash learning
     • Result from a 2016 summer internship
  27. Conclusion and future work
     • From recognition to planning, controlling, and creation
       – Deep learning was first used in recognition tasks but is now used for many different tasks
     • Future work
       – Increase data and computing resources significantly (x1000?)
         • Generate high-volume data in the real world (using robotics?)
         • New hardware and networks achieving 1 Zeta flops
       – Interpretability and controllability of AI systems in critical tasks
       – New ways to accumulate the knowledge obtained
         • New languages and communication for machines (and humans)
       – We can learn a lot from brain research
