
Efficient Deep Learning in Communications

This presentation covers deep learning models and their role in artificial intelligence and in today's systems and communications. It was presented at the ITU-T Workshop on Machine Learning for 5G, held at the ITU HQ in Geneva, Switzerland, on 29 January 2018. More information on this workshop can be found here: https://www.itu.int/en/ITU-T/Workshops-and-Seminars/20180129/Pages/default.aspx

Published in: Technology

  1. Title: Efficient Deep Learning in Communications. Dr. Wojciech Samek, Machine Learning Group, Fraunhofer Heinrich Hertz Institute (HHI), Einsteinufer 37, 10587 Berlin. www.hhi.fraunhofer.de
  2. Today's AI Systems: AlphaGo beats the human Go champion; deep nets outperform humans in image classification and in recognizing traffic signs; DeepStack beats professional poker players; computers out-play humans in "Doom"; deep nets achieve dermatologist-level classification of skin cancer; deep learning is revolutionizing radiology.
  3. Today's AI Systems rest on huge volumes of data, computing power, and powerful models: millions of labeled examples; huge models (up to 137 billion parameters and 1001 layers); architectures adapted to images, speech, text, etc.; highly parallel processing; large power consumption (600 watts per GPU card). Communications settings are often different.
  4. ML in Communications: application areas such as satellite communications, autonomous driving, smartphones, the Internet of Things, 5G networks, and smart data bring many additional requirements: small size, efficient execution, low energy consumption, etc.
  5. ML in Communications: distributed settings, large nonstationarity, restricted resources, communication costs, interoperability, security and privacy, interpretability, trustworthiness, etc. We need ML techniques adapted to communications. But it is not only about the algorithms; also needed are protocols, data formats, frameworks, mechanisms, etc.
  6. Problem 1: Restricted resources. DNNs have millions of weight parameters, which means large model size, energy-hungry training and inference, and many floating-point operations. Much recent work compresses neural networks by weight quantization.
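The slides do not spell out a quantization scheme; as a minimal sketch, uniform 8-bit quantization (matching the "simple quantization (8 bit)" baseline on slide 10) maps each float weight to one of 256 levels, storing a byte per weight plus a scale and offset:

```python
# Minimal sketch of uniform 8-bit weight quantization. Illustrative
# assumption only; the RD-theory-based scheme discussed later is more elaborate.

def quantize_8bit(weights):
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0   # fall back to 1.0 if all weights are equal
    codes = [round((w - lo) / scale) for w in weights]  # integers in 0..255
    return codes, scale, lo

def dequantize(codes, scale, lo):
    # Reconstruct approximate float weights from the byte codes.
    return [c * scale + lo for c in codes]

w = [-0.51, 0.0, 0.27, 1.02]
codes, scale, lo = quantize_8bit(w)
w_hat = dequantize(codes, scale, lo)
# Reconstruction error is bounded by half a quantization step.
assert all(abs(a - b) <= scale / 2 + 1e-12 for a, b in zip(w, w_hat))
```

Storing one byte instead of four per weight is what shrinks VGG-16 from 553 MB to roughly 138 MB in the table on slide 10.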
  7. Problem 1: Restricted resources. The compressed sparse row (CSR) format reduces storage and enables fast multiplications. Can we do better? Quantization, viewed through rate-distortion theory.
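As a minimal sketch of the CSR idea: only the nonzero weights are stored, together with their column indices and per-row pointers, so a matrix-vector product touches only the stored entries. (Illustrative only; real deployments use optimized libraries such as scipy.sparse.)

```python
# Compressed sparse row (CSR): store nonzeros, their column indices,
# and row pointers delimiting each row's slice of the value array.

def to_csr(dense):
    values, col_idx, row_ptr = [], [], [0]
    for row in dense:
        for j, w in enumerate(row):
            if w != 0.0:
                values.append(w)
                col_idx.append(j)
        row_ptr.append(len(values))
    return values, col_idx, row_ptr

def csr_matvec(values, col_idx, row_ptr, x):
    # Sparse matrix-vector product: skips all zero entries.
    y = []
    for r in range(len(row_ptr) - 1):
        s = 0.0
        for k in range(row_ptr[r], row_ptr[r + 1]):
            s += values[k] * x[col_idx[k]]
        y.append(s)
    return y

W = [[0.0, 2.0, 0.0],
     [0.0, 0.0, 0.0],
     [1.0, 0.0, 3.0]]
vals, cols, ptr = to_csr(W)
print(vals, cols, ptr)   # [2.0, 1.0, 3.0] [1, 0, 2] [0, 1, 1, 3]
print(csr_matvec(vals, cols, ptr, [1.0, 1.0, 1.0]))  # [2.0, 0.0, 4.0]
```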
  8. Problem 1: Restricted resources. RD-theory-based weight quantization does not necessarily lead to sparse matrices, but it yields a weight-sharing property: subsets of connections share the same weight value, which a rewriting trick can exploit.
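The slides do not give the rewriting trick explicitly; a plausible minimal sketch of the idea is: when a row contains only a few distinct weight values, first sum the inputs that share each value, then perform a single multiplication per distinct value rather than one per connection. (A reconstruction for illustration; the details of the HHI format are not in the slides.)

```python
# Sketch of the weight-sharing rewriting trick: one multiplication per
# distinct weight value instead of one per connection.

def shared_weight_matvec(rows, x):
    # rows: one dict per output neuron, {weight_value: [input indices]}.
    y = []
    for groups in rows:
        s = 0.0
        for w, idxs in groups.items():
            s += w * sum(x[j] for j in idxs)  # sum first, multiply once
        y.append(s)
    return y

# The dense row [0.5, 0.5, -1.0, 0.5] has only two distinct values:
rows = [{0.5: [0, 1, 3], -1.0: [2]}]
print(shared_weight_matvec(rows, [1.0, 2.0, 3.0, 4.0]))  # [0.5]
```

With heavily quantized weights the number of distinct values per row is tiny, which is why slide 10 reports the WS format needing far fewer operations (16666 M vs. 29472 M) than the sparse format.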
  9. Problem 1: Restricted resources. The resulting weight-sharing (WS) format is more efficient than CSR.
  10. Problem 1: Restricted resources. Results for VGG-16 (for scale, an iPhone 8 battery holds about 25 kJ):
     - Baseline VGG-16: size 553 MB, acc. 68.73 %, 30940 M ops, 71 mJ
     - Simple quantization (8 bit): size 138 MB, acc. 68.52 %, 30940 M ops, 68 mJ
     - Sparse format: size 773 (217) MB, acc. 68.52 %, 29472 M ops, 65 mJ
     - WS format: size 247 (99) MB, acc. 68.52 %, 16666 M ops, 36 mJ
     - State-of-the-art compression + sparse format: size 17.8 MB, acc. 68.83 %, 10081 M ops, 22 mJ
     - State-of-the-art compression + WS format: size 12.8 MB, acc. 68.83 %, 7225 M ops, 16 mJ
  11. Problem 2: Interpretability. Deep nets are black boxes; explanations are needed to verify the system, understand its weaknesses, address legal aspects, and learn new strategies.
  12. Problem 2: Interpretability. Theoretical interpretation: (deep) Taylor decomposition of the neural network opens the black box.
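The slides name deep Taylor decomposition without formulas; a minimal sketch of the closely related layer-wise relevance propagation (LRP) z+ rule for one dense ReLU layer is shown below. Each input inherits relevance in proportion to its positive contribution to each neuron's activation, so total relevance is conserved. (A reconstruction under that assumption, not the authors' exact method.)

```python
# Sketch of relevance redistribution in the spirit of deep Taylor
# decomposition / LRP (z+ rule) for a single dense ReLU layer.

def lrp_zplus(W, x, relevance_out):
    # W[j][i]: weight from input i to neuron j; x: non-negative activations.
    n_out, n_in = len(W), len(x)
    # Positive contributions z+_ji = max(w_ji, 0) * x_i
    z = [[max(W[j][i], 0.0) * x[i] for i in range(n_in)] for j in range(n_out)]
    z_sum = [sum(z[j]) or 1e-9 for j in range(n_out)]  # avoid division by zero
    # Each input gets relevance proportional to its share of each neuron's z+.
    return [
        sum(relevance_out[j] * z[j][i] / z_sum[j] for j in range(n_out))
        for i in range(n_in)
    ]

W = [[1.0, -2.0, 0.5],
     [0.0,  1.0, 1.0]]
x = [1.0, 2.0, 2.0]
R = lrp_zplus(W, x, relevance_out=[1.0, 1.0])
print(R)  # sums to 2.0: relevance is conserved across the layer
```

Applied layer by layer from the output back to the input, this is what produces the heatmaps in the following slides, e.g. which pixels speak for or against classifying a digit as "3".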
  13. Problem 2: Interpretability. Example: explaining what in an image supports classification as cat, rooster, or dog.
  14. Problem 2: Interpretability. Example: what speaks for or against classification as "3", and what speaks for or against classification as "9".
  15. Problem 2: Interpretability. (Figure-only slide.)
  16. Problem 2: Interpretability. Example: explaining age predictions (25-32 years old vs. 60+ years old).
  17. Problem 2: Interpretability. (Figure-only slide.)
  18. Conclusion: bringing ML to communications comes with new challenges; AI systems may behave differently than expected; there is a need for best practices and recommendations (protocols, formats, etc.).
  19. Thank you for your attention. Questions? All our papers are available at http://iphome.hhi.de/samek. Acknowledgements: Simon Wiedemann (HHI), Klaus-Robert Müller (TUB), Grégoire Montavon (TUB), Sebastian Lapuschkin (HHI), Leila Arras (HHI), …
