
DIANNE - A distributed deep learning framework on OSGi - Tim Verbelen


OSGi Community Event 2016 Presentation by Tim Verbelen (iMinds / imec)

With the current explosion of IoT devices connected to the Internet, the biggest challenge in the near future is how to process and analyze all this generated data, making use of the highly distributed compute infrastructure at hand. A promising approach for data analysis is deep learning, using brain-inspired neural networks for feature extraction and detection. In our research lab, we have developed DIANNE, an OSGi-based framework for creating, deploying and training artificial neural networks in a modular way. Benefiting from OSGi modularity, we can easily distribute (parts of) the neural networks among cloud and edge devices.



  1. 1. Tim Verbelen – imec, Ghent University DIANNE, A DISTRIBUTED DEEP LEARNING FRAMEWORK ON OSGI
  2. 2. SMART? THE FALLACIES OF THE SMART EVERYTHING... CONNECTED!
  3. 3. A TRULY SMART SYSTEM IS ABLE TO LEARN Image captioning (Vinyals et al) Speech synthesis (van den Oord et al) Speech recognition systems AlphaGo (Silver et al.) Robot tasks (Abbeel et al)
  4. 4. MACHINE LEARNING WITH DEEP NEURAL NETWORKS 4 Neural networks are trained to approximate any function Difficulties: Requires a lot of computation Requires a huge amount of data GPU / CLOUD COMPUTING INTERNET OF THINGS
  5. 5. WHERE TO DEPLOY? 5 DISTRIBUTED INFRASTRUCTURE REQUIRES DISTRIBUTED SOLUTION IoT gateway Cloud Sensor/Edge nodes End-user devices
  6. 6. KEY PRINCIPLES
  7. 7. MODULARITY 7 DECOUPLE INTERFACE FROM IMPLEMENTATION USING SERVICES
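The service pattern on this slide can be sketched in plain Java. The names below (Module, ScaleModule) are illustrative, not DIANNE's actual API; the point is that the consumer depends only on the interface, so the implementation can be swapped without recompiling it. In OSGi, the implementation would additionally be registered as a service, e.g. with a Declarative Services @Component annotation.

```java
// Published contract, typically exported from an API bundle.
interface Module {
    float[] forward(float[] input);
}

// Implementation, kept in a separate bundle. In OSGi it would be
// registered as a Module service instead of constructed directly.
class ScaleModule implements Module {
    private final float factor;

    ScaleModule(float factor) {
        this.factor = factor;
    }

    @Override
    public float[] forward(float[] input) {
        float[] out = new float[input.length];
        for (int i = 0; i < input.length; i++) {
            out[i] = input[i] * factor; // scale each element
        }
        return out;
    }
}

public class ServiceDemo {
    public static void main(String[] args) {
        // In OSGi this reference would come from the service registry.
        Module m = new ScaleModule(2f);
        float[] y = m.forward(new float[]{1f, 2f});
        System.out.println(y[0] + " " + y[1]); // prints "2.0 4.0"
    }
}
```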
  8. 8. MODULARITY IN DIANNE (1) 8 MODULES ARE BASIC BUILDING BLOCKS FOR DEFINING NEURAL NETWORK
  9. 9. MODULARITY IN DIANNE (2) 9 MODULAR BUILDING BLOCKS FOR LEARNING AND EVALUATING NEURAL NETWORKS Neural Network Repository Dataset Learner Evaluator Agent
  10. 10. MODULARITY IN DIANNE (3) 10 USE THE MODULES AS LEGO BLOCKS TO BUILD YOUR NEURAL NETWORK
  11. 11. DEMO TOUR
  12. 12. WHY OSGI?
  13. 13. REMOTE SERVICES 13 A B RSA RSA exportServiceimportService CALLING OSGI SERVICES ON DISTRIBUTED INFRASTRUCTURE
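With Remote Service Admin, a service is marked for export via standard service properties; the RSA implementation then creates a remote endpoint for it. The helper below is a small sketch of building those properties — `service.exported.interfaces` is defined by the OSGi Remote Services specification, while the method name and the interface string in `main` are illustrative:

```java
import java.util.Dictionary;
import java.util.Hashtable;

public class RemoteExportDemo {

    // Builds the standard Remote Services property that tells an RSA
    // implementation which interfaces of a service to export remotely.
    // With no arguments, "*" exports all interfaces of the service.
    static Dictionary<String, Object> exportProps(String... interfaces) {
        Dictionary<String, Object> props = new Hashtable<>();
        props.put("service.exported.interfaces",
                interfaces.length == 0 ? "*" : String.join(",", interfaces));
        return props;
    }

    public static void main(String[] args) {
        Dictionary<String, Object> props = exportProps();
        // In a bundle, these properties would accompany the registration:
        // context.registerService(NeuralNetwork.class, impl, props);
        System.out.println(props.get("service.exported.interfaces")); // prints "*"
    }
}
```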
  14. 14. DTOS: SPLITTING DATA FROM BEHAVIOR

      Data (DTO):
          public class NeuralNetworkInstanceDTO {
              public UUID id;
              public String description;
              public String name;
              public Map<UUID, ModuleInstanceDTO> modules;
          }

      Behavior (service):
          public interface NeuralNetwork {
              UUID getId();
              NeuralNetworkInstanceDTO getNeuralNetworkInstance();
              Promise<NeuralNetworkResult> forward(...);
              Promise<NeuralNetworkResult> backward(...);
              ...
          }
  15. 15. DTOS (2): GREAT FOR CONFIGURATION

          public class SGDConfig {
              public float learningRate = 0.01f;
              public float minLearningRate = 0.0f;
              public float decayRate = 0.0f;
              …
          }

          Map<String, String> config = …;
          SGDConfig c = DianneConfigHandler.getConfig(config, SGDConfig.class);
          SGDProcessor p = new SGDProcessor(c);

      How to provide configuration to objects? Constructor/method parameters? DTO? Map<String, String>? For configuring services, use ConfigAdmin.
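A helper like DianneConfigHandler.getConfig can be imagined as a small reflection utility that copies string values onto the public fields of a config DTO, leaving defaults in place for unset keys. The sketch below is a hypothetical reimplementation of that idea, not DIANNE's actual code:

```java
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch in the spirit of DianneConfigHandler: map a
// Map<String, String> onto the public fields of a config DTO.
public class ConfigDemo {

    public static class SGDConfig {
        public float learningRate = 0.01f;
        public float minLearningRate = 0.0f;
        public float decayRate = 0.0f;
    }

    public static <T> T getConfig(Map<String, String> values, Class<T> type) {
        try {
            T config = type.getDeclaredConstructor().newInstance();
            for (Field f : type.getFields()) {
                String v = values.get(f.getName());
                if (v == null) continue; // keep the DTO's default value
                if (f.getType() == float.class) f.setFloat(config, Float.parseFloat(v));
                else if (f.getType() == int.class) f.setInt(config, Integer.parseInt(v));
                else if (f.getType() == boolean.class) f.setBoolean(config, Boolean.parseBoolean(v));
                else if (f.getType() == String.class) f.set(config, v);
            }
            return config;
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        Map<String, String> raw = new HashMap<>();
        raw.put("learningRate", "0.1"); // unset fields keep their defaults
        SGDConfig c = getConfig(raw, SGDConfig.class);
        System.out.println(c.learningRate + " " + c.decayRate); // prints "0.1 0.0"
    }
}
```

The appeal of the DTO approach is that the config class itself documents the available options and their defaults, while the caller can still pass plain string maps (e.g. from the command line or HTTP parameters).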
  16. 16. NATIVE LIBRARIES: FRAGMENTS PROVIDE THE RIGHT NATIVE LIB FOR YOUR PLATFORM — Tensors reside only in native (GPU) memory. Build a fragment with a Bundle-NativeCode header for each supported platform; at runtime, the right fragment resolves and is attached to the host bundle, providing the Java objects with native methods. Caveat: the correct fragment for GPU support still has to be selected manually (it is not automatically provided as a capability).
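A fragment manifest along these lines (the bundle symbolic names and library paths are illustrative, not DIANNE's actual ones) lets the OSGi framework select the matching native library when the fragment resolves against its host:

```manifest
Bundle-ManifestVersion: 2
Bundle-SymbolicName: example.tensor.native
Fragment-Host: example.tensor
Bundle-NativeCode: lib/linux-x86_64/libtensor.so; osname=Linux; processor=x86-64, lib/linux-arm/libtensor.so; osname=Linux; processor=arm, *
```

The trailing `*` clause makes the native code optional, so the fragment can still resolve on platforms without a matching library.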
  17. 17. OVERHEAD? 17
  18. 18. PROMISES — DIANNE neural network modules are executed asynchronously; NeuralNetwork provides an API with promises:

          final NeuralNetwork nn = ...;
          Tensor batch = loadBatch();
          Tensor nextBatch;
          while (learning) {
              // do forward-backward pass during training
              Promise result = nn.forward(batch, …).then(p -> {
                  Tensor output = p.getValue().tensor;
                  Tensor grad = calculateGradient(output);
                  return nn.backward(grad);
              }).then(…);
              nextBatch = loadBatch(); // already load next batch
              result.getValue();       // wait for async completion
              …                        // flip nextBatch and batch
          }
  19. 19. CONCIERGE 19 A small footprint OSGi core implementation All demos shown during this presentation were running on Concierge Learn more tomorrow: “Getting to the Next Level with Eclipse Concierge” - Schubartsaal, 17h45
  20. 21. THANK YOU! http://dianne.intec.ugent.be/ — git clone https://github.com/ibcn-cloudlet/dianne.git
