For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2022/06/tensorflow-lite-for-microcontrollers-tflm-recent-developments-a-presentation-from-bdti-and-google/
David Davis, Senior Embedded Software Engineer, and John Withers, Automation and Systems Engineer, both of BDTI, present the “TensorFlow Lite for Microcontrollers (TFLM): Recent Developments” tutorial at the May 2022 Embedded Vision Summit.
TensorFlow Lite Micro (TFLM) is a generic inference framework designed to run TensorFlow models on digital signal processors (DSPs), microcontrollers, and other embedded targets with small memory footprints and very low power usage. TFLM aims to be easily portable to a variety of embedded targets, from those running an RTOS to bare-metal code. It leverages the model optimization tools of the TensorFlow ecosystem and adds embedded-specific optimizations to further reduce the memory footprint. TFLM also integrates a number of community-contributed, hardware-specific optimized kernel implementations.
In this talk, Davis and Withers review the collaboration between BDTI and Google over the past year, including porting nearly two dozen operators from TensorFlow Lite to TFLM, creating a separate Arduino examples repository, improving the testing and documentation of both the Arduino and Colab training examples, and transitioning TFLM's open-source CI framework to GitHub Actions.