These are the slides used at the "Sony Neural Network Console Deep-Dive Seminar: What Can It Do? How Do You Use It? Ask Your Questions!" held on December 6, 2017.
This is a talk at AI Nextcon Seattle on Feb 12, 2020.
An overview of TensorFlow Lite and various resources to help you deploy TFLite models to mobile and edge devices. It walks through an end-to-end on-device ML example: training a model from scratch, converting it to TFLite, and deploying it.
The content was adapted from the Google Content Group.
Eric ShangKuan (ericsk@google.com)
---
TensorFlow Lite guide (for mobile & IoT)
TensorFlow Lite is a set of tools to help developers run TensorFlow models on mobile, embedded, and IoT devices. It enables on-device machine learning inference with low latency and small binary size.
TensorFlow Lite consists of two main components:
The TensorFlow Lite interpreter:
- runs specially optimized models on many different hardware types, including mobile phones, embedded Linux devices, and microcontrollers.
The TensorFlow Lite converter:
- converts TensorFlow models into an efficient form for use by the interpreter, and can introduce optimizations to improve binary size and performance.
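The two components above fit together in a few lines of Python. The following is a minimal sketch (the tiny Keras model and its shapes are illustrative stand-ins, not the talk's actual model): the converter turns a TensorFlow model into a `.tflite` flatbuffer, and the interpreter then runs it.

```python
import numpy as np
import tensorflow as tf  # TF 2.x

# A trivial stand-in model; in practice this is the model you trained.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])

# Converter: TensorFlow model -> efficient .tflite flatbuffer.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # e.g. weight quantization
tflite_model = converter.convert()

# Interpreter: run the converted model, as it would run on-device.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp["index"], np.zeros((1, 8), dtype=np.float32))
interpreter.invoke()
result = interpreter.get_tensor(out["index"])
```

On Android/iOS the same flatbuffer is loaded by the platform TFLite runtime instead of the Python interpreter.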
---
Event: PyLadies TensorFlow All-Around
Date: Sep 25, 2019
Event link: https://www.meetup.com/PyLadies-Berlin/events/264205538/
Linkedin: http://linkedin.com/in/mia-chang/
TinyML: Machine Learning for Microcontrollers - Robert John
My presentation at the TensorFlow User Groups Sub-Saharan Africa Summit discusses machine learning for embedded devices, why it matters, and the challenges it faces.
syzkaller is an unsupervised, coverage-guided Linux syscall fuzzer.
The presentation covers the basics of how the fuzzer operates, gives a tutorial on how to run it, and shows how to extend it to fuzz new drivers.
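For context, running syzkaller boils down to pointing `syz-manager` at a JSON config describing the kernel and VMs to fuzz. A minimal QEMU-based sketch might look like the following (all paths are placeholders for your own kernel build, disk image, and SSH key):

```json
{
    "target": "linux/amd64",
    "http": "127.0.0.1:56741",
    "workdir": "./workdir",
    "kernel_obj": "/path/to/linux",
    "image": "/path/to/image.img",
    "sshkey": "/path/to/image.id_rsa",
    "syzkaller": "/path/to/syzkaller",
    "procs": 8,
    "type": "qemu",
    "vm": {
        "count": 4,
        "kernel": "/path/to/linux/arch/x86/boot/bzImage",
        "cpu": 2,
        "mem": 2048
    }
}
```

The fuzzer is then started with `./bin/syz-manager -config=my.cfg`, and progress is visible on the configured HTTP port.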
In a series of announcements that left more than 1,200 gamers gathered in Cologne alternately breathless, giddy with laughter, and shouting their enthusiasm, Jensen Huang introduced the GeForce RTX series of gaming processors, representing the biggest leap in performance in NVIDIA’s history.
A presentation explaining the Linux and Free and Open Source software ecosystem and the various challenges it faces from a distribution vendor's point of view: ISV attraction, hardware compatibility, and more. This one-off presentation was given to the Canonical sales team in 2007.
ABCI: AI Bridging Cloud Infrastructure for Scalable AI/Big Data - Hitoshi Sato
Presentation slides for ExaComm 2018, the Fourth International Workshop on Communication Architectures for HPC, Big Data, Deep Learning and Clouds at Extreme Scale, held in conjunction with the International Supercomputing Conference (ISC 2018)
http://nowlab.cse.ohio-state.edu/exacomm/
HKG18-115 - Partitioning ARM Systems with the Jailhouse Hypervisor - Linaro
Session ID: HKG18-115
Session Name: HKG18-115 - Partitioning ARM Systems with the Jailhouse Hypervisor
Speaker: Jan Kiszka
Track: Security
★ Session Summary ★
The open source hypervisor Jailhouse provides hard partitioning of multicore systems to co-locate multiple Linux or RTOS instances side by side. It aims at low complexity and minimal footprint to achieve deterministic behavior and enable certifications according to safety or security standards. In this session, we would like to look at the ARM-specific status of Jailhouse and discuss applications, to-dos and possible collaborations around it with the ARM community. The session is intended to be half presentation, half Q&A / discussion.
---------------------------------------------------
★ Resources ★
Event Page: http://connect.linaro.org/resource/hkg18/hkg18-115/
Presentation: http://connect.linaro.org.s3.amazonaws.com/hkg18/presentations/hkg18-115.pdf
Video: http://connect.linaro.org.s3.amazonaws.com/hkg18/videos/hkg18-115.mp4
---------------------------------------------------
★ Event Details ★
Linaro Connect Hong Kong 2018 (HKG18)
19-23 March 2018
Regal Airport Hotel Hong Kong
---------------------------------------------------
Keyword: Security
http://www.linaro.org
http://connect.linaro.org
---------------------------------------------------
Follow us on Social Media
https://www.facebook.com/LinaroOrg
https://www.youtube.com/user/linaroorg?sub_confirmation=1
https://www.linkedin.com/company/1026961
Getting Ready to Use Redis with Apache Spark with Dvir Volk - Spark Summit
Getting Ready to Use Redis with Apache Spark is a technical tutorial designed to address integrating Redis with an Apache Spark deployment to increase the performance of serving complex decision models. To set the context for the session, we start with a quick introduction to Redis and the capabilities it provides. We cover the basic data types provided by Redis and the module system. Using an ad-serving use case, we look at how Redis can improve the performance and reduce the cost of using complex ML models in production. Attendees will be guided through the key steps of setting up and integrating Redis with Spark, including how to train a model using Spark, then load and serve it using Redis, as well as how to work with the Spark-Redis module. The capabilities of the Redis Machine Learning Module (redis-ml) will be discussed, focusing primarily on decision trees and regression (linear and logistic), with code examples to demonstrate how to use these features. At the end of the session, developers should feel confident building a prototype/proof-of-concept application using Redis and Spark. Attendees will understand how Redis complements Spark and how to use Redis to serve complex ML models with high performance.
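The serving pattern described above, that is, training in Spark and scoring from Redis, can be sketched in miniature. The following is a simplified illustration, not the redis-ml API: the coefficients of a trained logistic-regression model are stored in a Redis hash and the score is computed client-side. The key name, coefficient values, and features are all hypothetical, and the Redis round-trip is shown only in comments so the sketch runs standalone.

```python
import math

# Coefficients a Spark-trained logistic-regression model might export.
# In a real deployment these would live in a Redis hash, roughly:
#   r = redis.Redis(host="localhost")                 # hypothetical connection
#   r.hset("model:ctr", mapping=coeffs)               # store once after training
#   coeffs = {k.decode(): float(v)
#             for k, v in r.hgetall("model:ctr").items()}
coeffs = {"bias": -1.5, "age": 0.04, "clicks": 0.3}

def score(features):
    """Logistic-regression score using coefficients fetched from Redis."""
    z = coeffs["bias"] + sum(coeffs[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

p = score({"age": 30, "clicks": 2})  # probability in (0, 1)
```

redis-ml goes further by evaluating the model inside the Redis server, which removes the per-request coefficient fetch entirely.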
Simplifying AI Infrastructure: Lessons in Scaling on DGX Systems - Renee Yao
Simplifying AI Infrastructure: Lessons in Scaling on DGX Systems, the world's most powerful AI systems. This is a presentation I gave at GTC Israel in 2018.
D2L Brightspace Vendor Integrations: Technology and Terminology - D2L Barry
Presentation at 2019 D2L Connection at Normandale CC on April 5, 2019
D2L Brightspace Vendor Integrations: Technology and Terminology - Jonathan Werth, Minnesota State Colleges and Universities System Office
Software update for embedded systems - ELCE 2014 - Stefano Babic
Nowadays, updating an embedded system is a mandatory feature, and not only for security reasons: bug fixes and new features become available after a product is released, and in many cases an update must be done in the field. My presentation shows the advantages and disadvantages of different update approaches (using a bootloader, a rescue system, etc.), taking into account the reliability requirements typical of embedded systems. The second part of the presentation covers the OSS project "SWupdate", which I started some months ago to provide a ready-to-use environment for updating, both locally and in the field, and in particular how the project can be used with Yocto.
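For context, SWupdate describes the contents of an update package with an sw-description file. A minimal sketch might look roughly like the following; the version string, image filename, and device path are placeholders, and the exact syntax can differ between SWupdate releases:

```
software =
{
    version = "1.0.0";

    images: (
        {
            filename = "rootfs.ext4";
            device = "/dev/mmcblk0p2";
            type = "raw";
        }
    );
}
```

The sw-description file and the referenced images are then packed into a single .swu archive that can be applied locally or delivered over the network.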
These are the slides used at the GPU TECHNOLOGY CONFERENCE held on December 12 and 13, 2017.
There are three ways to use a trained neural network:
• From Neural Network Libraries Python code (recommended)
• From the Neural Network Libraries CLI (uses Python) (easy)
• From Neural Network Libraries C++ (for compact deployment in products)
• https://github.com/sony/nnabla/tree/master/examples/cpp/mnist_runtime
python "(path of Neural Network Console)/libs/nnabla/python/src/nnabla/utils/cli/cli.py" forward
-c network definition file included in the training result folder (net.nntxt)
-p parameter file included in the training result folder (parameters.h5)
-d dataset CSV file of the input data
-o output folder for the inference results
1. In Neural Network Console, right-click the network to be used for inference and select Export > Python Code (NNabla)
2. Load parameters.h5 from the training result folder with the load_parameters command:
import nnabla as nn
nn.load_parameters('./parameters.h5')
3. With the parameters loaded in step 2, execute (forward) the network exported in step 1