Google’s TensorFlow has drawn considerable attention in recent months because it enables rapid machine learning development. In response, Google developed TensorFlow Lite, a solution for deploying intelligent applications on mobile and embedded devices. We will show how to use TensorFlow Lite to develop an app that transforms the style of pictures using a deep neural network. Style-transfer apps can turn summer into winter, horses into zebras, or a Monet into a Van Gogh. In this talk, we look at the rapid development of intelligent mobile device products, which may or may not have a lasting impact.
29-31. NEURAL STYLE-TRANSFER
3) Transforming features: the Content and Style features pass through Feature Transforms:
1) Whitening: keep only the shapes (extract shape)
2) Coloring: keep the colors (extract colors)
3) Stylization: controls how much shape comes from the content picture and how much from the style picture (content shape / style shape)
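The three feature transforms above can be sketched numerically. This is a minimal NumPy illustration of whitening-and-coloring-style transforms on flattened feature maps, assuming features of shape (channels, height*width); it is a sketch of the idea, not the app's actual implementation.

```python
import numpy as np

def whiten(f):
    """Whitening: strip the feature covariance (style statistics), keep shapes.
    f: (C, H*W) feature map flattened over spatial dimensions."""
    mu = f.mean(axis=1, keepdims=True)
    fc = f - mu
    cov = fc @ fc.T / (fc.shape[1] - 1)
    w, E = np.linalg.eigh(cov)          # eigendecomposition of the covariance
    w = np.clip(w, 1e-8, None)
    return E @ np.diag(w ** -0.5) @ E.T @ fc

def color(f_white, f_style):
    """Coloring: impose the style features' covariance and mean."""
    mu_s = f_style.mean(axis=1, keepdims=True)
    fs = f_style - mu_s
    cov = fs @ fs.T / (fs.shape[1] - 1)
    w, E = np.linalg.eigh(cov)
    w = np.clip(w, 0.0, None)
    return E @ np.diag(w ** 0.5) @ E.T @ f_white + mu_s

def stylize(f_content, f_style, alpha=0.6):
    """Stylization: alpha blends how much comes from style vs. content."""
    f_cs = color(whiten(f_content), f_style)
    return alpha * f_cs + (1 - alpha) * f_content
```

With `alpha=0` the content features pass through unchanged; with `alpha=1` they fully adopt the style statistics.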
35-37. TENSORFLOW LITE
Why on-device ML?
• Lower latency, no server calls
• Works offline
• Data stays on-device
• Power efficient
• All sensor data accessible on-device
38-42. TENSORFLOW LITE
Highlights
• Core operations tuned for mobile platforms
• FlatBuffers-based model file format
• On-device interpreter with kernels optimised for fast execution on mobile
• Small: the TF Lite interpreter is < 100 kB
• Java and C++ API support

The primary benefit of FlatBuffers (a serialization library for Java, C/C++, Python, and other languages) is that files can be memory-mapped and used directly from disk without being loaded and parsed. This gives much faster startup times, and gives the operating system the option of loading and unloading the required pages from the model file, instead of killing the app when it is low on memory.
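The memory-mapping mechanism behind this benefit can be shown with Python's standard `mmap` module. This is an illustrative sketch of the general technique, not TF Lite's actual loader; the file contents here are a stand-in for a real `.tflite` model.

```python
import mmap
import os
import tempfile

# Create a stand-in "model file" (in a real app this would be a .tflite file).
path = os.path.join(tempfile.mkdtemp(), "model.tflite")
with open(path, "wb") as f:
    f.write(b"TFL3" + bytes(1024))  # fake header plus payload

# Memory-map the file: bytes are paged in on demand rather than read and
# parsed up front, so "loading" is near-instant, and the OS can evict
# clean pages under memory pressure instead of killing the app.
with open(path, "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    header = mm[:4]  # read directly from the mapping, no full parse
    mm.close()

print(header)  # b'TFL3'
```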
44-46. TENSORFLOW LITE
Highlights
• Java API for convenience
• C++ API: loads the model and invokes the interpreter
• The interpreter executes the model using a set of kernels
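The "interpreter plus set of kernels" design can be sketched in a few lines. This toy Python model is purely illustrative (TF Lite's real interpreter is C++ and far more involved): an interpreter walks a flat list of ops and dispatches each one to a registered kernel.

```python
# Toy registry of kernels: one function per op type.
KERNELS = {
    "ADD": lambda a, b: [x + y for x, y in zip(a, b)],
    "MUL": lambda a, b: [x * y for x, y in zip(a, b)],
    "RELU": lambda a: [max(0.0, x) for x in a],
}

def run_model(ops, tensors):
    """Execute ops in order; each op is (name, input_tensor_ids, output_id)."""
    for name, inputs, output in ops:
        kernel = KERNELS[name]  # dispatch: look up the kernel for this op
        tensors[output] = kernel(*(tensors[i] for i in inputs))
    return tensors

# A tiny "model": t3 = relu(t0 + t1)
model = [("ADD", (0, 1), 2), ("RELU", (2,), 3)]
result = run_model(model, {0: [1.0, -5.0], 1: [2.0, 3.0]})
print(result[3])  # [3.0, 0.0]
```

Swapping a kernel implementation (e.g. for one tuned to a specific mobile CPU) changes how an op runs without touching the model or the interpreter loop, which is the point of the design.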
47. TENSORFLOW LITE
How to convert TF to TF Lite?
The new graph converter in TF Lite is called the TensorFlow Lite Optimizing Converter, aka TOCO. We will show how to call it in the demo shortly!