TF 2.0 is designed to improve usability and productivity. As an enthusiastic TF user, I am very excited. Personally, I think the most important question about usability is "how does TF provide a user-friendly API?" Setting aside the other aspects of TF 2.0, this post is a quick review from an API-usage perspective.
2. J. Kang Ph.D. presents
About me!
Jaewook Kang (Jae)
▪ GIST EEC Ph.D. (2015)
▪ Research Lead (~ May 2018)
▪ MoTLab Director (Jan 2018 ~ present)
▪ NLP/Company AI (Dec 2018 ~ present)
▪ I'm recently into:
▪ Mobile machine learning
▪ Natural language processing / chatbots!!
▪ TensorFlow / Google Cloud TPU
▪ Swimming!
3. OMG! TF-KR exceeds 40,000 users!!!!
- Founded in March 2016
- TF-KR is around 3 years old!!
👏👏👏👏👏
4. Types of TF 2.0 APIs:
- tf.keras
- tf.Estimator
- tf native (tf.* / tf.nn)
5. Types of APIs
❖ TF 1.x has too many high-level APIs, which causes confusion.
– tf.layers: builds various types of ML layers
– tf.estimator: trains and evaluates models
• Pre-made Estimators: model_fn() is provided for you.
• Custom Estimators: model_fn() is written by the user.
7. tf.keras is now part of the core TF API
- tf.keras (since v1.4, Nov 2017)
8. Types of APIs
❖ TF 1.x high-level APIs (continued):
– tf.keras: Keras syntax with native TF bindings
• from easy layer definition
• to easy training and evaluation
9. tf.keras: Major API sets in TF2.0
TensorFlow 1.x coding style (slide credit: Google Brain):

x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
pred_y = tf.nn.softmax(tf.matmul(x, W) + b)
...
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    tf.train.start_queue_runners(sess)
    example_batch = tf.train.batch([x], batch_size=10, num_threads=4, capacity=10)
    max_steps = 1000
    for step in range(max_steps):
        x_in = sess.run(example_batch)
        sess.run(train_step, feed_dict={x: train_data, y: train_labels})
10. tf.keras: Major API sets in TF2.0
TensorFlow 2.0 coding style (slide credit: Google Brain):

mnist = dataset.get('image_mnist')
model = tf.keras.Sequential([
    tf.keras.layers.Input(784,),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              distribute=tf.distribute.MirroredStrategy())
model.fit(mnist[TRAIN]['image'], mnist[TRAIN]['target'], epochs=5)
model.evaluate(mnist[EVAL]['image'], mnist[EVAL]['target'])
11. tf.keras: Major API sets in TF2.0
❖ "tf.keras" is fully compatible with the TF framework
– Use tf.data with tf.keras
– Export models in the SavedModel format
– Build your custom model with the subclassing API
– Very simple distribution strategies for TPU, multi-GPU, and multi-node training
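The first bullet can be sketched in a few lines. A minimal, hedged example of feeding tf.keras from a tf.data pipeline, using random stand-in data instead of real MNIST:

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in data (784-dim inputs, 10 classes), just to show
# that a tf.data pipeline feeds tf.keras directly.
x = np.random.rand(128, 784).astype("float32")
y = np.random.randint(0, 10, size=(128,))
ds = tf.data.Dataset.from_tensor_slices((x, y)).shuffle(128).batch(32)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# model.fit() accepts the Dataset object in place of numpy arrays.
model.fit(ds, epochs=1, verbose=0)
pred = model.predict(x[:5], verbose=0)
```

The same model object could then be exported as a SavedModel, or rewritten with the subclassing API for custom behavior.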
12. Types of APIs
❖ How is it rearranged in TF 2.0?
– tf.layers → tf.keras.layers
– tf.Estimator → tf.keras as well (except some pre-made Estimators)
13. Types of APIs
❖ TF 1.x has too many high-level APIs, which causes confusion.
– tf.layers → tf.keras.layers
– tf.Estimator → tf.keras as well
image credit: http://bonkersworld.net/backwards-
14. Types of APIs
❖ Don't forget! Backward compatibility with tf.Estimator
– Comments:
• tf.train.Optimizer incompatibility with tf.keras
• Lack of TensorBoard support
• Are alternative "training and evaluation" paths possible?
15. Types of APIs
❖ Don't forget! Backward compatibility with tf.Estimator
All right!! We can expect the "tf-upgrade-v2" script!
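Until that script is applied, TF 1.x-style code can also keep running through the tf.compat.v1 namespace, which is what tf-upgrade-v2 rewrites old code to use. A minimal sketch:

```python
import tensorflow as tf

# TF 1.x-style graph code still runs in TF 2.0 via the tf.compat.v1
# namespace: a placeholder, a graph op, and an explicit Session.
tf.compat.v1.disable_eager_execution()
x = tf.compat.v1.placeholder(tf.float32, shape=(None,))
y = tf.reduce_sum(x)
with tf.compat.v1.Session() as sess:
    out = sess.run(y, feed_dict={x: [1.0, 2.0]})  # 3.0
```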
16. Types of APIs
❖ What about low-level APIs in TF 2.0?
– In TF 2.0 you can still use low-level TF APIs like tf.nn / tf.*, with some changes:
• 1) API namespace changes
– https://github.com/tensorflow/community/blob/master/rfcs/20180827-api-names.md
• 2) tf.Variable handling changes
– https://github.com/tensorflow/community/blob/master/rfcs/20180817-variables-20.md
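As a quick illustration of change 2), a TF 2.0 tf.Variable needs no variable_scope, get_variable, or explicit initializer run; it behaves like an ordinary Python object:

```python
import tensorflow as tf

# TF 2.0 variable handling: the variable is created and initialized on
# construction, and mutated with ordinary method calls.
v = tf.Variable(1.0)
v.assign_add(2.0)
value = v.numpy()  # 3.0
```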
17. Types of APIs
❖ What about low-level APIs in TF 2.0?
– How to run without sess.run()?
– Functions, not sessions!: Python-style low-level coding via AutoGraph
• https://github.com/tensorflow/community/blob/master/rfcs/20180918-functions-not-sessions-20.md
• We will cover this topic in the next chapter of this deck.
19. Mode options in TF 2.0:
Graph mode, eager mode, or a mix?
- Eager mode (slower but interactive)
- Graph mode (faster but less intuitive)
- AutoGraph?
20. Graph mode
● Parallelism
● Distributed execution
● Portability
Eager mode
● An intuitive interface
● Easier debugging
● Natural control flow

Eager style:
def f(x):
    if x > 0:
        x = x * x
    return x

Graph style:
def f(x):
    def if_true():
        return x * x
    def if_false():
        return x
    x = tf.cond(tf.greater(x, 0), if_true, if_false)
    return x

slide credit: Google Brain
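The two snippets above are behaviorally equivalent. A small runnable check (in TF 2.0, where eager execution lets the Python-control-flow version run directly on tensors):

```python
import tensorflow as tf

# Eager style: ordinary Python control flow on tensors.
def f_eager(x):
    if x > 0:
        x = x * x
    return x

# Graph style: the same branch expressed as an explicit tf.cond op.
def f_graph(x):
    return tf.cond(tf.greater(x, 0), lambda: x * x, lambda: x)

a = f_eager(tf.constant(3.0))   # 9.0
b = f_graph(tf.constant(3.0))   # 9.0
c = f_graph(tf.constant(-2.0))  # -2.0
```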
22. Eager mode is the default in TF 2.0!
❖ An imperative TF coding environment
– returning concrete values instead of building a computation graph!
23. Eager mode is the default in TF 2.0!
❖ An imperative TF coding environment
– Purpose:
• Intuitive coding
• Fast prototyping
• Easy debugging
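A tiny demonstration of "concrete values instead of a graph": in TF 2.0 an op returns its result immediately, with no placeholders or sess.run():

```python
import tensorflow as tf

# Eager execution is on by default in TF 2.0: ops execute immediately
# instead of adding nodes to a graph for a later sess.run().
assert tf.executing_eagerly()

a = tf.constant([[1.0, 2.0]])
b = tf.constant([[3.0], [4.0]])
c = tf.matmul(a, b)
result = c.numpy()  # [[11.]]
```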
24. Eager mode is the default in TF 2.0!
❖ An imperative TF coding environment
– Challenges:
• Loses some deployment advantages
– portability across devices such as GPU and TPU
– distributed training
• Loses training speed and performance optimization
26. Graph mode + Eager mode
❖ Autograph + tf.function bridges the two:
● performance "hot spots"
● productionize
slide credit: Google Brain
27. Graph mode by Autograph
❖ Code for graph mode in eager style!
– AutoGraph takes in your eager-style Python code and converts it to graph-generating code.
• Automatically converts Python control-flow expressions into TF API expressions
• (https://medium.com/tensorflow/autograph-converts-python-into-tensorflow-graphs-b2a871f87ec7)
28. Graph mode by Autograph
❖ Code for graph mode in eager style!
– Changes from AutoGraph in TF 1.x:
• The decorator changes
– Don't use "@autograph.convert()"; use "@tf.function"
• Supports customization of tf.keras models
– add "@tf.function" above def call().
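A minimal sketch of the second point (the class and layer here are hypothetical): @tf.function on call() of a tf.keras.Model subclass, with AutoGraph converting the Python if into graph control flow:

```python
import tensorflow as tf

# Hypothetical tiny model: @tf.function on call() makes the forward pass
# graph-compiled, and AutoGraph rewrites the `if` on a tensor condition.
class TinyModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(4)

    @tf.function
    def call(self, x):
        x = self.dense(x)
        if tf.reduce_sum(x) > 0:  # tensor-dependent branch -> tf.cond
            return x * x
        return x

model = TinyModel()
out = model(tf.ones([2, 3]))
```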
29. Graph mode by Autograph
❖ How to run without sess.run()?
– Key: Python functions as graphs!
• When you decorate a function with @tf.function and call it,
• TF internally runs the function with tf.Session().
• More information:
– https://github.com/tensorflow/community/blob/master/rfcs/20180918-functions-not-sessions-20.md
30. Graph mode by Autograph
❖ How to run without sess.run()?
– Key: Python functions as graphs!

TensorFlow 1.x coding style:
import tensorflow as tf
x = tf.placeholder(tf.float32)
y = tf.square(x)
z = tf.add(x, y)
sess = tf.Session()
z0 = sess.run(z, feed_dict={x: 2.})          # 6.0
z1 = sess.run(z, feed_dict={x: 2., y: 2.})   # 4.0
31. Graph mode by Autograph
❖ How to run without sess.run()?
– Key: Python functions as graphs!

TensorFlow 2.0 coding style:
import tensorflow as tf

@tf.function
def compute_z1(x, y):
    return tf.add(x, y)

@tf.function
def compute_z0(x):
    return compute_z1(x, tf.square(x))

z0 = compute_z0(2.)
z1 = compute_z1(2., 2.)
32. Graph mode by Autograph
❖ Code for graph mode in eager style!
- Challenges:
- Makes debugging difficult again
- Not all iterators are supported
- Not all Python built-in functions are supported
- Lambda functions are not supported
- dict and tuple types are not supported
- For more details, see
• https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/LIMITATIONS.md
33. Graph mode by Autograph
❖ Code for graph mode in eager style!
- Comments:
- Functional duplication?
• tf.contrib.eager.defun()
• tfe.defun()
- Need more use cases!
34. How to use Eager and Autograph
❖ Recommendation
– Use tf.keras as much as possible
– For customized models, write a tf.keras custom model
• where you can use both the eager and AutoGraph modes
– Write your code so that it works in both the eager and AutoGraph modes.
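The last recommendation can be sketched as a custom training step that runs the same in eager mode and, once decorated with @tf.function, in graph mode (the model and data here are illustrative):

```python
import tensorflow as tf

# Illustrative one-weight regression model and plain SGD optimizer.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.build(input_shape=(None, 1))
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

@tf.function  # remove this decorator and the same code runs eagerly
def train_step(x, y):
    with tf.GradientTape() as tape:
        pred = model(x)
        loss = tf.reduce_mean(tf.square(pred - y))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

x = tf.constant([[1.0], [2.0]])
y = tf.constant([[2.0], [4.0]])
first_loss = train_step(x, y)
second_loss = train_step(x, y)
```

Because the step is written against tf.GradientTape and the model's trainable_variables, the same function body works in both modes; only the decorator decides which one runs.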
40. In my opinion…
❖ High-level : tf.keras.models / tf.keras.layers
❖ Low-level opt1: tf.nn/tf.*
❖ Low-level opt2: tf.keras + subclassing API
❖ Low-level opt3: python style only + autograph
41. In reality, still ...
❖ High-level : tf.keras.models / tf.keras.layers
❖ Low-level opt1: tf.nn / tf.*
– → Hopefully, this will be deprecated
❖ Low-level opt2: tf.keras + subclassing API
❖ Low-level opt3: Python style only + AutoGraph
– The use of "eager + autograph" can accelerate your low-level coding
– But it needs improvement and more support
– See https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/LIMITATIONS.md
43. Some Non-technical Comments
❖ TensorFlow is open source, but...
❖ People are waiting for Google's coding guidelines
❖ Changing too fast may reduce diversity
44. Some Questions
❖ Q1) Can we use the .tflite format for server-based serving as well?
– Ans) Nope!
❖ Q2) Are ckpt or SavedModel files from TF 1.x compatible with TF 2.0?
– Ans) SavedModel → yes
– Ans) ckpt → no!
❖ Q3) How to test "tf-upgrade-v2"?
– Ans) It still needs testing; let's fix it together!