2. Building a Graph
■ Start with ops that do not need any input (source ops), e.g. tf.constant
import tensorflow as tf

matrix1 = tf.constant([[3, 3]])
matrix2 = tf.constant([[6], [6]])
product = tf.matmul(matrix1, matrix2)
print(product)  # O/P: Tensor("MatMul_1:0", shape=(1, 1), dtype=int32)

sess = tf.Session()
result = sess.run(product)
print(result)  # O/P: [[36]]
sess.close()
■ You will see a difference in the output of the two print statements. The first simply describes the properties of the product tensor (name, shape, dtype). The second runs inside a session: the graph gets executed, so the print shows the actual value of the tensor.
[Graph diagram: matrix1 and matrix2 feed into the MatMul op, which produces result]
3. Tensors
■ Now that we have seen how basic TensorFlow works, let's dig deeper into tensors
■ n-dimensional array
■ 0-d tensor: scalar (number), 1-d tensor: vector, 2-d tensor: matrix, and so on
■ tf.Variable is a class, but tf.constant is an op
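A small sketch of the ranks and the Variable/constant distinction above (assuming a TensorFlow install; the `tensorflow.compat.v1` import is used so the 1.x-style code from these slides also runs on a 2.x install — on 1.x, plain `import tensorflow as tf` works):

```python
import tensorflow.compat.v1 as tf  # assumption: TF 2.x install; on 1.x use `import tensorflow as tf`
tf.disable_eager_execution()

scalar = tf.constant(7)                 # 0-d tensor: shape ()
vector = tf.constant([1, 2, 3])         # 1-d tensor: shape (3,)
matrix = tf.constant([[1, 2], [3, 4]])  # 2-d tensor: shape (2, 2)

# tf.Variable is a class: it holds mutable state and must be initialized;
# tf.constant is just an op that emits a fixed tensor
counter = tf.Variable(0, name="counter")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # variables need init, constants don't
    print(sess.run(scalar))        # 7
    print(sess.run(vector))        # [1 2 3]
    print(sess.run(matrix).shape)  # (2, 2)
    print(sess.run(counter))       # 0
```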
4. Graphs
■ A graph represents the computations to be performed.
import tensorflow as tf
a = tf.add(3, 5)
■ TF automatically names the nodes when you don't explicitly name them (here the inputs show up as x = 3 and y = 5 in the graph).
■ Edges: tensors, i.e., the data
■ Nodes: operators, variables, and constants
■ If we simply print(a), the output will be a Tensor object, not 8. To fetch the value held in a, we need to run a session.
■ The session looks at the graph and asks: how can I get the value of a? It then computes all the nodes that lead to a.
[Diagram: nodes x and y feed an Add node; interpretation assigns x = 3, y = 5; running a Session evaluates Add and yields a = 8]
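The diagram above can be sketched as runnable code (a minimal example; the `tensorflow.compat.v1` import is an assumption so the 1.x-style code also runs on a 2.x install):

```python
import tensorflow.compat.v1 as tf  # on a 1.x install, plain `import tensorflow as tf`
tf.disable_eager_execution()

a = tf.add(3, 5)  # builds a node in the default graph; nothing is computed yet
print(a)          # a Tensor object, e.g. Tensor("Add:0", shape=(), dtype=int32), not 8

with tf.Session() as sess:
    print(sess.run(a))  # 8 -- the session executes the nodes leading to `a`
```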
5. But why only graphs?
■ Save computation (only run subgraphs that lead to the values you want to fetch)
■ Break computation into small, differentiable pieces to facilitate auto-differentiation
■ Facilitate distributed computation, spread the work across multiple CPUs, GPUs, or
devices
■ Many common machine learning models are already taught and visualized as directed graphs
6. Sessions
■ A Session object encapsulates the environment in which Operation objects are executed and Tensor objects are evaluated.
import tensorflow as tf

x = 2
y = 3
add_op = tf.add(x, y)
mul_op = tf.multiply(x, y)  # tf.mul was renamed tf.multiply in TF 1.0
useless = tf.multiply(x, add_op)
pow_op = tf.pow(add_op, mul_op)

with tf.Session() as sess:
    z = sess.run(pow_op)
Because we only want the value of pow_op, and pow_op does not depend on useless, the session won't compute the value of useless → saved computation
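sess.run can also fetch several tensors in one call by passing a list; a self-contained sketch of the same graph (again using the `tensorflow.compat.v1` import as a compatibility assumption):

```python
import tensorflow.compat.v1 as tf  # on a 1.x install, plain `import tensorflow as tf`
tf.disable_eager_execution()

x, y = 2, 3
add_op = tf.add(x, y)
mul_op = tf.multiply(x, y)       # tf.mul was renamed tf.multiply in TF 1.0
pow_op = tf.pow(add_op, mul_op)

with tf.Session() as sess:
    # fetch several values in one run by passing a list of tensors
    z, a = sess.run([pow_op, add_op])
    print(z, a)  # 15625 5  (5**6 = 15625)
```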
7. Operations
It's very similar to NumPy. In terms of data types, TensorFlow integrates seamlessly with NumPy. If the requested fetch is a Tensor, the output will be a NumPy ndarray. Beware, though: NumPy and TensorFlow may become less compatible in the future!
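A short sketch of that integration: a NumPy array goes in as a constant, and the fetched result comes back as an ndarray (the `tensorflow.compat.v1` import is a compatibility assumption, as above):

```python
import numpy as np
import tensorflow.compat.v1 as tf  # on a 1.x install, plain `import tensorflow as tf`
tf.disable_eager_execution()

a = tf.constant(np.array([[1.0, 2.0], [3.0, 4.0]]))  # TF accepts NumPy arrays directly
b = tf.matmul(a, a)

with tf.Session() as sess:
    out = sess.run(b)

print(type(out))  # <class 'numpy.ndarray'> -- fetched Tensors come back as ndarrays
print(out)        # [[ 7. 10.]
                  #  [15. 22.]]
```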
8. TensorBoard
■ Help a lot when you build complicated models
■ To use TensorBoard:
I. Go to the terminal
II. run: $ python [yourprogram].py
III. $ tensorboard --logdir="./graphs" --port 6006
IV. Then open your browser and go to:
http://localhost:6006/
10. I don’t like "Const": how to name nodes explicitly
import tensorflow as tf

matrix1 = tf.constant([[3, 3]], name="a")
matrix2 = tf.constant([[6], [6]], name="b")
product = tf.matmul(matrix1, matrix2)
print(product)

sess = tf.Session()
# write the graph definition so TensorBoard can display it
writer = tf.summary.FileWriter("./graphs", sess.graph)
result = sess.run(product)
print(result)
writer.close()
sess.close()
11. Key Notes
■ Graph execution is spread across all available compute resources, such as CPUs or GPUs. If you have a GPU, TensorFlow uses your first GPU by default
■ TensorFlow separates definition of computations from their execution
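Device placement can also be pinned explicitly with tf.device; a hedged sketch ("/cpu:0" is the standard CPU device name, "/gpu:0" would select the first GPU if one is present; the `tensorflow.compat.v1` import is a compatibility assumption):

```python
import tensorflow.compat.v1 as tf  # on a 1.x install, plain `import tensorflow as tf`
tf.disable_eager_execution()

# pin these ops to the CPU; TF would otherwise place them on the first GPU if available
with tf.device("/cpu:0"):
    a = tf.constant([[1.0, 2.0]])
    b = tf.constant([[3.0], [4.0]])
    c = tf.matmul(a, b)

with tf.Session() as sess:
    print(sess.run(c))  # [[11.]]
```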
12. ■ Only use constants for primitive types. Constants are stored in the graph's definition, which makes loading graphs expensive when constants are big. Use variables or readers for data that requires more memory.
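This can be seen by printing the graph definition: the constant's value is serialized inside the GraphDef itself. A small sketch (the `tensorflow.compat.v1` import is a compatibility assumption, as in the earlier examples):

```python
import tensorflow.compat.v1 as tf  # on a 1.x install, plain `import tensorflow as tf`
tf.disable_eager_execution()

g = tf.Graph()
with g.as_default():
    tf.constant([1.0, 2.0], name="my_const")

# the node for "my_const" carries the tensor data inside the graph definition,
# so a large constant makes the serialized graph itself large
print(g.as_graph_def())
```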