Introduction to TensorFlow 2
GDG Meetup SF
Google Launchpad
11/19/2019
Oswald Campesato
ocampesato@yahoo.com
Highlights/Overview
 What is TensorFlow 2?
 Major Changes in TF 2
 Working with strings/arrays/tensors
 Working with TF 2 @tf.function decorator
 Working with TF 2 generators
 Working with TF 2 tf.data.Dataset
 Datasets in TF 1.x versus TF 2
 Working with TF 2 tf.keras
 CNNs, RNNs, LSTMs, Bidirectional LSTMs
 Reinforcement Learning/TF Agents
Highlights/Overview
This session does not cover:
 “the new vision” of TF 2
 “the common practices” of TF 2
 “lessons learned” from TF 1.x
 a detailed “roadmap” from TF 1.x to TF 2
What is TensorFlow 2?
An open source framework for ML and DL
Created by Google (TF 1.x released 11/2015)
Evolved from Google Brain
Linux and Mac OS X support (VM for Windows)
Production release: 10/2019
TF home page: https://www.tensorflow.org/
TF 2 Platform Support
Tested and supported on 64-bit systems
Ubuntu 16.04 or later
Windows 7 or later
macOS 10.12.6 (Sierra) or later (no GPU support)
Raspbian 9.0 or later
TF 2 Docker Images
 https://hub.docker.com/r/tensorflow/tensorflow/
 Start a CPU-only container with Python 2:
$ docker run -it --rm tensorflow/tensorflow bash
 Start a GPU container with Python 3 & Python interpreter:
$ docker run -it --rm --runtime=nvidia tensorflow/tensorflow:latest-gpu-py3 python
 Start a Jupyter notebook server:
$ docker run -it --rm -v $(realpath ~/notebooks):/tf/notebooks -p 8888:8888 tensorflow/tensorflow:late
What is TensorFlow 2?
 ML/DL open source platform
 Support for Python, Java, C++
 Desktop, Server, Mobile, Web
 CPU/GPU/TPU support
 Visualization via TensorBoard
 Can be embedded in Python scripts
 Installation: pip install tensorflow
 Ex: pip install tensorflow==2.0.0-alpha0
TensorFlow Use Cases (Generic)
Image recognition
Computer vision
Voice/sound recognition
Time series analysis
Language detection
Language translation
Text-based processing
Handwriting Recognition
Major Changes in TF 2
 TF 2 Eager Execution: default mode
 @tf.function decorator: instead of tf.Session()
 AutoGraph: graph generated in @tf.function()
 Generators: used with tf.data.Dataset (TF 2)
 Differentiation: calculates gradients
 tf.GradientTape: automatic differentiation
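A minimal sketch (with illustrative values) of two of the features listed above, the @tf.function decorator and tf.GradientTape:
import tensorflow as tf

@tf.function                     # AutoGraph converts this Python function into a TF graph
def cube(x):
    return x * x * x

x = tf.Variable(2.0)
with tf.GradientTape() as tape:  # records operations for automatic differentiation
    y = cube(x)
grad = tape.gradient(y, x)       # dy/dx = 3*x^2 = 12.0
print("y:", y.numpy(), "grad:", grad.numpy())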
Removed from TensorFlow 2
 tf.Session()
 tf.placeholder()
 tf.global_initializer()
 feed_dict
 Variable scopes
 tf.contrib code
 tf.flags and tf.contrib
 Global variables
 => TF 1.x functions moved to compat.v1
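A hedged sketch of how the removed TF 1.x constructs can still be reached through compat.v1 (only needed for legacy code; the placeholder and values are illustrative):
import tensorflow as tf

tf.compat.v1.disable_eager_execution()          # legacy graph mode
x = tf.compat.v1.placeholder(tf.float32, shape=())
y = x * 3.0
with tf.compat.v1.Session() as sess:
    print(sess.run(y, feed_dict={x: 2.0}))      # 6.0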
Upgrading Files/Directories to TF 2
 The TF 2 upgrade script:
https://www.tensorflow.org/alpha/guide/upgrade
 Script name: tf_upgrade_v2
 install TF to get the upgrade script:
pip install tensorflow (NOT pip3)
Upgrading Files/Directories to TF 2
1) upgrade one file:
tf_upgrade_v2 --infile oldtf.py --outfile newtf.py
 The preceding command creates report.txt
Upgrade an entire directory:
tf_upgrade_v2 --intree mydir --outtree newdir --copyotherfiles False
NB: do NOT make manual upgrade changes
Migrating to TensorFlow 2
 1) do not manually update parts of your code
 2) script won't work with functions with reordered arguments
 3) script assumes this TF statement: "import tensorflow as tf"
 4) the script does not reorder arguments
 5) the script adds keyword arguments to functions that have their
arguments reordered.
 Upgrade Jupyter notebooks and Python files in a GitHub repo:
http://tf2up.ml/
=> Replace prefix https://github.com/ with http://tf2up.ml/
What is a Tensor?
TF tensors are n-dimensional arrays
TF tensors are very similar to numpy ndarrays
scalar number: a zeroth-order tensor
vector: a first-order tensor
matrix: a second-order tensor
3-dimensional array: a 3rd order tensor
https://dzone.com/articles/tensorflow-simplified-examples
TensorFlow Eager Execution
An imperative interface to TF
Fast debugging & immediate run-time errors
=> TF 2 requires Python 3.x (not Python 2.x)
No static graphs or sessions
TensorFlow Eager Execution
integration with Python tools
Supports dynamic models + Python control flow
support for custom and higher-order gradients
=> Default mode in TensorFlow 2.0
TensorFlow Eager Execution
 import tensorflow as tf
 x = [[2.]]
 # m = tf.matmul(x, x) <= deprecated
 m = tf.linalg.matmul(x, x)
 print("m:",m)
 print("m:",m.numpy())
#Output:
#m: tf.Tensor([[4.]], shape=(1,1),dtype=float32)
#m: [[4.]]
Basic TF 2 Operations
Working with Constants
Working with Strings
Working with Tensors
Working with Operators
Arrays in TF 2
Multiplying Two Arrays
Convert Python Arrays to TF Arrays
Conflicting Types
TensorFlow “primitive types”
 tf.constant:
> initialized immediately and immutable
 import tensorflow as tf
 aconst = tf.constant(3.0)
 print(aconst)
# output for TF 2:
#tf.Tensor(3.0, shape=(), dtype=float32)
# output for TF 1.x:
#Tensor("Const:0", shape=(), dtype=float32)
TensorFlow “primitive types”
tf.Variable (a class):
+ initial value is required
+ updated during training
+ in-memory buffer (saved/restored from disk)
+ can be shared in a distributed environment
+ they hold learned parameters of a model
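A minimal sketch of creating and updating a tf.Variable (values are illustrative):
import tensorflow as tf

v = tf.Variable(3.0)         # an initial value is required
v.assign(5.0)                # update the value in place
v.assign_add(1.0)            # v is now 6.0
print("v:", v.numpy())       # 6.0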
TensorFlow Arithmetic
import tensorflow as tf # arith1.py
a = tf.add(4, 2)
b = tf.subtract(8, 6)
c = tf.multiply(a, 3)
d = tf.math.divide(a, 6) # tf.div() was removed in TF 2
 print(a) # 6
 print(b) # 2
 print(c) # 18
 print(d) # 1.0
 print("a:",a.numpy())
 print("b:",b.numpy())
 print("c:",c.numpy())
 print("d:",d.numpy())
TensorFlow Arithmetic
 tf.Tensor(6, shape=(), dtype=int32)
 tf.Tensor(2, shape=(), dtype=int32)
 tf.Tensor(18, shape=(), dtype=int32)
 tf.Tensor(1.0, shape=(), dtype=float64)
 6
 2
 18
 1.0
TF 2 Loops and Arrays
Using “for” loops
Using “while” loops
tf.reduce_prod()
tf.reduce_sum()
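A minimal sketch of tf.reduce_sum() and tf.reduce_prod() (values are illustrative):
import tensorflow as tf

x = tf.constant([1., 2., 3., 4.])
print("sum: ", tf.reduce_sum(x).numpy())    # 10.0
print("prod:", tf.reduce_prod(x).numpy())   # 24.0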
A “for” Loop Example
 import tensorflow as tf
 x = tf.Variable(0, name='x')
 for i in range(5):
     x = x + 1
     print("x:",x)
A “for” Loop Example
 x: tf.Tensor(1, shape=(), dtype=int32)
 x: tf.Tensor(2, shape=(), dtype=int32)
 x: tf.Tensor(3, shape=(), dtype=int32)
 x: tf.Tensor(4, shape=(), dtype=int32)
 x: tf.Tensor(5, shape=(), dtype=int32)
A “while” Loop Example
 import tensorflow as tf
 a = tf.constant(12)
 while not tf.equal(a, 1):
     if tf.equal(a % 2, 0):
         a = a / 2
     else:
         a = 3 * a + 1
     print(a)
A “while” Loop Example
 tf.Tensor(6.0, shape=(), dtype=float64)
 tf.Tensor(3.0, shape=(), dtype=float64)
 tf.Tensor(10.0, shape=(), dtype=float64)
 tf.Tensor(5.0, shape=(), dtype=float64)
 tf.Tensor(16.0, shape=(), dtype=float64)
 tf.Tensor(8.0, shape=(), dtype=float64)
 tf.Tensor(4.0, shape=(), dtype=float64)
 tf.Tensor(2.0, shape=(), dtype=float64)
 tf.Tensor(1.0, shape=(), dtype=float64)
Useful TF 2 APIs
tf.shape()
tf.rank()
tf.range()
tf.reshape()
tf.ones()
tf.zeros()
tf.fill()
 tf.random.normal()
 tf.random.truncated_normal()
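A brief sketch of several of the APIs listed above (shapes and values are illustrative):
import tensorflow as tf

t = tf.range(6)                          # [0 1 2 3 4 5]
t = tf.reshape(t, [2, 3])                # shape (2, 3)
print("shape:", tf.shape(t).numpy())     # [2 3]
print("rank: ", tf.rank(t).numpy())      # 2
print(tf.ones([2, 2]).numpy())
print(tf.fill([2, 2], 7).numpy())
print(tf.random.normal([2, 2]).numpy())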
Some TF 2 “lazy operators”
map()
filter()
flat_map()
batch()
take()
zip()
flatten()
Combined via “method chaining”
TF 2 “lazy operators”
 filter():
 uses Boolean logic to "filter" the elements in an array to
determine which elements satisfy the Boolean condition
 map(): a projection
 this operator "applies" a lambda expression to each input
element
 flat_map():
 maps a single element of the input dataset to a Dataset of
elements
TF 2 “lazy operators”
 batch(n):
 processes a "batch" of n elements during each
iteration
 repeat(n):
 repeats its input values n times
 take(n):
 operator "takes" n input values
TF 2 tf.data.Dataset
TF “wrapper” class around data sources
Located in the tf.data.Dataset namespace
Supports lambda expressions
Supports lazy operators
In TF 2 they are consumed via generators/Python iteration (TF 1.x used explicit iterators)
tf.data.Dataset.from_tensors()
 import tensorflow as tf
 #combine the input into one element
 t1 = tf.constant([[1, 2], [3, 4]])
 ds1 = tf.data.Dataset.from_tensors(t1)
# output: [[1, 2], [3, 4]]
tf.data.Dataset.from_tensor_slices()
 import tensorflow as tf
 #separate element for each item
 t2 = tf.constant([[1, 2], [3, 4]])
 ds1 = tf.data.Dataset.from_tensor_slices(t2)
# output: [1, 2], [3, 4]
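A short sketch that contrasts the two methods above by iterating each dataset:
import tensorflow as tf

t = tf.constant([[1, 2], [3, 4]])
for item in tf.data.Dataset.from_tensors(t):
    print(item.numpy())        # one element: [[1 2] [3 4]]
for item in tf.data.Dataset.from_tensor_slices(t):
    print(item.numpy())        # two elements: [1 2], then [3 4]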
TF 2 Datasets: code sample
 import tensorflow as tf # tf2-dataset.py
 import numpy as np
x = np.arange(0, 10)
 # create a dataset from a Numpy array
ds = tf.data.Dataset.from_tensor_slices(x)
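In TF 2 the dataset above can be iterated directly, with no iterator object; a minimal, self-contained sketch:
import tensorflow as tf
import numpy as np

x = np.arange(0, 10)
ds = tf.data.Dataset.from_tensor_slices(x)
for value in ds.take(3):
    print("value:", value.numpy())   # 0, 1, 2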
TF 1.x Datasets: iterator
 import tensorflow as tf # tf1x-plusone.py
 import numpy as np
 x = np.arange(0, 10)
 # create dataset object from Numpy array
 ds = tf.data.Dataset.from_tensor_slices(x)
 ds = ds.map(lambda x: x + 1)
 # create a one-shot iterator <= TF 1.x
 iterator = ds.make_one_shot_iterator()
 for i in range(10):
     val = iterator.get_next()
     print("val:",val)
TF 2 Datasets: generator
import tensorflow as tf # tf2-plusone.py
import numpy as np
x = np.arange(0, 5) # 0, 1, 2, 3, 4
def gener():
    for i in x:
        yield (3*i)
ds = tf.data.Dataset.from_generator(gener, (tf.int64))
for value in ds.take(len(x)):
    print("1value:",value)
for value in ds.take(2*len(x)):
    print("2value:",value)
TF 2 Datasets: generator
 1value: tf.Tensor(0, shape=(), dtype=int64)
 1value: tf.Tensor(3, shape=(), dtype=int64)
 1value: tf.Tensor(6, shape=(), dtype=int64)
 1value: tf.Tensor(9, shape=(), dtype=int64)
 1value: tf.Tensor(12, shape=(), dtype=int64)
 2value: tf.Tensor(0, shape=(), dtype=int64)
 2value: tf.Tensor(3, shape=(), dtype=int64)
 2value: tf.Tensor(6, shape=(), dtype=int64)
 2value: tf.Tensor(9, shape=(), dtype=int64)
 2value: tf.Tensor(12, shape=(), dtype=int64)
TF 1.x map() and take()
 import tensorflow as tf # tf 1.12.0
 tf.enable_eager_execution()
 ds = tf.data.TextLineDataset("file.txt")
 ds = ds.map(lambda line: tf.string_split([line]).values)
 try:
     for value in ds.take(2):
         print("value:",value)
 except tf.errors.OutOfRangeError:
     pass
TF 1.x map() and take(): file.txt
 this is file line #1
 this is file line #2
 this is file line #3
 this is file line #4
 this is file line #5
TF 1.x map() and take(): output
 ('value:', <tf.Tensor: id=16, shape=(5,),
dtype=string, numpy=array(['this', 'is',
'file', 'line', '#1'], dtype=object)>)
 ('value:', <tf.Tensor: id=18, shape=(5,),
dtype=string, numpy=array(['this', 'is',
'file', 'line', '#2'], dtype=object)>)
TF 2 map() and take()
 import tensorflow as tf
 import numpy as np
 x = np.array([[1],[2],[3],[4]])
 # make a ds from a numpy array
 ds = tf.data.Dataset.from_tensor_slices(x)
 ds = ds.map(lambda x: x*2).map(lambda x: x+1).map(lambda x: x**3)
 for value in ds.take(4):
     print("value:",value)
TF 2 map() and take(): output
 value: tf.Tensor([27], shape=(1,), dtype=int64)
 value: tf.Tensor([125], shape=(1,), dtype=int64)
 value: tf.Tensor([343], shape=(1,), dtype=int64)
 value: tf.Tensor([729], shape=(1,), dtype=int64)
TF 2 batch(): EXTRA CREDIT
import tensorflow as tf
import numpy as np
x = np.arange(0, 12)
def gener():
    i = 0
    while i < len(x):
        yield (i, i+1, i+2) # three integers at a time
        i += 3
ds = tf.data.Dataset.from_generator(gener, (tf.int64,tf.int64,tf.int64))
third = int(len(x)/3)
for value in ds.take(third):
    print("value:",value)
TF 2 batch() & zip(): EXTRA CREDIT
import tensorflow as tf
ds1 = tf.data.Dataset.range(100)
ds2 = tf.data.Dataset.range(0, -100, -1)
ds3 = tf.data.Dataset.zip((ds1, ds2))
ds4 = ds3.batch(4)
for value in ds4.take(10):
    print("value:",value) # first batch: ([0 1 2 3], [0 -1 -2 -3])
TF 2: Working with tf.keras
tf.keras.layers
tf.keras.models
tf.keras.optimizers
tf.keras.utils
tf.keras.regularizers
TF 2: tf.keras.models
 tf.keras.models.Sequential(): most common
 clone_model(...): Clone any Model instance
 load_model(...): Loads a model saved via save_model
 model_from_config(...): Instantiates a Keras model from its
config
 model_from_json(...):
 Parses a JSON model config file and returns a model instance
 model_from_yaml(...):
 Parses a yaml model config file and returns a model instance
 save_model(...): Saves a model to an HDF5 file
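A hedged sketch of saving and reloading a model with save_model() and load_model(); the one-layer model and the file name 'my_model.h5' are illustrative:
import tensorflow as tf

model = tf.keras.models.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='sgd', loss='mse')
tf.keras.models.save_model(model, 'my_model.h5')        # save to an HDF5 file
same_model = tf.keras.models.load_model('my_model.h5')  # restore the model
same_model.summary()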
TF 2: tf.keras.layers
 tf.keras.models.Sequential()
 tf.keras.layers.Conv2D()
 tf.keras.layers.MaxPooling2D()
 tf.keras.layers.Flatten()
 tf.keras.layers.Dense()
 tf.keras.layers.Dropout(0.1)
 tf.keras.layers.BatchNormalization()
 tf.keras.layers.Embedding()
 tf.keras.layers.RNN()
 tf.keras.layers.LSTM()
 tf.keras.layers.Bidirectional (ex: BERT)
TF 2: Activation Functions
tf.keras.activations.relu
tf.keras.activations.selu
tf.keras.activations.elu
tf.keras.activations.linear
tf.keras.activations.sigmoid
tf.keras.activations.softmax
tf.keras.activations.softplus
tf.keras.activations.tanh
Others …
TF 2: Built-in Datasets
tf.keras.datasets.boston_housing
tf.keras.datasets.cifar10
tf.keras.datasets.cifar100
tf.keras.datasets.fashion_mnist
tf.keras.datasets.imdb
tf.keras.datasets.mnist
tf.keras.datasets.reuters
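A minimal sketch of loading one of the built-in datasets listed above:
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
print("train:", x_train.shape, y_train.shape)  # (60000, 28, 28) (60000,)
print("test: ", x_test.shape, y_test.shape)    # (10000, 28, 28) (10000,)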
TF 2: “Experimental”
 tf.keras.experimental.CosineDecay
 tf.keras.experimental.CosineDecayRestarts
 tf.keras.experimental.LinearCosineDecay
 tf.keras.experimental.NoisyLinearCosineDecay
 tf.keras.experimental.PeepholeLSTMCell
TF 2: Optimizers
tf.optimizers.Adadelta
tf.optimizers.Adagrad
tf.optimizers.Adam
tf.optimizers.Adamax
tf.optimizers.SGD
TF 2: Callbacks
tf.keras.callbacks.Callback
tf.keras.callbacks.EarlyStopping
tf.keras.callbacks.LambdaCallback
tf.keras.callbacks.ModelCheckpoint
tf.keras.callbacks.TensorBoard
tf.keras.callbacks.TerminateOnNaN
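A hedged sketch of wiring two of these callbacks into model.fit(); the tiny model, the random data, and the file name 'best_model.h5' are placeholders:
import tensorflow as tf
import numpy as np

x_train = np.random.rand(100, 4)
y_train = np.random.rand(100, 1)
model = tf.keras.models.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='adam', loss='mse')
callbacks = [
    tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=3),
    tf.keras.callbacks.ModelCheckpoint('best_model.h5', save_best_only=True),
]
model.fit(x_train, y_train, validation_split=0.2, epochs=10, callbacks=callbacks)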
TF 2: tf.losses
 tf.losses.BinaryCrossentropy
 tf.losses.CategoricalCrossentropy
 tf.losses.CategoricalHinge
 tf.losses.CosineSimilarity
 tf.losses.Hinge
 tf.losses.Huber
 tf.losses.KLD
 tf.losses.KLDivergence
 tf.losses.MAE
 tf.losses.MSE
 tf.losses.SparseCategoricalCrossentropy
Other TF 2 Namespaces
tf.keras.regularizers (L1 and L2)
tf.keras.utils (to_categorical)
tf.keras.preprocessing.text.Tokenizer
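A short sketch of tf.keras.utils.to_categorical() and an L2 regularizer from the namespaces above (values are illustrative):
import tensorflow as tf

labels = [0, 2, 1, 2]
one_hot = tf.keras.utils.to_categorical(labels, num_classes=3)
print(one_hot)                                         # 4x3 one-hot matrix

dense = tf.keras.layers.Dense(
    16, activation='relu',
    kernel_regularizer=tf.keras.regularizers.l2(0.01))  # L2 weight penalty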
TF 1.x Model APIs
 tf.core (“reduced” in TF 2)
 tf.layers (deprecated in TF 2)
 tf.keras (the primary API in TF 2)
 tf.estimator.Estimator (implementation in tf.keras)
 tf.contrib.learn.Estimator (Deprecated in TF 2)
Other TF 2 Features
TF Probability (Linear Regression)
TF Agents (Reinforcement Learning)
TF Text (NLP)
TF Federated
TF Privacy
Tensor2Tensor
TF Ranking
TFX
TF 2 (Keras), DL, and RL
The remaining slides briefly discuss TF 2 and:
 CNNs (Convolutional Neural Networks)
 RNNs (Recurrent Neural Networks)
 LSTMs (Long Short Term Memory)
 Autoencoders
 Variational Autoencoders
 Reinforcement Learning
 Some Keras-based code blocks
 Some useful links
Three Hidden Layers (Classifier)
Two Hidden Layers (Regression)
ML Datasets (Titanic)
Transition Between Layers
W = weight matrix between adjacent layers
x = an input vector of numbers
b = a bias vector of numbers
F = activation function (sigmoid, tanh, etc)
Y = output vector of numbers
Y = F(x*W + b)
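A tiny numerical sketch of the transition formula above, with sigmoid as F (all values are illustrative):
import tensorflow as tf

x = tf.constant([[1.0, 2.0]])                   # 1x2 input vector
W = tf.constant([[0.5, -1.0], [0.25, 0.75]])    # 2x2 weight matrix
b = tf.constant([[0.1, 0.2]])                   # 1x2 bias vector
Y = tf.sigmoid(tf.matmul(x, W) + b)             # Y = F(x*W + b)
print("Y:", Y.numpy())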
TF 2/Keras and MLPs
 import tensorflow as tf
 . . .
 model = tf.keras.models.Sequential([
     tf.keras.layers.Dense(10, input_dim=num_pixels, activation='relu'),
     tf.keras.layers.Dense(30, activation='relu'),
     tf.keras.layers.Dense(10, activation='relu'),
     tf.keras.layers.Dense(num_classes, activation='softmax')
 ])
 model.compile(tf.keras.optimizers.Adam(lr=0.01),
     loss='categorical_crossentropy', metrics=['accuracy'])
TF 2/Keras and MLPs
 import tensorflow as tf
 . . .
 model = tf.keras.models.Sequential()
 model.add(tf.keras.layers.Dense(10, input_dim=num_pixels,
activation='relu'))
 model.add(tf.keras.layers.Dense(30, activation='relu'))
 model.add(tf.keras.layers.Dense(10, activation='relu'))
 model.add(tf.keras.layers.Dense(num_classes,
activation='softmax'))
 model.compile(tf.keras.optimizers.Adam(lr=0.01),
loss='categorical_crossentropy', metrics=['accuracy'])
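A hedged sketch of training the MLP compiled above on MNIST; `model` refers to the Sequential model from the preceding slide, and the preprocessing shown is illustrative:
import tensorflow as tf

num_pixels, num_classes = 784, 10
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, num_pixels).astype('float32') / 255.0
x_test  = x_test.reshape(-1, num_pixels).astype('float32') / 255.0
y_train = tf.keras.utils.to_categorical(y_train, num_classes)
y_test  = tf.keras.utils.to_categorical(y_test, num_classes)
# `model` is the compiled MLP defined above
model.fit(x_train, y_train, epochs=10, batch_size=64, validation_split=0.2)
loss, acc = model.evaluate(x_test, y_test)
print("test accuracy:", acc)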
CNNs: Convolution and Pooling
TF 2/Keras and CNNs
 import tensorflow as tf
 . . .
 model = tf.keras.models.Sequential([
     tf.keras.layers.Conv2D(30, (5,5), input_shape=(28,28,1), activation='relu'),
     tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
     tf.keras.layers.Flatten(),
     tf.keras.layers.Dense(500, activation='relu'),
     tf.keras.layers.Dropout(0.5),
     tf.keras.layers.Dense(num_classes, activation='softmax')
 ])
What are RNNs?
RNNs = Recurrent Neural Networks
RNNs capture/retain events in time
FF networks do not learn from the past
=> RNNs DO learn from the past
RNNs contain layers of recurrent neurons
Common Types of RNNs
1) Simple RNNs (minimal structure)
2) LSTMs (input, forget, output gates)
3) Gated Recurrent Unit (GRU)
NB: RNNs can be used with MNIST data
Some Use Cases for RNNs
1) Time Series Data
2) NLP (Natural Language Processing):
Processing documents
“analyzing” Trump’s tweets
3) RNNs and MNIST dataset
Recall: Feed Forward Structure
RNN Cell (left) and FF (right)
RNN “Big Picture” (cs231n)
RNN Neuron: same W for all steps
RNN and MNIST Dataset
Sample initialization values for RNN:
 n_steps = 28 # # of time steps
 n_inputs = 28 # number of rows
 n_neurons = 150 # 150 neurons in one layer
 n_outputs = 10 # 10 digit classes (0->9)
Remember that:
one layer = one memory cell
MNIST class count = size of output layer
TF 2/Keras and RNNs
import tensorflow as tf
...
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.SimpleRNN(20, input_shape=(input_length, input_dim), return_sequences=False))
...
Problems with RNNs
lengthy training time
Unrolled RNN will be very deep
Propagating gradients through many layers
Prone to vanishing gradient
Prone to exploding gradient
What are LSTMs?
Long short-term memory (LSTM):
a model of short-term memory that can persist for long periods
Contains more state than a “regular” RNN
can forget and remember things selectively
Hochreiter/Schmidhuber proposed in 1997
improved in 2000 by Felix Gers' team
set accuracy records in multiple apps
Features of LSTMs
Used in Google speech recognition + Alpha Go
they avoid the vanishing gradient problem
Can track 1000s of discrete time steps
Used by international competition winners
Use Cases for LSTMs
Connected handwriting recognition
Speech recognition
Forecasting
Anomaly detection
Pattern recognition
Advantages of LSTMs
well-suited to classify/process/predict time series
More powerful: increased memory (=state)
Can explicitly add long-term or short-term state
even with time lags of unknown size/duration between
events
The Primary LSTM Gates
Input, Forget, and Output gates (FIO)
The main operational block of LSTMs
an "information filter"
a multivariate input gate
some inputs are blocked
some inputs "go through"
"remember" necessary information
FIO gates use the sigmoid activation function
The cell state uses the tanh activation function
LSTM Gates and Cell State
 1) the input gate controls:
the extent to which a new value flows into the cell state
 2) the forget gate controls:
the extent to which a value remains in the cell state
 3) the output gate controls:
The portion of the cell state that’s part of the output
 4) the cell state maintains long-term memory
 => LSTMs store more memory than basic RNN cells
Common LSTM Variables
 # LSTM unrolled through 28 time steps (with MNIST):
 time_steps = 28
 # number of hidden LSTM units:
 num_units = 128
 # number of rows of 28 pixels:
 n_inputs = 28
 # number of pixels per row:
 input_size = 28
 # mnist has 10 classes (0-9):
 n_classes = 10
 # size of input batch:
 batch_size = 128
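A hedged sketch of a Keras LSTM classifier built from the variables above; the layer sizes follow the slide, but the model itself is illustrative rather than the author's exact code:
import tensorflow as tf

time_steps, input_size = 28, 28     # each MNIST image: 28 rows of 28 pixels
num_units, n_classes = 128, 10

model = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(num_units, input_shape=(time_steps, input_size)),
    tf.keras.layers.Dense(n_classes, activation='softmax')
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()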
Basic LSTM Cells
LSTM Formulas
TF 2/Keras and LSTMs
 import tensorflow as tf
 . . .
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.LSTMCell(6,batch_input_shape=(1,1,1),kernel_initializer='ones',stateful=True))
model.add(tf.keras.layers.Dense(1))
. . .
=> LSTM versus LSTMCell:
https://stackoverflow.com/questions/48187283/whats-the-difference-between-lstm-and-lstmcell
=> How to define a custom LSTM cell:
https://stackoverflow.com/questions/54231440/define-custom-lstm-cell-in-keras
TF 2/Keras and LSTM/RNN/GRU
 import tensorflow as tf
 ...
 model = tf.keras.models.Sequential()
 model.add(tf.keras.layers.LSTM(64, return_sequences=True, input_shape=(10, 64)))
 model.add(tf.keras.layers.SimpleRNN(32, return_sequences=True))
 model.add(tf.keras.layers.GRU(10, ...))
 ...
TF 2/Keras & BiDirectional LSTMs
 import tensorflow as tf
 from tensorflow.keras.models import Sequential
 from tensorflow.keras.layers import Bidirectional, LSTM, Dense, Activation
 . . .
 model = Sequential()
 model.add(Bidirectional(LSTM(10, return_sequences=True), input_shape=(5,10)))
 model.add(Bidirectional(LSTM(10)))
 model.add(Dense(5))
 model.add(Activation('softmax'))
 model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
Autoencoders
Variational Autoencoders (2013)
AEs, VAEs, and GANs
 https://jaan.io/what-is-variational-autoencoder-vae-tutorial/
 TensorFlow 2 and Autoencoders:
https://gist.github.com/AFAgarap/326af55e36be0529c507f1599f88c06e
 TensorFlow 2 and Variational Autoencoders:
 Convolutional Variational Autoencoder:
https://www.tensorflow.org/alpha/tutorials/generative/cvae
 Variational Autoencoders and GANS:
https://www.youtube.com/watch?v=KFX4apL7a4s
Reinforcement Learning
Reinforcement Learning
Reinforcement Learning
Deep Reinforcement Learning
Reinforcement Learning
 Dopamine on TensorFlow (Research Framework):
https://github.com/google/dopamine
 TF-Agents library for RL in TensorFlow:
https://github.com/tensorflow/agents
 Keras and Reinforcement Learning:
https://github.com/keras-rl/keras-rl
 OpenAI universe, DeepMind Lab, TensorForce
About Me: Recent Books
 1) Python3 and Machine Learning (2020)
 2) Angular 9 and Deep Learning (2020)
 3) Angular 8 & Machine Learning (2020)
 4) AI/ML/DL: Concepts and Code (2020)
 5) Bash Programming on Mac (2020)
 6) TensorFlow 2 Pocket Primer (2019)
 7) TensorFlow 1.x Pocket Primer (2019)
 8) Python for TensorFlow (2019)
 9) C Programming Pocket Primer (2019)
About Me: Less Recent Books
 10) RegEx Pocket Primer (2018)
 11) Data Cleaning Pocket Primer (2018)
 12) Angular Pocket Primer (2017)
 13) Android Pocket Primer (2017)
 14) CSS3 Pocket Primer (2016)
 15) SVG Pocket Primer (2016)
 16) Python Pocket Primer (2015)
 17) D3 Pocket Primer (2015)
 18) HTML5 Mobile Pocket Primer (2014)
About Me: Older Books
 19) jQuery, CSS3, and HTML5 (2013)
 20) HTML5 Pocket Primer (2013)
 21) jQuery Pocket Primer (2013)
 22) HTML5 Canvas (2012)
 23) Flash on Android (2011)
 24) Web 2.0 Fundamentals (2010)
 25) MS Silverlight Graphics (2008)
 26) Fundamentals of SVG (2003)
 27) Java Graphics Library (2002)
What I do (Training)
ML/DL/RL Instructor at UCSC (Santa Clara):
Machine Learning (Python) (09/09/2019) 10 weeks
Deep Learning (TF2/Keras) (10/04/2019) 10 weeks
Deep Learning (Keras/TF 2) (01/30/2020) 10 weeks
DL, NLP, RL (Adv Topics) (TBD/2020) 10 weeks
NLP (Transformers, BERT, etc) (TBD/2020) 10 weeks
RL (PPO, A3C, SAC, etc) (TBD/2020) 10 weeks
UCSC link:
https://www.ucsc-extension.edu/certificate-program/offering/deep-learning-and-artificial-intelligence-tensorflow
More Related Content

What's hot

Introduction to TensorFlow 2.0
Introduction to TensorFlow 2.0Introduction to TensorFlow 2.0
Introduction to TensorFlow 2.0
Databricks
 
TensorFlow example for AI Ukraine2016
TensorFlow example  for AI Ukraine2016TensorFlow example  for AI Ukraine2016
TensorFlow example for AI Ukraine2016
Andrii Babii
 
Introduction to Deep Learning and TensorFlow
Introduction to Deep Learning and TensorFlowIntroduction to Deep Learning and TensorFlow
Introduction to Deep Learning and TensorFlow
Oswald Campesato
 
Intro to Deep Learning, TensorFlow, and tensorflow.js
Intro to Deep Learning, TensorFlow, and tensorflow.jsIntro to Deep Learning, TensorFlow, and tensorflow.js
Intro to Deep Learning, TensorFlow, and tensorflow.js
Oswald Campesato
 
Deep Learning in Your Browser
Deep Learning in Your BrowserDeep Learning in Your Browser
Deep Learning in Your Browser
Oswald Campesato
 
Introduction To TensorFlow | Deep Learning Using TensorFlow | CloudxLab
Introduction To TensorFlow | Deep Learning Using TensorFlow | CloudxLabIntroduction To TensorFlow | Deep Learning Using TensorFlow | CloudxLab
Introduction To TensorFlow | Deep Learning Using TensorFlow | CloudxLab
CloudxLab
 
Tensorflow - Intro (2017)
Tensorflow - Intro (2017)Tensorflow - Intro (2017)
Tensorflow - Intro (2017)
Alessio Tonioni
 
Google TensorFlow Tutorial
Google TensorFlow TutorialGoogle TensorFlow Tutorial
Google TensorFlow Tutorial
台灣資料科學年會
 
Introduction to TensorFlow, by Machine Learning at Berkeley
Introduction to TensorFlow, by Machine Learning at BerkeleyIntroduction to TensorFlow, by Machine Learning at Berkeley
Introduction to TensorFlow, by Machine Learning at Berkeley
Ted Xiao
 
TensorFlow
TensorFlowTensorFlow
TensorFlow
jirimaterna
 
Introduction to Tensorflow
Introduction to TensorflowIntroduction to Tensorflow
Introduction to Tensorflow
Tzar Umang
 
Tensor board
Tensor boardTensor board
Tensor board
Sung Kim
 
Deep Learning in your Browser: powered by WebGL
Deep Learning in your Browser: powered by WebGLDeep Learning in your Browser: powered by WebGL
Deep Learning in your Browser: powered by WebGL
Oswald Campesato
 
"PyTorch Deep Learning Framework: Status and Directions," a Presentation from...
"PyTorch Deep Learning Framework: Status and Directions," a Presentation from..."PyTorch Deep Learning Framework: Status and Directions," a Presentation from...
"PyTorch Deep Learning Framework: Status and Directions," a Presentation from...
Edge AI and Vision Alliance
 
TensorFlow Tutorial
TensorFlow TutorialTensorFlow Tutorial
TensorFlow Tutorial
NamHyuk Ahn
 
Introduction to Machine Learning with TensorFlow
Introduction to Machine Learning with TensorFlowIntroduction to Machine Learning with TensorFlow
Introduction to Machine Learning with TensorFlow
Paolo Tomeo
 
TensorFlow Tutorial | Deep Learning Using TensorFlow | TensorFlow Tutorial Py...
TensorFlow Tutorial | Deep Learning Using TensorFlow | TensorFlow Tutorial Py...TensorFlow Tutorial | Deep Learning Using TensorFlow | TensorFlow Tutorial Py...
TensorFlow Tutorial | Deep Learning Using TensorFlow | TensorFlow Tutorial Py...
Edureka!
 
Tensor flow (1)
Tensor flow (1)Tensor flow (1)
Tensor flow (1)
景逸 王
 
Explanation on Tensorflow example -Deep mnist for expert
Explanation on Tensorflow example -Deep mnist for expertExplanation on Tensorflow example -Deep mnist for expert
Explanation on Tensorflow example -Deep mnist for expert
홍배 김
 
Introduction To TensorFlow | Deep Learning with TensorFlow | TensorFlow For B...
Introduction To TensorFlow | Deep Learning with TensorFlow | TensorFlow For B...Introduction To TensorFlow | Deep Learning with TensorFlow | TensorFlow For B...
Introduction To TensorFlow | Deep Learning with TensorFlow | TensorFlow For B...
Edureka!
 

What's hot (20)

Introduction to TensorFlow 2.0
Introduction to TensorFlow 2.0Introduction to TensorFlow 2.0
Introduction to TensorFlow 2.0
 
TensorFlow example for AI Ukraine2016
TensorFlow example  for AI Ukraine2016TensorFlow example  for AI Ukraine2016
TensorFlow example for AI Ukraine2016
 
Introduction to Deep Learning and TensorFlow
Introduction to Deep Learning and TensorFlowIntroduction to Deep Learning and TensorFlow
Introduction to Deep Learning and TensorFlow
 
Intro to Deep Learning, TensorFlow, and tensorflow.js
Intro to Deep Learning, TensorFlow, and tensorflow.jsIntro to Deep Learning, TensorFlow, and tensorflow.js
Intro to Deep Learning, TensorFlow, and tensorflow.js
 
Deep Learning in Your Browser
Deep Learning in Your BrowserDeep Learning in Your Browser
Deep Learning in Your Browser
 
Introduction To TensorFlow | Deep Learning Using TensorFlow | CloudxLab
Introduction To TensorFlow | Deep Learning Using TensorFlow | CloudxLabIntroduction To TensorFlow | Deep Learning Using TensorFlow | CloudxLab
Introduction To TensorFlow | Deep Learning Using TensorFlow | CloudxLab
 
Tensorflow - Intro (2017)
Tensorflow - Intro (2017)Tensorflow - Intro (2017)
Tensorflow - Intro (2017)
 
Google TensorFlow Tutorial
Google TensorFlow TutorialGoogle TensorFlow Tutorial
Google TensorFlow Tutorial
 
Introduction to TensorFlow, by Machine Learning at Berkeley
Introduction to TensorFlow, by Machine Learning at BerkeleyIntroduction to TensorFlow, by Machine Learning at Berkeley
Introduction to TensorFlow, by Machine Learning at Berkeley
 
TensorFlow
TensorFlowTensorFlow
TensorFlow
 
Introduction to Tensorflow
Introduction to TensorflowIntroduction to Tensorflow
Introduction to Tensorflow
 
Tensor board
Tensor boardTensor board
Tensor board
 
Deep Learning in your Browser: powered by WebGL
Deep Learning in your Browser: powered by WebGLDeep Learning in your Browser: powered by WebGL
Deep Learning in your Browser: powered by WebGL
 
"PyTorch Deep Learning Framework: Status and Directions," a Presentation from...
"PyTorch Deep Learning Framework: Status and Directions," a Presentation from..."PyTorch Deep Learning Framework: Status and Directions," a Presentation from...
"PyTorch Deep Learning Framework: Status and Directions," a Presentation from...
 
TensorFlow Tutorial
TensorFlow TutorialTensorFlow Tutorial
TensorFlow Tutorial
 
Introduction to Machine Learning with TensorFlow
Introduction to Machine Learning with TensorFlowIntroduction to Machine Learning with TensorFlow
Introduction to Machine Learning with TensorFlow
 
TensorFlow Tutorial | Deep Learning Using TensorFlow | TensorFlow Tutorial Py...
TensorFlow Tutorial | Deep Learning Using TensorFlow | TensorFlow Tutorial Py...TensorFlow Tutorial | Deep Learning Using TensorFlow | TensorFlow Tutorial Py...
TensorFlow Tutorial | Deep Learning Using TensorFlow | TensorFlow Tutorial Py...
 
Tensor flow (1)
Tensor flow (1)Tensor flow (1)
Tensor flow (1)
 
Explanation on Tensorflow example -Deep mnist for expert
Explanation on Tensorflow example -Deep mnist for expertExplanation on Tensorflow example -Deep mnist for expert
Explanation on Tensorflow example -Deep mnist for expert
 
Introduction To TensorFlow | Deep Learning with TensorFlow | TensorFlow For B...
Introduction To TensorFlow | Deep Learning with TensorFlow | TensorFlow For B...Introduction To TensorFlow | Deep Learning with TensorFlow | TensorFlow For B...
Introduction To TensorFlow | Deep Learning with TensorFlow | TensorFlow For B...
 

Similar to Introduction to TensorFlow 2 and Keras

Deep Learning, Scala, and Spark
Deep Learning, Scala, and SparkDeep Learning, Scala, and Spark
Deep Learning, Scala, and Spark
Oswald Campesato
 
TensorFlow Tutorial.pdf
TensorFlow Tutorial.pdfTensorFlow Tutorial.pdf
TensorFlow Tutorial.pdf
Antonio Espinosa
 
TensorFlow for IITians
TensorFlow for IITiansTensorFlow for IITians
TensorFlow for IITians
Ashish Bansal
 
Introduction to Deep Learning, Keras, and TensorFlow
Introduction to Deep Learning, Keras, and TensorFlowIntroduction to Deep Learning, Keras, and TensorFlow
Introduction to Deep Learning, Keras, and TensorFlow
Sri Ambati
 
Boosting machine learning workflow with TensorFlow 2.0
Boosting machine learning workflow with TensorFlow 2.0Boosting machine learning workflow with TensorFlow 2.0
Boosting machine learning workflow with TensorFlow 2.0
Jeongkyu Shin
 
TensorFlow Dev Summit 2018 Extended: TensorFlow Eager Execution
TensorFlow Dev Summit 2018 Extended: TensorFlow Eager ExecutionTensorFlow Dev Summit 2018 Extended: TensorFlow Eager Execution
TensorFlow Dev Summit 2018 Extended: TensorFlow Eager Execution
Taegyun Jeon
 
Hands-on Learning with KubeFlow + Keras/TensorFlow 2.0 + TF Extended (TFX) + ...
Hands-on Learning with KubeFlow + Keras/TensorFlow 2.0 + TF Extended (TFX) + ...Hands-on Learning with KubeFlow + Keras/TensorFlow 2.0 + TF Extended (TFX) + ...
Hands-on Learning with KubeFlow + Keras/TensorFlow 2.0 + TF Extended (TFX) + ...
Chris Fregly
 
190111 tf2 preview_jwkang_pub
190111 tf2 preview_jwkang_pub190111 tf2 preview_jwkang_pub
190111 tf2 preview_jwkang_pub
Jaewook. Kang
 
Paolo Galeone - Dissecting tf.function to discover auto graph strengths and s...
Paolo Galeone - Dissecting tf.function to discover auto graph strengths and s...Paolo Galeone - Dissecting tf.function to discover auto graph strengths and s...
Paolo Galeone - Dissecting tf.function to discover auto graph strengths and s...
MeetupDataScienceRoma
 
Advanced Spark and TensorFlow Meetup May 26, 2016
Advanced Spark and TensorFlow Meetup May 26, 2016Advanced Spark and TensorFlow Meetup May 26, 2016
Advanced Spark and TensorFlow Meetup May 26, 2016
Chris Fregly
 
Theano vs TensorFlow | Edureka
Theano vs TensorFlow | EdurekaTheano vs TensorFlow | Edureka
Theano vs TensorFlow | Edureka
Edureka!
 
Real Time Streaming Data with Kafka and TensorFlow (Yong Tang, MobileIron) Ka...
Real Time Streaming Data with Kafka and TensorFlow (Yong Tang, MobileIron) Ka...Real Time Streaming Data with Kafka and TensorFlow (Yong Tang, MobileIron) Ka...
Real Time Streaming Data with Kafka and TensorFlow (Yong Tang, MobileIron) Ka...
confluent
 
Meetup tensorframes
Meetup tensorframesMeetup tensorframes
Meetup tensorframes
Paolo Platter
 
Tensorflow in practice by Engineer - donghwi cha
Tensorflow in practice by Engineer - donghwi chaTensorflow in practice by Engineer - donghwi cha
Tensorflow in practice by Engineer - donghwi cha
Donghwi Cha
 
Tensorflow basics
Tensorflow basicsTensorflow basics
Tensorflow basics
IshaNemaCSPOcertifie
 
Lucio Floretta - TensorFlow and Deep Learning without a PhD - Codemotion Mila...
Lucio Floretta - TensorFlow and Deep Learning without a PhD - Codemotion Mila...Lucio Floretta - TensorFlow and Deep Learning without a PhD - Codemotion Mila...
Lucio Floretta - TensorFlow and Deep Learning without a PhD - Codemotion Mila...
Codemotion
 
TensorFlow Dev Summit 2017 요약
TensorFlow Dev Summit 2017 요약TensorFlow Dev Summit 2017 요약
TensorFlow Dev Summit 2017 요약
Jin Joong Kim
 
NTU ML TENSORFLOW
NTU ML TENSORFLOWNTU ML TENSORFLOW
NTU ML TENSORFLOW
Mark Chang
 
From Tensorflow Graph to Tensorflow Eager
From Tensorflow Graph to Tensorflow EagerFrom Tensorflow Graph to Tensorflow Eager
From Tensorflow Graph to Tensorflow Eager
Guy Hadash
 
Swift for tensorflow
Swift for tensorflowSwift for tensorflow
Swift for tensorflow
규영 허
 

Similar to Introduction to TensorFlow 2 and Keras (20)

Deep Learning, Scala, and Spark
Deep Learning, Scala, and SparkDeep Learning, Scala, and Spark
Deep Learning, Scala, and Spark
 
TensorFlow Tutorial.pdf
TensorFlow Tutorial.pdfTensorFlow Tutorial.pdf
TensorFlow Tutorial.pdf
 
TensorFlow for IITians
TensorFlow for IITiansTensorFlow for IITians
TensorFlow for IITians
 
Introduction to Deep Learning, Keras, and TensorFlow
Introduction to Deep Learning, Keras, and TensorFlowIntroduction to Deep Learning, Keras, and TensorFlow
Introduction to Deep Learning, Keras, and TensorFlow
 
Boosting machine learning workflow with TensorFlow 2.0
Boosting machine learning workflow with TensorFlow 2.0Boosting machine learning workflow with TensorFlow 2.0
Boosting machine learning workflow with TensorFlow 2.0
 
TensorFlow Dev Summit 2018 Extended: TensorFlow Eager Execution
TensorFlow Dev Summit 2018 Extended: TensorFlow Eager ExecutionTensorFlow Dev Summit 2018 Extended: TensorFlow Eager Execution
TensorFlow Dev Summit 2018 Extended: TensorFlow Eager Execution
 
Hands-on Learning with KubeFlow + Keras/TensorFlow 2.0 + TF Extended (TFX) + ...
Hands-on Learning with KubeFlow + Keras/TensorFlow 2.0 + TF Extended (TFX) + ...Hands-on Learning with KubeFlow + Keras/TensorFlow 2.0 + TF Extended (TFX) + ...
Hands-on Learning with KubeFlow + Keras/TensorFlow 2.0 + TF Extended (TFX) + ...
 
190111 tf2 preview_jwkang_pub
190111 tf2 preview_jwkang_pub190111 tf2 preview_jwkang_pub
190111 tf2 preview_jwkang_pub
 
Paolo Galeone - Dissecting tf.function to discover auto graph strengths and s...
Paolo Galeone - Dissecting tf.function to discover auto graph strengths and s...Paolo Galeone - Dissecting tf.function to discover auto graph strengths and s...
Paolo Galeone - Dissecting tf.function to discover auto graph strengths and s...
 
Advanced Spark and TensorFlow Meetup May 26, 2016
Advanced Spark and TensorFlow Meetup May 26, 2016Advanced Spark and TensorFlow Meetup May 26, 2016
Advanced Spark and TensorFlow Meetup May 26, 2016
 
Theano vs TensorFlow | Edureka
Theano vs TensorFlow | EdurekaTheano vs TensorFlow | Edureka
Theano vs TensorFlow | Edureka
 
Real Time Streaming Data with Kafka and TensorFlow (Yong Tang, MobileIron) Ka...
Real Time Streaming Data with Kafka and TensorFlow (Yong Tang, MobileIron) Ka...Real Time Streaming Data with Kafka and TensorFlow (Yong Tang, MobileIron) Ka...
Real Time Streaming Data with Kafka and TensorFlow (Yong Tang, MobileIron) Ka...
 
Meetup tensorframes
Meetup tensorframesMeetup tensorframes
Meetup tensorframes
 
Tensorflow in practice by Engineer - donghwi cha
Tensorflow in practice by Engineer - donghwi chaTensorflow in practice by Engineer - donghwi cha
Tensorflow in practice by Engineer - donghwi cha
 
Tensorflow basics
Tensorflow basicsTensorflow basics
Tensorflow basics
 
Lucio Floretta - TensorFlow and Deep Learning without a PhD - Codemotion Mila...
Lucio Floretta - TensorFlow and Deep Learning without a PhD - Codemotion Mila...Lucio Floretta - TensorFlow and Deep Learning without a PhD - Codemotion Mila...
Lucio Floretta - TensorFlow and Deep Learning without a PhD - Codemotion Mila...
 
TensorFlow Dev Summit 2017 요약
TensorFlow Dev Summit 2017 요약TensorFlow Dev Summit 2017 요약
TensorFlow Dev Summit 2017 요약
 
NTU ML TENSORFLOW
NTU ML TENSORFLOWNTU ML TENSORFLOW
NTU ML TENSORFLOW
 
From Tensorflow Graph to Tensorflow Eager
From Tensorflow Graph to Tensorflow EagerFrom Tensorflow Graph to Tensorflow Eager
From Tensorflow Graph to Tensorflow Eager
 
Swift for tensorflow
Swift for tensorflowSwift for tensorflow
Swift for tensorflow
 

More from Oswald Campesato

Introduction to Deep Learning
Introduction to Deep LearningIntroduction to Deep Learning
Introduction to Deep Learning
Oswald Campesato
 
"An Introduction to AI and Deep Learning"
"An Introduction to AI and Deep Learning""An Introduction to AI and Deep Learning"
"An Introduction to AI and Deep Learning"
Oswald Campesato
 
Introduction to Deep Learning for Non-Programmers
Introduction to Deep Learning for Non-ProgrammersIntroduction to Deep Learning for Non-Programmers
Introduction to Deep Learning for Non-Programmers
Oswald Campesato
 
Deep Learning and TensorFlow
Deep Learning and TensorFlowDeep Learning and TensorFlow
Deep Learning and TensorFlow
Oswald Campesato
 
Introduction to Deep Learning and Tensorflow
Introduction to Deep Learning and TensorflowIntroduction to Deep Learning and Tensorflow
Introduction to Deep Learning and Tensorflow
Oswald Campesato
 
Deep Learning, Keras, and TensorFlow
Deep Learning, Keras, and TensorFlowDeep Learning, Keras, and TensorFlow
Deep Learning, Keras, and TensorFlow
Oswald Campesato
 
C++ and Deep Learning
C++ and Deep LearningC++ and Deep Learning
C++ and Deep Learning
Oswald Campesato
 
Scala and Deep Learning
Scala and Deep LearningScala and Deep Learning
Scala and Deep Learning
Oswald Campesato
 
Java and Deep Learning (Introduction)
Java and Deep Learning (Introduction)Java and Deep Learning (Introduction)
Java and Deep Learning (Introduction)
Oswald Campesato
 
Deep Learning: R with Keras and TensorFlow
Deep Learning: R with Keras and TensorFlowDeep Learning: R with Keras and TensorFlow
Deep Learning: R with Keras and TensorFlow
Oswald Campesato
 
Java and Deep Learning
Java and Deep LearningJava and Deep Learning
Java and Deep Learning
Oswald Campesato
 
Diving into Deep Learning (Silicon Valley Code Camp 2017)
Diving into Deep Learning (Silicon Valley Code Camp 2017)Diving into Deep Learning (Silicon Valley Code Camp 2017)
Diving into Deep Learning (Silicon Valley Code Camp 2017)
Oswald Campesato
 
Android and Deep Learning
Android and Deep LearningAndroid and Deep Learning
Android and Deep Learning
Oswald Campesato
 
Introduction to Kotlin
Introduction to KotlinIntroduction to Kotlin
Introduction to Kotlin
Oswald Campesato
 
TypeScript and Deep Learning
TypeScript and Deep LearningTypeScript and Deep Learning
TypeScript and Deep Learning
Oswald Campesato
 
D3, TypeScript, and Deep Learning
D3, TypeScript, and Deep LearningD3, TypeScript, and Deep Learning
D3, TypeScript, and Deep Learning
Oswald Campesato
 
D3, TypeScript, and Deep Learning
D3, TypeScript, and Deep LearningD3, TypeScript, and Deep Learning
D3, TypeScript, and Deep Learning
Oswald Campesato
 
Angular and Deep Learning
Angular and Deep LearningAngular and Deep Learning
Angular and Deep Learning
Oswald Campesato
 

More from Oswald Campesato (18)

Introduction to Deep Learning
Introduction to Deep LearningIntroduction to Deep Learning
Introduction to Deep Learning
 
"An Introduction to AI and Deep Learning"
"An Introduction to AI and Deep Learning""An Introduction to AI and Deep Learning"
"An Introduction to AI and Deep Learning"
 
Introduction to Deep Learning for Non-Programmers
Introduction to Deep Learning for Non-ProgrammersIntroduction to Deep Learning for Non-Programmers
Introduction to Deep Learning for Non-Programmers
 
Deep Learning and TensorFlow
Deep Learning and TensorFlowDeep Learning and TensorFlow
Deep Learning and TensorFlow
 
Introduction to Deep Learning and Tensorflow
Introduction to Deep Learning and TensorflowIntroduction to Deep Learning and Tensorflow
Introduction to Deep Learning and Tensorflow
 
Deep Learning, Keras, and TensorFlow
Deep Learning, Keras, and TensorFlowDeep Learning, Keras, and TensorFlow
Deep Learning, Keras, and TensorFlow
 
C++ and Deep Learning
C++ and Deep LearningC++ and Deep Learning
C++ and Deep Learning
 
Scala and Deep Learning
Scala and Deep LearningScala and Deep Learning
Scala and Deep Learning
 
Java and Deep Learning (Introduction)
Java and Deep Learning (Introduction)Java and Deep Learning (Introduction)
Java and Deep Learning (Introduction)
 
Deep Learning: R with Keras and TensorFlow
Deep Learning: R with Keras and TensorFlowDeep Learning: R with Keras and TensorFlow
Deep Learning: R with Keras and TensorFlow
 
Java and Deep Learning
Java and Deep LearningJava and Deep Learning
Java and Deep Learning
 
Diving into Deep Learning (Silicon Valley Code Camp 2017)
Diving into Deep Learning (Silicon Valley Code Camp 2017)Diving into Deep Learning (Silicon Valley Code Camp 2017)
Diving into Deep Learning (Silicon Valley Code Camp 2017)
 
Android and Deep Learning
Android and Deep LearningAndroid and Deep Learning
Android and Deep Learning
 
Introduction to Kotlin
Introduction to KotlinIntroduction to Kotlin
Introduction to Kotlin
 
TypeScript and Deep Learning
TypeScript and Deep LearningTypeScript and Deep Learning
TypeScript and Deep Learning
 
D3, TypeScript, and Deep Learning
D3, TypeScript, and Deep LearningD3, TypeScript, and Deep Learning
D3, TypeScript, and Deep Learning
 
D3, TypeScript, and Deep Learning
D3, TypeScript, and Deep LearningD3, TypeScript, and Deep Learning
D3, TypeScript, and Deep Learning
 
Angular and Deep Learning
Angular and Deep LearningAngular and Deep Learning
Angular and Deep Learning
 

Recently uploaded

Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
BookNet Canada
 
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
DanBrown980551
 
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMsTo Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
Paul Groth
 
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
James Anderson
 
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptxIOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
Abida Shariff
 
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Ramesh Iyer
 
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdfSmart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
91mobiles
 
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdfFIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance
 
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
Product School
 
PHP Frameworks: I want to break free (IPC Berlin 2024)
PHP Frameworks: I want to break free (IPC Berlin 2024)PHP Frameworks: I want to break free (IPC Berlin 2024)
PHP Frameworks: I want to break free (IPC Berlin 2024)
Ralf Eggert
 
"Impact of front-end architecture on development cost", Viktor Turskyi
"Impact of front-end architecture on development cost", Viktor Turskyi"Impact of front-end architecture on development cost", Viktor Turskyi
"Impact of front-end architecture on development cost", Viktor Turskyi
Fwdays
 
Neuro-symbolic is not enough, we need neuro-*semantic*
Neuro-symbolic is not enough, we need neuro-*semantic*Neuro-symbolic is not enough, we need neuro-*semantic*
Neuro-symbolic is not enough, we need neuro-*semantic*
Frank van Harmelen
 
Assuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyesAssuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyes
ThousandEyes
 
Search and Society: Reimagining Information Access for Radical Futures
Search and Society: Reimagining Information Access for Radical FuturesSearch and Society: Reimagining Information Access for Radical Futures
Search and Society: Reimagining Information Access for Radical Futures
Bhaskar Mitra
 
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Product School
 
Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...
Product School
 
Leading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdfLeading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdf
OnBoard
 
Connector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a buttonConnector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a button
DianaGray10
 
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualitySoftware Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Inflectra
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
Laura Byrne
 

Recently uploaded (20)

Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
 
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...
 
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMsTo Graph or Not to Graph Knowledge Graph Architectures and LLMs
To Graph or Not to Graph Knowledge Graph Architectures and LLMs
 
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
 
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptxIOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
IOS-PENTESTING-BEGINNERS-PRACTICAL-GUIDE-.pptx
 
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...
 
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdfSmart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
 
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdfFIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdf
 
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
From Daily Decisions to Bottom Line: Connecting Product Work to Revenue by VP...
 
PHP Frameworks: I want to break free (IPC Berlin 2024)
PHP Frameworks: I want to break free (IPC Berlin 2024)PHP Frameworks: I want to break free (IPC Berlin 2024)
PHP Frameworks: I want to break free (IPC Berlin 2024)
 
"Impact of front-end architecture on development cost", Viktor Turskyi
"Impact of front-end architecture on development cost", Viktor Turskyi"Impact of front-end architecture on development cost", Viktor Turskyi
"Impact of front-end architecture on development cost", Viktor Turskyi
 
Neuro-symbolic is not enough, we need neuro-*semantic*
Neuro-symbolic is not enough, we need neuro-*semantic*Neuro-symbolic is not enough, we need neuro-*semantic*
Neuro-symbolic is not enough, we need neuro-*semantic*
 
Assuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyesAssuring Contact Center Experiences for Your Customers With ThousandEyes
Assuring Contact Center Experiences for Your Customers With ThousandEyes
 
Search and Society: Reimagining Information Access for Radical Futures
Search and Society: Reimagining Information Access for Radical FuturesSearch and Society: Reimagining Information Access for Radical Futures
Search and Society: Reimagining Information Access for Radical Futures
 
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
Unsubscribed: Combat Subscription Fatigue With a Membership Mentality by Head...
 
Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...Mission to Decommission: Importance of Decommissioning Products to Increase E...
Mission to Decommission: Importance of Decommissioning Products to Increase E...
 
Leading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdfLeading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdf
 
Connector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a buttonConnector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a button
 
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualitySoftware Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
 

Introduction to TensorFlow 2 and Keras

  • 14. What is a Tensor? TF tensors are n-dimensional arrays TF tensors are very similar to numpy ndarrays scalar number: a zeroth-order tensor vector: a first-order tensor matrix: a second-order tensor 3-dimensional array: a third-order tensor https://dzone.com/articles/tensorflow-simplified-examples
  • 15. TensorFlow Eager Execution An imperative interface to TF Fast debugging & immediate run-time errors => TF 2 requires Python 3.x (not Python 2.x) No static graphs or sessions
  • 16. TensorFlow Eager Execution integration with Python tools Supports dynamic models + Python control flow support for custom and higher-order gradients => Default mode in TensorFlow 2.0
  • 17. TensorFlow Eager Execution  import tensorflow as tf  x = [[2.]]  # m = tf.matmul(x, x) <= alias for tf.linalg.matmul  m = tf.linalg.matmul(x, x)  print("m:",m)  print("m:",m.numpy()) #Output: #m: tf.Tensor([[4.]], shape=(1, 1), dtype=float32) #m: [[4.]]
  • 18. Basic TF 2 Operations Working with Constants Working with Strings Working with Tensors Working with Operators Arrays in TF 2 Multiplying Two Arrays Convert Python Arrays to TF Arrays Conflicting Types
  • 19. TensorFlow “primitive types”  tf.constant: > initialized immediately and immutable  import tensorflow as tf  aconst = tf.constant(3.0)  print(aconst) # output for TF 2: #tf.Tensor(3.0, shape=(), dtype=float32) # output for TF 1.x: #Tensor("Const:0", shape=(), dtype=float32)
  • 20. TensorFlow “primitive types” tf.Variable (a class): + initial value is required + updated during training + in-memory buffer (saved/restored from disk) + can be shared in a distributed environment + they hold learned parameters of a model
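To make the tf.Variable bullets concrete, here is a minimal sketch (the variable name and values are purely illustrative) showing that a Variable needs an initial value and can be updated in place:

import tensorflow as tf

# a Variable must be given an initial value
v = tf.Variable([[1.0, 2.0], [3.0, 4.0]], name="weights")

# Variables can be updated in place (as happens during training)
v.assign(v * 2)                 # element-wise update
v.assign_add(tf.ones((2, 2)))   # add 1 to every entry

print("v:", v.numpy())
# [[3. 5.]
#  [7. 9.]]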
  • 21. TensorFlow Arithmetic import tensorflow as tf # arith1.py a = tf.add(4, 2) b = tf.subtract(8, 6) c = tf.multiply(a, 3) d = tf.math.divide(a, 6)  print(a) # 6  print(b) # 2  print(c) # 18  print(d) # 1.0  print("a:",a.numpy())  print("b:",b.numpy())  print("c:",c.numpy())  print("d:",d.numpy())
  • 22. TensorFlow Arithmetic  tf.Tensor(6, shape=(), dtype=int32)  tf.Tensor(2, shape=(), dtype=int32)  tf.Tensor(18, shape=(), dtype=int32)  tf.Tensor(1.0, shape=(), dtype=float64)  6  2  18  1.0
  • 23. TF 2 Loops and Arrays Using “for” loops Using “while” loops tf.reduce_prod() tf.reduce_sum()
  • 24. A “for” Loop Example  import tensorflow as tf  x = tf.Variable(0, name='x')  for i in range(5):  x = x + 1  print("x:",x)
  • 25. A “for” Loop Example  x: tf.Tensor(1, shape=(), dtype=int32)  x: tf.Tensor(2, shape=(), dtype=int32)  x: tf.Tensor(3, shape=(), dtype=int32)  x: tf.Tensor(4, shape=(), dtype=int32)  x: tf.Tensor(5, shape=(), dtype=int32)
  • 26. A “while” Loop Example  import tensorflow as tf  a = tf.constant(12)  while not tf.equal(a, 1):  if tf.equal(a % 2, 0):  a = a / 2  else:  a = 3 * a + 1  print(a)
  • 27. A “while” Loop Example  tf.Tensor(6.0, shape=(), dtype=float64)  tf.Tensor(3.0, shape=(), dtype=float64)  tf.Tensor(10.0, shape=(), dtype=float64)  tf.Tensor(5.0, shape=(), dtype=float64)  tf.Tensor(16.0, shape=(), dtype=float64)  tf.Tensor(8.0, shape=(), dtype=float64)  tf.Tensor(4.0, shape=(), dtype=float64)  tf.Tensor(2.0, shape=(), dtype=float64)  tf.Tensor(1.0, shape=(), dtype=float64)
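The remaining two items on the "TF 2 Loops and Arrays" slide, tf.reduce_sum() and tf.reduce_prod(), also run eagerly; a minimal sketch with illustrative values:

import tensorflow as tf

x = tf.constant([[1, 2, 3], [4, 5, 6]])
print("sum all:   ", tf.reduce_sum(x).numpy())          # 21
print("sum axis 0:", tf.reduce_sum(x, axis=0).numpy())  # [5 7 9]
print("prod all:  ", tf.reduce_prod(x).numpy())         # 720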
  • 28. Useful TF 2 APIs tf.shape() tf.rank() tf.range() tf.reshape() tf.ones() tf.zeros() tf.fill()  tf.random.normal()  tf.random.truncated_normal()
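A brief sketch (shapes and fill values are arbitrary) exercising several of the APIs listed above:

import tensorflow as tf

t = tf.range(12)                         # 0..11
t = tf.reshape(t, (3, 4))                # shape (3, 4)
print("shape:", tf.shape(t).numpy())     # [3 4]
print("rank: ", tf.rank(t).numpy())      # 2
print(tf.zeros((2, 2)).numpy())
print(tf.fill((2, 3), 7).numpy())
print(tf.random.normal((2, 2)).numpy())            # random values
print(tf.random.truncated_normal((2, 2)).numpy())  # truncated normal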
  • 29. Some TF 2 “lazy operators” map() filter() flat_map() batch() take() zip() Combined via “method chaining” (see the sketch after the next two slides)
  • 30. TF 2 “lazy operators”  filter():  uses Boolean logic to "filter" the elements in an array to determine which elements satisfy the Boolean condition  map(): a projection  this operator "applies" a lambda expression to each input element  flat_map():  maps a single element of the input dataset to a Dataset of elements
  • 31. TF 2 “lazy operators”  batch(n):  processes a "batch" of n elements during each iteration  repeat(n):  repeats its input values n times  take(n):  operator "takes" n input values
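A minimal sketch of combining these lazy operators via method chaining (the dataset and the lambdas are illustrative):

import tensorflow as tf

ds = tf.data.Dataset.range(10)
ds = (ds.filter(lambda x: x % 2 == 0)   # keep the even numbers
        .map(lambda x: x * x)           # square them
        .batch(2)                       # group into batches of 2
        .take(2))                       # keep only the first 2 batches

for batch in ds:
    print(batch.numpy())
# [0 4]
# [16 36]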
  • 32. TF 2 tf.data.Dataset TF “wrapper” class around data sources Located in the tf.data.Dataset namespace Supports lambda expressions Supports lazy operators Uses generators in TF 2 (instead of the iterators in TF 1.x)
  • 33. tf.data.Dataset.from_tensors()  import tensorflow as tf  #combine the input into one element  t1 = tf.constant([[1, 2], [3, 4]])  ds1 = tf.data.Dataset.from_tensors(t1) # output: [[1, 2], [3, 4]]
  • 34. tf.data.Dataset.from_tensor_slices()  import tensorflow as tf  #separate element for each item  t2 = tf.constant([[1, 2], [3, 4]])  ds2 = tf.data.Dataset.from_tensor_slices(t2) # output: [1, 2], [3, 4]
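Iterating over both datasets makes the difference between the two constructors visible; a small sketch assuming the same 2x2 constant as above:

import tensorflow as tf

t = tf.constant([[1, 2], [3, 4]])

for elem in tf.data.Dataset.from_tensors(t):
    print("from_tensors:      ", elem.numpy())   # [[1 2] [3 4]] as one element

for elem in tf.data.Dataset.from_tensor_slices(t):
    print("from_tensor_slices:", elem.numpy())   # [1 2] then [3 4]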
  • 35. TF 2 Datasets: code sample  import tensorflow as tf # tf2-dataset.py  import numpy as np x = np.arange(0, 10)  # create a dataset from a Numpy array ds = tf.data.Dataset.from_tensor_slices(x)
  • 36. TF 1.x Datasets: iterator  import tensorflow as tf # tf1x-plusone.py  import numpy as np  x = np.arange(0, 10)  # create dataset object from Numpy array  ds = tf.data.Dataset.from_tensor_slices(x)  ds.map(lambda x: x + 1)  # create a one-shot iterator <= TF 1.x  iterator = ds.make_one_shot_iterator()  for i in range(10):  val = iterator.get_next()  print("val:",val)
  • 37. TF 2 Datasets: generator import tensorflow as tf # tf2-plusone.py import numpy as np x = np.arange(0, 5) # 0, 1, 2, 3, 4 def gener(): for i in x: yield (3*i) ds = tf.data.Dataset.from_generator(gener, (tf.int64)) for value in ds.take(len(x)): print("1value:",value) for value in ds.take(2*len(x)): print("2value:",value)
  • 38. TF 2 Datasets: generator  1value: tf.Tensor(0, shape=(), dtype=int64)  1value: tf.Tensor(3, shape=(), dtype=int64)  1value: tf.Tensor(6, shape=(), dtype=int64)  1value: tf.Tensor(9, shape=(), dtype=int64)  1value: tf.Tensor(12, shape=(), dtype=int64)  2value: tf.Tensor(0, shape=(), dtype=int64)  2value: tf.Tensor(3, shape=(), dtype=int64)  2value: tf.Tensor(6, shape=(), dtype=int64)  2value: tf.Tensor(9, shape=(), dtype=int64)  2value: tf.Tensor(12, shape=(), dtype=int64)
  • 39. TF 1.x map() and take()  import tensorflow as tf # tf 1.12.0  tf.enable_eager_execution()  ds = tf.data.TextLineDataset("file.txt")  ds = ds.map(lambda line: tf.string_split([line]).values)  try:  for value in ds.take(2):  print("value:",value)  except tf.errors.OutOfRangeError:  pass
  • 40. TF 1.x map() and take(): file.txt  this is file line #1  this is file line #2  this is file line #3  this is file line #4  this is file line #5
  • 41. TF 1.x map() and take(): output  ('value:', <tf.Tensor: id=16, shape=(5,), dtype=string, numpy=array(['this', 'is', 'file', 'line', '#1'], dtype=object)>)  ('value:', <tf.Tensor: id=18, shape=(5,), dtype=string, numpy=array(['this', 'is', 'file', 'line', '#2'], dtype=object)>)
  • 42. TF 2 map() and take()  import tensorflow as tf  import numpy as np  x = np.array([[1],[2],[3],[4]])  # make a ds from a numpy array  ds = tf.data.Dataset.from_tensor_slices(x)  ds = ds.map(lambda x: x*2).map(lambda x: x+1).map(lambda x: x**3)  for value in ds.take(4):  print("value:",value)
  • 43. TF 2 map() and take(): output  value: tf.Tensor([27], shape=(1,), dtype=int64)  value: tf.Tensor([125], shape=(1,), dtype=int64)  value: tf.Tensor([343], shape=(1,), dtype=int64)  value: tf.Tensor([729], shape=(1,), dtype=int64)
  • 44. TF 2 batch(): EXTRA CREDIT import tensorflow as tf import numpy as np x = np.arange(0, 12) def gener(): i = 0 while(i < len(x)): yield (i, i+1, i+2) # three integers at a time i += 3 ds = tf.data.Dataset.from_generator(gener, (tf.int64,tf.int64,tf.int64)) third = int(len(x)/3) for value in ds.take(third): print("value:",value)
  • 45. TF 2 batch() & zip(): EXTRA CREDIT import tensorflow as tf ds1 = tf.data.Dataset.range(100) ds2 = tf.data.Dataset.range(0, -100, -1) ds3 = tf.data.Dataset.zip((ds1, ds2)) ds4 = ds3.batch(4) for value in ds4.take(10): print("value:",value)
  • 46. TF 2: Working with tf.keras tf.keras.layers tf.keras.models tf.keras.optimizers tf.keras.utils tf.keras.regularizers
  • 47. TF 2: tf.keras.models  tf.keras.models.Sequential(): most common  clone_model(...): Clone any Model instance  load_model(...): Loads a model saved via save_model  model_from_config(...): Instantiates a Keras model from its config  model_from_json(...):  Parses a JSON model config file and returns a model instance  model_from_yaml(...):  Parses a YAML model config file and returns a model instance  save_model(...): Saves a model to an HDF5 file
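A hedged sketch of save_model()/load_model() round-tripping a toy Sequential model (the layer sizes and the file name mymodel.h5 are arbitrary choices):

import tensorflow as tf

model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(4, input_shape=(8,), activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy')

# save to an HDF5 file (mymodel.h5 is an arbitrary name), then restore it
tf.keras.models.save_model(model, "mymodel.h5")
restored = tf.keras.models.load_model("mymodel.h5")
restored.summary()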
  • 48. TF 2: tf.keras.layers  tf.keras.models.Sequential()  tf.keras.layers.Conv2D()  tf.keras.layers.MaxPooling2D()  tf.keras.layers.Flatten()  tf.keras.layers.Dense()  tf.keras.layers.Dropout(0.1)  tf.keras.layers.BatchNormalization()  tf.keras.layers.Embedding()  tf.keras.layers.RNN()  tf.keras.layers.LSTM()  tf.keras.layers.Bidirectional (ex: BERT)
  • 49. TF 2: Activation Functions tf.keras.activations.relu tf.keras.activations.selu tf.keras.activations.elu tf.keras.activations.linear tf.keras.activations.sigmoid tf.keras.activations.softmax tf.keras.activations.softplus tf.keras.activations.tanh Others …
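These activation functions can also be applied directly to tensors; a small sketch with arbitrary input values:

import tensorflow as tf

x = tf.constant([-2.0, 0.0, 3.0])
print(tf.keras.activations.relu(x).numpy())      # [0. 0. 3.]
print(tf.keras.activations.sigmoid(x).numpy())
print(tf.keras.activations.tanh(x).numpy())
# softmax expects at least a 2-D tensor, so reshape first
print(tf.keras.activations.softmax(tf.reshape(x, (1, 3))).numpy())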
  • 50. TF 2: Built-in Datasets tf.keras.datasets.boston_housing tf.keras.datasets.cifar10 tf.keras.datasets.cifar100 tf.keras.datasets.fashion_mnist tf.keras.datasets.imdb tf.keras.datasets.mnist tf.keras.datasets.reuters
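A minimal sketch that loads one of these built-in datasets (MNIST) and checks the array shapes:

import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
print(x_train.shape, y_train.shape)   # (60000, 28, 28) (60000,)
print(x_test.shape,  y_test.shape)    # (10000, 28, 28) (10000,)

# typical preprocessing: scale pixel values to [0, 1]
x_train = x_train / 255.0
x_test  = x_test  / 255.0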
  • 51. TF 2: “Experimental”  tf.keras.experimental.CosineDecay  tf.keras.experimental.CosineDecayRestarts  tf.keras.experimental.LinearCosineDecay  tf.keras.experimental.NoisyLinearCosineDecay  tf.keras.experimental.PeepholeLSTMCell
  • 54. TF 2: tf.losses  tf.losses.BinaryCrossentropy  tf.losses.CategoricalCrossentropy  tf.losses.CategoricalHinge  tf.losses.CosineSimilarity  tf.losses.Hinge  tf.losses.Huber  tf.losses.KLD  tf.losses.KLDivergence  tf.losses.MAE  tf.losses.MSE  tf.losses.SparseCategoricalCrossentropy
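A small sketch applying one of these loss classes (BinaryCrossentropy) to hand-picked labels and predictions:

import tensorflow as tf

loss_fn = tf.keras.losses.BinaryCrossentropy()
y_true = [0.0, 1.0, 1.0, 0.0]   # arbitrary ground-truth labels
y_pred = [0.1, 0.9, 0.8, 0.2]   # arbitrary predicted probabilities
print("loss:", loss_fn(y_true, y_pred).numpy())  # roughly 0.16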
  • 55. Other TF 2 Namespaces tf.keras.regularizers (L1 and L2) tf.keras.utils (to_categorical) tf.keras.preprocessing.text.Tokenizer
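Brief sketches of to_categorical() and the Tokenizer class (the labels and sentences below are arbitrary examples):

import tensorflow as tf

# one-hot encode integer class labels
labels = [0, 2, 1, 2]
print(tf.keras.utils.to_categorical(labels, num_classes=3))

# fit a tokenizer on a tiny corpus, then convert text to integer sequences
tok = tf.keras.preprocessing.text.Tokenizer(num_words=20)
tok.fit_on_texts(["TensorFlow two is here", "Keras is the main API"])
print(tok.word_index)
print(tok.texts_to_sequences(["Keras is here"]))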
  • 56. TF 1.x Model APIs  tf.core (“reduced” in TF 2)  tf.layers (deprecated in TF 2)  tf.keras (the primary API in TF 2)  tf.estimator.Estimator (implementation in tf.keras)  tf.contrib.learn.Estimator (Deprecated in TF 2)
  • 57. Other TF 2 Features TF Probability (Linear Regression) TF Agents (Reinforcement Learning) TF Text (NLP) TF Federated TF Privacy Tensor2Tensor TF Ranking TFX
  • 58. TF 2 (Keras), DL, and RL The remaining slides briefly discuss TF 2 and:  CNNs (Convolutional Neural Networks)  RNNs (Recurrent Neural Networks)  LSTMs (Long Short Term Memory)  Autoencoders  Variational Autoencoders  Reinforcement Learning  Some Keras-based code blocks  Some useful links
  • 59. Three Hidden Layers (Classifier)
  • 60. Two Hidden Layers (Regression)
  • 62. Transition Between Layers W = weight matrix between adjacent layers x = an input vector of numbers b = a bias vector of numbers F = activation function (sigmoid, tanh, etc) Y = output vector of numbers Y = F(x*W + b)
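The transition formula Y = F(x*W + b) can be checked directly in TF 2; in the sketch below the matrix values are arbitrary and sigmoid stands in for F:

import tensorflow as tf

x = tf.constant([[1.0, 2.0]])             # input vector (1 x 2)
W = tf.constant([[0.5, -1.0, 0.3],
                 [0.2,  0.4, -0.6]])      # weight matrix (2 x 3)
b = tf.constant([0.1, 0.1, 0.1])          # bias vector (3,)
Y = tf.keras.activations.sigmoid(tf.matmul(x, W) + b)  # F(x*W + b)
print(Y.numpy())                          # output vector (1 x 3)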
  • 63. TF 2/Keras and MLPs  import tensorflow as tf  . . .  model = tf.keras.models.Sequential([ tf.keras.layers.Dense(10, input_dim=num_pixels, activation='relu'), tf.keras.layers.Dense(30, activation='relu'), tf.keras.layers.Dense(10, activation='relu'), tf.keras.layers.Dense(num_classes, activation='softmax') ])  model.compile(tf.keras.optimizers.Adam(lr=0.01), loss='categorical_crossentropy', metrics=['accuracy'])
  • 64. TF 2/Keras and MLPs  import tensorflow as tf  . . .  model = tf.keras.models.Sequential()  model.add(tf.keras.layers.Dense(10, input_dim=num_pixels, activation='relu'))  model.add(tf.keras.layers.Dense(30, activation='relu'))  model.add(tf.keras.layers.Dense(10, activation='relu'))  model.add(tf.keras.layers.Dense(num_classes, activation='softmax'))  model.compile(tf.keras.optimizers.Adam(lr=0.01), loss='categorical_crossentropy', metrics=['accuracy'])
  • 66. TF 2/Keras and CNNs  import tensorflow as tf  . . .  model = tf.keras.models.Sequential([  tf.keras.layers.Conv2D(30, (5,5), input_shape=(28,28,1), activation='relu'),  tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),  tf.keras.layers.Flatten(),  tf.keras.layers.Dense(500, activation='relu'),  tf.keras.layers.Dropout(0.5),  tf.keras.layers.Dense(num_classes, activation='softmax')  ])
  • 67. What are RNNs? RNNs = Recurrent Neural Networks RNNs capture/retain events in time FF networks do not learn from the past => RNNs DO learn from the past RNNs contain layers of recurrent neurons
  • 68. Common Types of RNNs 1) Simple RNNs (minimal structure) 2) LSTMs (input, forget, output gates) 3) Gated Recurrent Unit (GRU) NB: RNNs can be used with MNIST data
  • 69. Some Use Cases for RNNs 1) Time Series Data 2) NLP (Natural Language Processing): Processing documents “analyzing” Trump’s tweets 3) RNNs and MNIST dataset
  • 70. Recall: Feed Forward Structure
  • 71. RNN Cell (left) and FF (right)
  • 73. RNN Neuron: same W for all steps
  • 74. RNN and MNIST Dataset Sample initialization values for RNN:  n_steps = 28 # # of time steps  n_inputs = 28 # number of rows  n_neurons = 150 # 150 neurons in one layer  n_outputs = 10 # 10 digit classes (0->9) Remember that: one layer = one memory cell MNIST class count = size of output layer
  • 75. TF 2/Keras and RNNs import tensorflow as tf ... model = tf.keras.models.Sequential() model.add(tf.keras.layers.SimpleRNN(20, input_shape=(input_length, input_dim), return_sequences=False)) ...
  • 76. Problems with RNNs lengthy training time Unrolled RNN will be very deep Propagating gradients through many layers Prone to vanishing gradient Prone to exploding gradient
  • 77. What are LSTMs? Long short-term memory (LSTM): a model for the short-term memory Contains more state than a “regular” RNN which can last for a long period of time can forget and remember things selectively Hochreiter/Schmidhuber proposed in 1997 improved in 2000 by Felix Gers' team set accuracy records in multiple apps
  • 78. Features of LSTMs Used in Google speech recognition + Alpha Go they avoid the vanishing gradient problem Can track 1000s of discrete time steps Used by international competition winners
  • 79. Use Cases for LSTMs Connected handwriting recognition Speech recognition Forecasting Anomaly detection Pattern recognition
  • 80. Advantages of LSTMs well-suited to classify/process/predict time series More powerful: increased memory (=state) Can explicitly add long-term or short-term state even with time lags of unknown size/duration between events
  • 81. The Primary LSTM Gates Input, Forget, and Output gates (FIO) The main operational block of LSTMs an "information filter" a multivariate input gate some inputs are blocked some inputs "go through" "remember" necessary information FIO gates use the sigmoid activation function The cell state uses the tanh activation function
  • 82. LSTM Gates and Cell State  1) the input gate controls: the extent to which a new value flows into the cell state  2) the forget gate controls: the extent to which a value remains in the cell state  3) the output gate controls: The portion of the cell state that’s part of the output  4) the cell state maintains long-term memory  => LSTMs store more memory than basic RNN cells
  • 83. Common LSTM Variables  # LSTM unrolled through 28 time steps (with MNIST):  time_steps = 28  # number of hidden LSTM units:  num_units = 128  # number of rows of 28 pixels:  n_inputs = 28  # number of pixels per row:  input_size = 28  # mnist has 10 classes (0-9):  n_classes = 10  # size of input batch:  batch_size = 128
  • 86. TF 2/Keras and LSTMs  import tensorflow as tf  . . . model = tf.keras.models.Sequential() model.add(tf.keras.layers.RNN(tf.keras.layers.LSTMCell(6, kernel_initializer='ones'), batch_input_shape=(1,1,1), stateful=True)) model.add(tf.keras.layers.Dense(1)) . . . => an LSTMCell must be wrapped in a recurrent layer such as tf.keras.layers.RNN; LSTM versus LSTMCell: https://stackoverflow.com/questions/48187283/whats-the-difference-between-lstm-and-lstmcell => How to define a custom LSTM cell: https://stackoverflow.com/questions/54231440/define-custom-lstm-cell-in-keras
  • 87. TF 2/Keras and LSTM/RNN/GRU  import tensorflow as tf  ...  model = tf.keras.models.Sequential()  model.add(tf.keras.layers.LSTM(64, return_sequences=True, input_shape=(10, 64)))  model.add(tf.keras.layers.SimpleRNN(32, return_sequences=True))  model.add(tf.keras.layers.GRU(10, ...))  ...
  • 88. TF 2/Keras & BiDirectional LSTMs  from tensorflow.keras.models import Sequential  from tensorflow.keras.layers import Bidirectional, LSTM, Dense, Activation  . . .  model = Sequential()  model.add(Bidirectional(LSTM(10, return_sequences=True), input_shape=(5,10)))  model.add(Bidirectional(LSTM(10)))  model.add(Dense(5))  model.add(Activation('softmax'))  model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
  • 91. AEs, VAEs, and GANs  https://jaan.io/what-is-variational-autoencoder-vae-tutorial/  TensorFlow 2 and Autoencoders: https://gist.github.com/AFAgarap/326af55e36be0529c507f1599f88c06e  TensorFlow 2 and Variational Autoencoders:  Convolutional Variational Autoencoder: https://www.tensorflow.org/alpha/tutorials/generative/cvae  Variational Autoencoders and GANS: https://www.youtube.com/watch?v=KFX4apL7a4s
  • 96. Reinforcement Learning  Dopamine on TensorFlow (Research Framework): https://github.com/google/dopamine  TF-Agents library for RL in TensorFlow: https://github.com/tensorflow/agents  Keras and Reinforcement Learning: https://github.com/keras-rl/keras-rl  OpenAI universe, DeepMind Lab, TensorForce
  • 97. About Me: Recent Books  1) Python3 and Machine Learning (2020)  2) Angular 9 and Deep Learning (2020)  3) Angular 8 & Machine Learning (2020)  4) AI/ML/DL: Concepts and Code (2020)  5) Bash Programming on Mac (2020)  6) TensorFlow 2 Pocket Primer (2019)  7) TensorFlow 1.x Pocket Primer (2019)  8) Python for TensorFlow (2019)  9) C Programming Pocket Primer (2019)
  • 98. About Me: Less Recent Books  10) RegEx Pocket Primer (2018)  11) Data Cleaning Pocket Primer (2018)  12) Angular Pocket Primer (2017)  13) Android Pocket Primer (2017)  14) CSS3 Pocket Primer (2016)  15) SVG Pocket Primer (2016)  16) Python Pocket Primer (2015)  17) D3 Pocket Primer (2015)  18) HTML5 Mobile Pocket Primer (2014)
  • 99. About Me: Older Books  19) jQuery, CSS3, and HTML5 (2013)  20) HTML5 Pocket Primer (2013)  21) jQuery Pocket Primer (2013)  22) HTML5 Canvas (2012)  23) Flash on Android (2011)  24) Web 2.0 Fundamentals (2010)  25) MS Silverlight Graphics (2008)  26) Fundamentals of SVG (2003)  27) Java Graphics Library (2002)
  • 100. What I do (Training) ML/DL/RL Instructor at UCSC (Santa Clara): Machine Learning (Python) (09/09/2019) 10 weeks Deep Learning (TF2/Keras) (10/04/2019) 10 weeks Deep Learning (Keras/TF 2) (01/30/2020) 10 weeks DL, NLP, RL (Adv Topics) (TBD/2020) 10 weeks NLP (Transformers, BERT, etc) (TBD/2020) 10 weeks RL (PPO, A3C, SAC, etc) (TBD/2020) 10 weeks UCSC link: https://www.ucsc-extension.edu/certificate-program/offering/deep-learning-and-artificial-intelligence-tensorflow