Getting Started with Keras and
TensorFlow using Python
Presented by Jeff Heaton, Ph.D.
October 17, 2017 – StampedeCON: AI Summit 2017, St. Louis, MO.
Jeff Heaton, Ph.D.
• Lead Data Scientist at RGA
• Adjunct Instructor at Washington University
• Fellow of the Life Management Institute (FLMI)
• Senior Member of IEEE
• Kaggler (Expert level)
• http://www.jeffheaton.com
– (contact info at my website)
T81-558: Applications of Deep Learning
• Course Website: https://sites.wustl.edu/jeffheaton/t81-558/
• Instructor Website: https://sites.wustl.edu/jeffheaton/
• Course Videos: https://www.youtube.com/user/HeatonResearch
Presentation Outline
• Deep Learning Framework Landscape
• TensorFlow as a Compute Graph/Engine
• Keras and TensorFlow
• Keras: Classification
• Keras: Regression
• Keras: Computer Vision and CNN
• Keras: Time Series and RNN
• GPU
Deep Learning Framework Landscape
Deep Learning Mindshare on GitHub
TensorFlow as a Compute Graph/Engine
What are Tensors? Why are they flowing?
• Tensor of Rank 0 (or scalar) – simple variable
• Tensor of Rank 1 (or vector) – array/list
• Tensor of Rank 2 (or matrix) – 2D array
• Tensor of Rank 3 (or cube) – 3D array
• Tensor of Rank 4 (tesseract/hypercube) – 4D array
• Higher ranks (hypercube) – nD array
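A minimal sketch (not from the slides) of what these ranks look like as TensorFlow 1.x tensors; the printed shapes are the only point of the example.
import tensorflow as tf
rank0 = tf.constant(3.0)                       # scalar
rank1 = tf.constant([1.0, 2.0, 3.0])           # vector
rank2 = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # matrix
rank3 = tf.zeros([2, 3, 4])                    # cube / 3D array
rank4 = tf.zeros([2, 3, 4, 5])                 # 4D array
for t in (rank0, rank1, rank2, rank3, rank4):
    print(t.shape)  # (), (3,), (2, 2), (2, 3, 4), (2, 3, 4, 5)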
What is a Computation Graph?
import tensorflow as tf
matrix1 = tf.constant([[3., 3.]])
matrix2 = tf.constant([[2.],[2.]])
product = tf.matmul(matrix1, matrix2)
with tf.Session() as sess:
    result = sess.run([product])
    print(result)
Computation Graph with Variables
import tensorflow as tf
sess = tf.InteractiveSession()
x = tf.Variable([1.0, 2.0])
a = tf.constant([3.0, 3.0])
x.initializer.run()
sub = tf.subtract(x, a)
print(sub.eval())
# ==> [-2. -1.]
sess.run(x.assign([4.0, 6.0]))
print(sub.eval())
# ==> [1. 3.]
Computation Graph for Mandelbrot Set
Mandelbrot Set Review
• Some point c is a complex number with x as the real part, y as
the imaginary part.
• z0 = 0
• z1 = c
• z2 = z1^2 + c
• ...
• zn+1 = zn^2 + c
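A plain-Python sketch (an assumption, not from the slides) of this escape-time iteration; the bailout of 4 matches the divergence test used in the TensorFlow version on the next slide.
def mandelbrot_iterations(c, max_iter=200, bailout=4.0):
    z = 0
    for n in range(max_iter):
        z = z * z + c            # z_{n+1} = z_n^2 + c
        if abs(z) > bailout:
            return n             # diverged after n steps; n colors the point
    return max_iter              # never diverged; c is treated as inside the set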
Mandelbrot Rendering in TensorFlow
xs = tf.constant(Z.astype(np.complex64))
zs = tf.Variable(xs)
ns = tf.Variable(tf.zeros_like(xs, tf.float32))
tf.global_variables_initializer().run()
# Compute the new values of z: z^2 + x
zs_ = zs*zs + xs
# Have we diverged with this new value?
not_diverged = tf.abs(zs_) < 4
step = tf.group(
zs.assign(zs_),
ns.assign_add(tf.cast(not_diverged, tf.float32))
)
for i in range(200): step.run()
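The snippet above assumes an InteractiveSession and a complex grid Z that are not shown on the slide. A minimal setup sketch (assumption, following the classic TensorFlow Mandelbrot tutorial):
import numpy as np
import tensorflow as tf
sess = tf.InteractiveSession()               # the .run() calls above use the default session
Y, X = np.mgrid[-1.3:1.3:0.005, -2:1:0.005]  # sample a rectangle of the complex plane
Z = X + 1j * Y                               # grid of complex points c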
Keras and TensorFlow
Tools Used in this Presentation
• Anaconda Python 3.6
• Google TensorFlow 1.2
• Keras 2.0.6
• Scikit-Learn
• Jupyter Notebooks
Installing These Tools
• Install Anaconda Python 3.6
• Then run the following:
– conda install scipy
– pip install sklearn
– pip install pandas
– pip install pandas-datareader
– pip install matplotlib
– pip install pillow
– pip install requests
– pip install h5py
– pip install tensorflow==1.2.1
– pip install keras==2.0.6
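A quick sanity check (my suggestion, not on the slide) that the pinned versions installed correctly:
import tensorflow as tf
import keras
import sklearn
print(tf.__version__)      # expect 1.2.1
print(keras.__version__)   # expect 2.0.6
print(sklearn.__version__)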
Keras and TensorFlow
(Stack diagram: Keras and TF Learn sit on top of compute-graph engines such as TensorFlow, Theano, and MXNet, which in turn run on CPU or GPU.)
Anatomy of a Neural Network
• Input Layer – Maps inputs to
the neural network.
• Hidden Layer(s) – Helps form
prediction.
• Output Layer – Provides
prediction based on inputs.
• Context Layer – Holds state
between calls to the neural
network for predictions.
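A minimal Keras sketch (an illustration, not from the slides) of how this anatomy maps onto code; the layer sizes are arbitrary.
from keras.models import Sequential
from keras.layers import Dense
model = Sequential()
model.add(Dense(10, input_dim=4, activation='relu'))  # input_dim maps the input layer; Dense(10) is a hidden layer
model.add(Dense(3, activation='softmax'))             # output layer that provides the prediction
# A context layer corresponds to a recurrent layer (e.g. LSTM), covered later in this deck.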
• Deep learning is almost always applied to neural networks.
• A deep neural network has more than 2 hidden layers.
• Deep neural networks have existed as long as traditional neural
networks.
– We just did not have a way to train deep neural networks.
– Hinton et al. introduced a method for training deep belief networks in 2006.
• Neural networks have risen three times and fallen twice in their
history. Currently, they are on the rise.
What is Deep Learning?
The True Believers – Luminaries of Deep Learning
• From left to right:
• Yann LeCun
• Geoffrey Hinton
• Yoshua Bengio
• Andrew Ng
• Deep neural networks often accomplish the same task as other
models, such as:
– Support Vector Machines
– Random Forests
– Gradient Boosted Machines
• For many problems deep learning will give a less accurate
answer than the other models.
• However, for certain problems, deep neural networks perform
considerably better than other models.
Why Use Deep Learning
Why Deep Learning (high y-axis is good)
Supervised or Unsupervised?
Supervised Machine Learning
• Usually classification or regression.
• For an input, the correct output is
provided.
• Examples of supervised learning:
– Propensity to buy
– Credit scoring
Unsupervised Machine Learning
• Usually clustering.
• Inputs analyzed without any
specification of a correct output.
• Examples of unsupervised learning:
– Clustering
– Dimension reduction
Types of Machine Learning Algorithm
• Clustering: Group records together that have similar field values.
For example, customers with common attributes in a propensity
to buy model.
• Regression: Learn to predict a numeric outcome field, based on
all of the other fields present in each record. For example,
predict the amount of coverage a potential customer might buy.
• Classification: Learn to predict a non-numeric outcome field. For
example, learn the type of policy an existing customer has a
potential of buying next.
Problems that Deep Learning is Well Suited to
Keras: Classification
• Classic classification problem.
• 150 rows with 4 predictor columns.
• All 150 rows are labeled as a species of iris.
• Three different iris species.
• Created by Sir Ronald Fisher in 1936.
• Predictors:
– Petal length
– Petal width
– Sepal length
– Sepal width
The Classic Iris Dataset
The Classic Iris Dataset
sepal_length sepal_width petal_length petal_width class
5.1 3.5 1.4 0.2 Iris-setosa
7.0 3.2 4.7 1.4 Iris-versicolor
6.3 3.3 6.0 2.5 Iris-virginica
6.4 3.2 4.5 1.5 Iris-versicolor
5.8 2.7 5.1 1.9 Iris-virginica
4.9 3.0 1.4 0.2 Iris-setosa
… … … … …
Are the Iris Data Predictive?
Keras Classification: Load and Train/Test Split
path = "./data/"
filename = os.path.join(path,"iris.csv")
df = pd.read_csv(filename,na_values=['NA','?'])
species = encode_text_index(df,"species")
x,y = to_xy(df,"species")
# Split into train/test
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.25, random_state=42)
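Note that encode_text_index and to_xy are helper functions from the course notebooks, not part of Keras or scikit-learn. A rough stand-in (assumption) using pandas and Keras utilities:
import numpy as np
from keras.utils import to_categorical
df['species'] = df['species'].astype('category')
species = list(df['species'].cat.categories)               # class names
y = to_categorical(df['species'].cat.codes)                # one-hot targets
x = df.drop('species', axis=1).values.astype(np.float32)   # numeric predictors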
Keras Classification: Build NN and Fit
model = Sequential()
model.add(Dense(10, input_dim=x.shape[1],
                kernel_initializer='normal', activation='relu'))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3,
                        patience=5, verbose=1, mode='auto')
model.fit(x_train, y_train, validation_data=(x_test, y_test),
          callbacks=[monitor], verbose=2, epochs=1000)
Keras Classification: Build NN and Fit
# Evaluate success using accuracy
# raw probabilities to chosen class (highest probability)
pred = model.predict(x_test)
pred = np.argmax(pred,axis=1)
y_compare = np.argmax(y_test,axis=1)
score = metrics.accuracy_score(y_compare, pred)
print("Accuracy score: {}".format(score))
Keras: Regression
• Classic regression problem.
• Target: mpg
• Predictors:
– cylinders
– displacement
– horsepower
– weight
– acceleration
– year
– origin
– name
Predict a Car’s Miles Per Gallon (MPG)
Predict a Car’s Miles Per Gallon (MPG)
mpg  cylinders  displacement  horsepower  weight  acceleration  year  origin  name
18   8          307           130         3504    12            70    1       chevrolet chevelle malibu
15   8          350           165         3693    11.5          70    1       buick skylark 320
18   8          318           150         3436    11            70    1       plymouth satellite
16   8          304           150         3433    12            70    1       amc rebel sst
17   8          302           140         3449    10.5          70    1       ford torino
15   8          429           198         4341    10            70    1       ford galaxie 500
14   8          454           220         4354    9             70    1       chevrolet impala
• Models such as GBM or a neural network can predict MPG to within about +/- 2.7 MPG.
• The result of a regression can also be expressed in equation form (though usually not as accurate as a full model).
Regression Models - MPG
Keras Regression: Load and Train/Test Split
path = "./data/"
filename_read = os.path.join(path,"auto-mpg.csv")
df = pd.read_csv(filename_read,na_values=['NA','?'])
cars = df['name']
df.drop('name',1,inplace=True)
missing_median(df, 'horsepower')
x,y = to_xy(df,"mpg")
# Split into train/test (same split as the classification example)
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.25, random_state=42)
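Here missing_median is another course helper; a rough pandas equivalent (assumption):
df['horsepower'] = df['horsepower'].fillna(df['horsepower'].median())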
Keras Regression: Build and Fit
model = Sequential()
model.add(Dense(10, input_dim=x.shape[1],
                kernel_initializer='normal', activation='relu'))
model.add(Dense(1, kernel_initializer='normal'))
model.compile(loss='mean_squared_error', optimizer='adam')
monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3,
                        patience=5, verbose=1, mode='auto')
model.fit(x_train, y_train, validation_data=(x_test, y_test),
          callbacks=[monitor], verbose=2, epochs=1000)
Keras Regression: Predict and Evaluate
# Predict
pred = model.predict(x_test)
# Measure RMSE error. RMSE is common for regression.
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print("Final score (RMSE): {}".format(score))
• The iris and MPG datasets are nicely formatted.
• Real-world data is a complex mix of XML, JSON, textual formats, binary formats, and content accessed through web services (the variety "V" in "Big Data").
• More complex security data will be presented later in this talk.
Preparing Data for Predictive Modeling is Hard
Keras: Computer Vision and CNN
• We will usually use classification, though regression is still an option.
• The input to the neural network is now 3D (height, width, color).
• Data are not transformed; no z-scores or dummy variables.
• Processing time is usually much longer.
• We now have different layer types: dense layers (just like before), convolution layers, and max-pooling layers.
• Data will no longer arrive as CSV files. TensorFlow provides some utilities for going directly from images to the input of a neural network.
Predicting Images: What is Different?
Sources of Image Data: CIFAR10 and CIFAR100
Sources of Image Data: ImageNet
Sources of Training Data: The MNIST Data Set
Recognizing Digits
Convolutional Neural Networks (CNN)
A LeNet-5 CNN Network (LeCun, 1998)
Dense Layers - Fully connected layers.
Convolution Layers - Used to scan across images.
Max Pooling Layers - Used to downsample images.
Dropout Layer - Used to add regularization.
Loading the Digits
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print("Shape of x_train: {}".format(x_train.shape))
print("Shape of y_train: {}".format(y_train.shape))
print()
print("Shape of x_test: {}".format(x_test.shape))
print("Shape of y_test: {}".format(y_test.shape))
Shape of x_train: (60000, 28, 28)
Shape of y_train: (60000,)
Shape of x_test: (10000, 28, 28)
Shape of y_test: (10000,)
Display a Digit
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
digit = 101 # Change to choose new digit
a = x_train[digit]
plt.imshow(a, cmap='gray', interpolation='nearest')
print("Image (#{}): Which is digit '{}'".
format(digit,y_train[digit]))
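The CNN on the next slide expects input_shape, num_classes, batch_size, and epochs to be defined and the data reshaped and scaled. A preprocessing sketch (assumption, following the standard Keras MNIST CNN example):
import keras
num_classes = 10
batch_size = 128        # values from the standard example; the deck does not state them
epochs = 12
img_rows, img_cols = 28, 28
input_shape = (img_rows, img_cols, 1)  # channels_last (TensorFlow backend)
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1).astype('float32') / 255
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1).astype('float32') / 255
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)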
Build the CNN Network
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adadelta(),
metrics=['accuracy'])
Fit and Evaluate
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=2,
validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss: {}'.format(score[0]))
print('Test accuracy: {}'.format(score[1]))
Test loss: 0.03047790436172363
Test accuracy: 0.9902
Elapsed time: 1:30:40.79 (for CPU, approx 30 min GPU)
Keras: Time Series and RNN
• RNN = Recurrent Neural Network.
• LSTM = Long Short Term Memory.
• Most models will always produce the same output for the same input.
• Previous input does not matter to a non-recurrent neural network.
• To convert today’s temperature from Fahrenheit to Celsius, the value of yesterday’s temperature does not matter.
• To predict tomorrow’s closing price for a stock you need more than just
today’s price.
• To determine if a packet is part of an attack, previous packets must be
considered.
How is an RNN Different?
• The LSTM units in a deep neural network provide short-term memory.
• This short-term memory is governed by three gates:
– Input Gate: When do we remember?
– Output Gate: When do we act?
– Forget Gate: When do we forget?
How do LSTMs Work?
Sample Recurrent Data: Stock Price & Volume
x = [
[[32,1383],[41,2928],[39,8823],[20,1252],[15,1532]],
[[35,8272],[32,1383],[41,2928],[39,8823],[20,1252]],
[[37,2738],[35,8272],[32,1383],[41,2928],[39,8823]],
[[34,2845],[37,2738],[35,8272],[32,1383],[41,2928]],
[[32,2345],[34,2845],[37,2738],[35,8272],[32,1383]],
]
y = [
1,
-1,
0,
-1,
1
]
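One way (a sketch, not from the slides) to build overlapping windows like x above from a raw price/volume series:
import numpy as np
def make_windows(series, window=5):
    # series: array-like of [price, volume] rows; returns overlapping windows of length `window`
    series = np.asarray(series, dtype=np.float32)
    return np.array([series[i:i + window] for i in range(len(series) - window + 1)])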
LSTM Example
max_features = 4 # 0,1,2,3 (total of 4)
x = [
[[0],[1],[1],[0],[0],[0]],
[[0],[0],[0],[2],[2],[0]],
[[0],[0],[0],[0],[3],[3]],
[[0],[2],[2],[0],[0],[0]],
[[0],[0],[3],[3],[0],[0]],
[[0],[0],[0],[0],[1],[1]]
]
x = np.array(x,dtype=np.float32)
y = np.array([1,2,3,2,3,1],dtype=np.int32)
Build an LSTM
model = Sequential()
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2,
input_dim=1))
model.add(Dense(4, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
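The deck does not show the training step for this toy model; one plausible fit call (an assumption), one-hot encoding the labels to match the Dense(4) output:
from keras.utils import to_categorical
model.fit(x, to_categorical(y, num_classes=4), epochs=200, verbose=0)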
Test the LSTM
def runit(model, inp):
    inp = np.array(inp, dtype=np.float32)
    pred = model.predict(inp)
    return np.argmax(pred[0])
print( runit( model, [[[0],[0],[0],[0],[3],[3]]] ))
3
print( runit( model, [[[4],[4],[0],[0],[0],[0]]] ))
4
GPU’s and Deep Learning
Low-Level GPU Frameworks
• CUDA
CUDA is NVIDIA's low-level GPGPU framework.
• OpenCL
An open framework supporting CPUs, GPUs, and other devices; managed by the Khronos Group.
Thank you!
• Jeff Heaton
• http://www.jeffheaton.com
More Related Content

What's hot

Intel Nervana Artificial Intelligence Meetup 11/30/16
Intel Nervana Artificial Intelligence Meetup 11/30/16Intel Nervana Artificial Intelligence Meetup 11/30/16
Intel Nervana Artificial Intelligence Meetup 11/30/16Intel Nervana
 
Distributed Deep Learning on AWS with Apache MXNet
Distributed Deep Learning on AWS with Apache MXNetDistributed Deep Learning on AWS with Apache MXNet
Distributed Deep Learning on AWS with Apache MXNetAmazon Web Services
 
Keras: Deep Learning Library for Python
Keras: Deep Learning Library for PythonKeras: Deep Learning Library for Python
Keras: Deep Learning Library for PythonRafi Khan
 
Introduction to deep learning @ Startup.ML by Andres Rodriguez
Introduction to deep learning @ Startup.ML by Andres RodriguezIntroduction to deep learning @ Startup.ML by Andres Rodriguez
Introduction to deep learning @ Startup.ML by Andres RodriguezIntel Nervana
 
Hussein Mehanna, Engineering Director, ML Core - Facebook at MLconf ATL 2016
Hussein Mehanna, Engineering Director, ML Core - Facebook at MLconf ATL 2016Hussein Mehanna, Engineering Director, ML Core - Facebook at MLconf ATL 2016
Hussein Mehanna, Engineering Director, ML Core - Facebook at MLconf ATL 2016MLconf
 
Deep Learning with Microsoft R Open
Deep Learning with Microsoft R OpenDeep Learning with Microsoft R Open
Deep Learning with Microsoft R OpenPoo Kuan Hoong
 
Erin LeDell, Machine Learning Scientist, H2O.ai at MLconf ATL 2016
Erin LeDell, Machine Learning Scientist, H2O.ai at MLconf ATL 2016Erin LeDell, Machine Learning Scientist, H2O.ai at MLconf ATL 2016
Erin LeDell, Machine Learning Scientist, H2O.ai at MLconf ATL 2016MLconf
 
Avi Pfeffer, Principal Scientist, Charles River Analytics at MLconf SEA - 5/2...
Avi Pfeffer, Principal Scientist, Charles River Analytics at MLconf SEA - 5/2...Avi Pfeffer, Principal Scientist, Charles River Analytics at MLconf SEA - 5/2...
Avi Pfeffer, Principal Scientist, Charles River Analytics at MLconf SEA - 5/2...MLconf
 
AI powered emotion recognition: From Inception to Production - Global AI Conf...
AI powered emotion recognition: From Inception to Production - Global AI Conf...AI powered emotion recognition: From Inception to Production - Global AI Conf...
AI powered emotion recognition: From Inception to Production - Global AI Conf...Apache MXNet
 
Kaz Sato, Evangelist, Google at MLconf ATL 2016
Kaz Sato, Evangelist, Google at MLconf ATL 2016Kaz Sato, Evangelist, Google at MLconf ATL 2016
Kaz Sato, Evangelist, Google at MLconf ATL 2016MLconf
 
Large Scale Deep Learning with TensorFlow
Large Scale Deep Learning with TensorFlow Large Scale Deep Learning with TensorFlow
Large Scale Deep Learning with TensorFlow Jen Aman
 
Rajat Monga, Engineering Director, TensorFlow, Google at MLconf 2016
Rajat Monga, Engineering Director, TensorFlow, Google at MLconf 2016Rajat Monga, Engineering Director, TensorFlow, Google at MLconf 2016
Rajat Monga, Engineering Director, TensorFlow, Google at MLconf 2016MLconf
 
Improving Hardware Efficiency for DNN Applications
Improving Hardware Efficiency for DNN ApplicationsImproving Hardware Efficiency for DNN Applications
Improving Hardware Efficiency for DNN ApplicationsChester Chen
 
Mastering Computer Vision Problems with State-of-the-art Deep Learning
Mastering Computer Vision Problems with State-of-the-art Deep LearningMastering Computer Vision Problems with State-of-the-art Deep Learning
Mastering Computer Vision Problems with State-of-the-art Deep LearningMiguel González-Fierro
 
Deep Learning at Scale
Deep Learning at ScaleDeep Learning at Scale
Deep Learning at ScaleIntel Nervana
 
Apache MXNet ODSC West 2018
Apache MXNet ODSC West 2018Apache MXNet ODSC West 2018
Apache MXNet ODSC West 2018Apache MXNet
 
(BDT311) Deep Learning: Going Beyond Machine Learning
(BDT311) Deep Learning: Going Beyond Machine Learning(BDT311) Deep Learning: Going Beyond Machine Learning
(BDT311) Deep Learning: Going Beyond Machine LearningAmazon Web Services
 

What's hot (20)

Intel Nervana Artificial Intelligence Meetup 11/30/16
Intel Nervana Artificial Intelligence Meetup 11/30/16Intel Nervana Artificial Intelligence Meetup 11/30/16
Intel Nervana Artificial Intelligence Meetup 11/30/16
 
Distributed Deep Learning on AWS with Apache MXNet
Distributed Deep Learning on AWS with Apache MXNetDistributed Deep Learning on AWS with Apache MXNet
Distributed Deep Learning on AWS with Apache MXNet
 
Keras: Deep Learning Library for Python
Keras: Deep Learning Library for PythonKeras: Deep Learning Library for Python
Keras: Deep Learning Library for Python
 
Deep Domain
Deep DomainDeep Domain
Deep Domain
 
Introduction to deep learning @ Startup.ML by Andres Rodriguez
Introduction to deep learning @ Startup.ML by Andres RodriguezIntroduction to deep learning @ Startup.ML by Andres Rodriguez
Introduction to deep learning @ Startup.ML by Andres Rodriguez
 
Hussein Mehanna, Engineering Director, ML Core - Facebook at MLconf ATL 2016
Hussein Mehanna, Engineering Director, ML Core - Facebook at MLconf ATL 2016Hussein Mehanna, Engineering Director, ML Core - Facebook at MLconf ATL 2016
Hussein Mehanna, Engineering Director, ML Core - Facebook at MLconf ATL 2016
 
Deep Learning with Microsoft R Open
Deep Learning with Microsoft R OpenDeep Learning with Microsoft R Open
Deep Learning with Microsoft R Open
 
Erin LeDell, Machine Learning Scientist, H2O.ai at MLconf ATL 2016
Erin LeDell, Machine Learning Scientist, H2O.ai at MLconf ATL 2016Erin LeDell, Machine Learning Scientist, H2O.ai at MLconf ATL 2016
Erin LeDell, Machine Learning Scientist, H2O.ai at MLconf ATL 2016
 
Avi Pfeffer, Principal Scientist, Charles River Analytics at MLconf SEA - 5/2...
Avi Pfeffer, Principal Scientist, Charles River Analytics at MLconf SEA - 5/2...Avi Pfeffer, Principal Scientist, Charles River Analytics at MLconf SEA - 5/2...
Avi Pfeffer, Principal Scientist, Charles River Analytics at MLconf SEA - 5/2...
 
AI powered emotion recognition: From Inception to Production - Global AI Conf...
AI powered emotion recognition: From Inception to Production - Global AI Conf...AI powered emotion recognition: From Inception to Production - Global AI Conf...
AI powered emotion recognition: From Inception to Production - Global AI Conf...
 
Kaz Sato, Evangelist, Google at MLconf ATL 2016
Kaz Sato, Evangelist, Google at MLconf ATL 2016Kaz Sato, Evangelist, Google at MLconf ATL 2016
Kaz Sato, Evangelist, Google at MLconf ATL 2016
 
Large Scale Deep Learning with TensorFlow
Large Scale Deep Learning with TensorFlow Large Scale Deep Learning with TensorFlow
Large Scale Deep Learning with TensorFlow
 
Practical Deep Learning
Practical Deep LearningPractical Deep Learning
Practical Deep Learning
 
Android and Deep Learning
Android and Deep LearningAndroid and Deep Learning
Android and Deep Learning
 
Rajat Monga, Engineering Director, TensorFlow, Google at MLconf 2016
Rajat Monga, Engineering Director, TensorFlow, Google at MLconf 2016Rajat Monga, Engineering Director, TensorFlow, Google at MLconf 2016
Rajat Monga, Engineering Director, TensorFlow, Google at MLconf 2016
 
Improving Hardware Efficiency for DNN Applications
Improving Hardware Efficiency for DNN ApplicationsImproving Hardware Efficiency for DNN Applications
Improving Hardware Efficiency for DNN Applications
 
Mastering Computer Vision Problems with State-of-the-art Deep Learning
Mastering Computer Vision Problems with State-of-the-art Deep LearningMastering Computer Vision Problems with State-of-the-art Deep Learning
Mastering Computer Vision Problems with State-of-the-art Deep Learning
 
Deep Learning at Scale
Deep Learning at ScaleDeep Learning at Scale
Deep Learning at Scale
 
Apache MXNet ODSC West 2018
Apache MXNet ODSC West 2018Apache MXNet ODSC West 2018
Apache MXNet ODSC West 2018
 
(BDT311) Deep Learning: Going Beyond Machine Learning
(BDT311) Deep Learning: Going Beyond Machine Learning(BDT311) Deep Learning: Going Beyond Machine Learning
(BDT311) Deep Learning: Going Beyond Machine Learning
 

Similar to Getting Started with Keras and TensorFlow - StampedeCon AI Summit 2017

Computer Vision for Beginners
Computer Vision for BeginnersComputer Vision for Beginners
Computer Vision for BeginnersSanghamitra Deb
 
rsec2a-2016-jheaton-morning
rsec2a-2016-jheaton-morningrsec2a-2016-jheaton-morning
rsec2a-2016-jheaton-morningJeff Heaton
 
Deep Dive on Deep Learning (June 2018)
Deep Dive on Deep Learning (June 2018)Deep Dive on Deep Learning (June 2018)
Deep Dive on Deep Learning (June 2018)Julien SIMON
 
A Tale of Three Deep Learning Frameworks: TensorFlow, Keras, & PyTorch with B...
A Tale of Three Deep Learning Frameworks: TensorFlow, Keras, & PyTorch with B...A Tale of Three Deep Learning Frameworks: TensorFlow, Keras, & PyTorch with B...
A Tale of Three Deep Learning Frameworks: TensorFlow, Keras, & PyTorch with B...Databricks
 
B4UConference_machine learning_deeplearning
B4UConference_machine learning_deeplearningB4UConference_machine learning_deeplearning
B4UConference_machine learning_deeplearningHoa Le
 
Introduction to Neural Networks in Tensorflow
Introduction to Neural Networks in TensorflowIntroduction to Neural Networks in Tensorflow
Introduction to Neural Networks in TensorflowNicholas McClure
 
EssentialsOfMachineLearning.pdf
EssentialsOfMachineLearning.pdfEssentialsOfMachineLearning.pdf
EssentialsOfMachineLearning.pdfAnkita Tiwari
 
NVIDIA 深度學習教育機構 (DLI): Image segmentation with tensorflow
NVIDIA 深度學習教育機構 (DLI): Image segmentation with tensorflowNVIDIA 深度學習教育機構 (DLI): Image segmentation with tensorflow
NVIDIA 深度學習教育機構 (DLI): Image segmentation with tensorflowNVIDIA Taiwan
 
slide-keras-tf.pptx
slide-keras-tf.pptxslide-keras-tf.pptx
slide-keras-tf.pptxRithikRaj25
 
DeepLearningLecture.pptx
DeepLearningLecture.pptxDeepLearningLecture.pptx
DeepLearningLecture.pptxssuserf07225
 
Anirudh Koul. 30 Golden Rules of Deep Learning Performance
Anirudh Koul. 30 Golden Rules of Deep Learning PerformanceAnirudh Koul. 30 Golden Rules of Deep Learning Performance
Anirudh Koul. 30 Golden Rules of Deep Learning PerformanceLviv Startup Club
 
Building an ML Platform with Ray and MLflow
Building an ML Platform with Ray and MLflowBuilding an ML Platform with Ray and MLflow
Building an ML Platform with Ray and MLflowDatabricks
 
Deep learning with Keras
Deep learning with KerasDeep learning with Keras
Deep learning with KerasQuantUniversity
 
Machine Learning, Deep Learning and Data Analysis Introduction
Machine Learning, Deep Learning and Data Analysis IntroductionMachine Learning, Deep Learning and Data Analysis Introduction
Machine Learning, Deep Learning and Data Analysis IntroductionTe-Yen Liu
 
Language translation with Deep Learning (RNN) with TensorFlow
Language translation with Deep Learning (RNN) with TensorFlowLanguage translation with Deep Learning (RNN) with TensorFlow
Language translation with Deep Learning (RNN) with TensorFlowS N
 
Separating Hype from Reality in Deep Learning with Sameer Farooqui
 Separating Hype from Reality in Deep Learning with Sameer Farooqui Separating Hype from Reality in Deep Learning with Sameer Farooqui
Separating Hype from Reality in Deep Learning with Sameer FarooquiDatabricks
 
Introduction to Convolutional Neural Networks
Introduction to Convolutional Neural NetworksIntroduction to Convolutional Neural Networks
Introduction to Convolutional Neural NetworksHannes Hapke
 

Similar to Getting Started with Keras and TensorFlow - StampedeCon AI Summit 2017 (20)

Computer Vision for Beginners
Computer Vision for BeginnersComputer Vision for Beginners
Computer Vision for Beginners
 
rsec2a-2016-jheaton-morning
rsec2a-2016-jheaton-morningrsec2a-2016-jheaton-morning
rsec2a-2016-jheaton-morning
 
Deep Dive on Deep Learning (June 2018)
Deep Dive on Deep Learning (June 2018)Deep Dive on Deep Learning (June 2018)
Deep Dive on Deep Learning (June 2018)
 
A Tale of Three Deep Learning Frameworks: TensorFlow, Keras, & PyTorch with B...
A Tale of Three Deep Learning Frameworks: TensorFlow, Keras, & PyTorch with B...A Tale of Three Deep Learning Frameworks: TensorFlow, Keras, & PyTorch with B...
A Tale of Three Deep Learning Frameworks: TensorFlow, Keras, & PyTorch with B...
 
B4UConference_machine learning_deeplearning
B4UConference_machine learning_deeplearningB4UConference_machine learning_deeplearning
B4UConference_machine learning_deeplearning
 
Introduction to Neural Networks in Tensorflow
Introduction to Neural Networks in TensorflowIntroduction to Neural Networks in Tensorflow
Introduction to Neural Networks in Tensorflow
 
AI and Deep Learning
AI and Deep Learning AI and Deep Learning
AI and Deep Learning
 
EssentialsOfMachineLearning.pdf
EssentialsOfMachineLearning.pdfEssentialsOfMachineLearning.pdf
EssentialsOfMachineLearning.pdf
 
NVIDIA 深度學習教育機構 (DLI): Image segmentation with tensorflow
NVIDIA 深度學習教育機構 (DLI): Image segmentation with tensorflowNVIDIA 深度學習教育機構 (DLI): Image segmentation with tensorflow
NVIDIA 深度學習教育機構 (DLI): Image segmentation with tensorflow
 
slide-keras-tf.pptx
slide-keras-tf.pptxslide-keras-tf.pptx
slide-keras-tf.pptx
 
DeepLearningLecture.pptx
DeepLearningLecture.pptxDeepLearningLecture.pptx
DeepLearningLecture.pptx
 
Anirudh Koul. 30 Golden Rules of Deep Learning Performance
Anirudh Koul. 30 Golden Rules of Deep Learning PerformanceAnirudh Koul. 30 Golden Rules of Deep Learning Performance
Anirudh Koul. 30 Golden Rules of Deep Learning Performance
 
Building an ML Platform with Ray and MLflow
Building an ML Platform with Ray and MLflowBuilding an ML Platform with Ray and MLflow
Building an ML Platform with Ray and MLflow
 
Deep learning with Keras
Deep learning with KerasDeep learning with Keras
Deep learning with Keras
 
Machine Learning, Deep Learning and Data Analysis Introduction
Machine Learning, Deep Learning and Data Analysis IntroductionMachine Learning, Deep Learning and Data Analysis Introduction
Machine Learning, Deep Learning and Data Analysis Introduction
 
Language translation with Deep Learning (RNN) with TensorFlow
Language translation with Deep Learning (RNN) with TensorFlowLanguage translation with Deep Learning (RNN) with TensorFlow
Language translation with Deep Learning (RNN) with TensorFlow
 
Decision Tree.pptx
Decision Tree.pptxDecision Tree.pptx
Decision Tree.pptx
 
Separating Hype from Reality in Deep Learning with Sameer Farooqui
 Separating Hype from Reality in Deep Learning with Sameer Farooqui Separating Hype from Reality in Deep Learning with Sameer Farooqui
Separating Hype from Reality in Deep Learning with Sameer Farooqui
 
AutoML lectures (ACDL 2019)
AutoML lectures (ACDL 2019)AutoML lectures (ACDL 2019)
AutoML lectures (ACDL 2019)
 
Introduction to Convolutional Neural Networks
Introduction to Convolutional Neural NetworksIntroduction to Convolutional Neural Networks
Introduction to Convolutional Neural Networks
 

More from StampedeCon

Why Should We Trust You-Interpretability of Deep Neural Networks - StampedeCo...
Why Should We Trust You-Interpretability of Deep Neural Networks - StampedeCo...Why Should We Trust You-Interpretability of Deep Neural Networks - StampedeCo...
Why Should We Trust You-Interpretability of Deep Neural Networks - StampedeCo...StampedeCon
 
The Search for a New Visual Search Beyond Language - StampedeCon AI Summit 2017
The Search for a New Visual Search Beyond Language - StampedeCon AI Summit 2017The Search for a New Visual Search Beyond Language - StampedeCon AI Summit 2017
The Search for a New Visual Search Beyond Language - StampedeCon AI Summit 2017StampedeCon
 
Predicting Outcomes When Your Outcomes are Graphs - StampedeCon AI Summit 2017
Predicting Outcomes When Your Outcomes are Graphs - StampedeCon AI Summit 2017Predicting Outcomes When Your Outcomes are Graphs - StampedeCon AI Summit 2017
Predicting Outcomes When Your Outcomes are Graphs - StampedeCon AI Summit 2017StampedeCon
 
Novel Semi-supervised Probabilistic ML Approach to SNP Variant Calling - Stam...
Novel Semi-supervised Probabilistic ML Approach to SNP Variant Calling - Stam...Novel Semi-supervised Probabilistic ML Approach to SNP Variant Calling - Stam...
Novel Semi-supervised Probabilistic ML Approach to SNP Variant Calling - Stam...StampedeCon
 
How to Talk about AI to Non-analaysts - Stampedecon AI Summit 2017
How to Talk about AI to Non-analaysts - Stampedecon AI Summit 2017How to Talk about AI to Non-analaysts - Stampedecon AI Summit 2017
How to Talk about AI to Non-analaysts - Stampedecon AI Summit 2017StampedeCon
 
Foundations of Machine Learning - StampedeCon AI Summit 2017
Foundations of Machine Learning - StampedeCon AI Summit 2017Foundations of Machine Learning - StampedeCon AI Summit 2017
Foundations of Machine Learning - StampedeCon AI Summit 2017StampedeCon
 
Don't Start from Scratch: Transfer Learning for Novel Computer Vision Problem...
Don't Start from Scratch: Transfer Learning for Novel Computer Vision Problem...Don't Start from Scratch: Transfer Learning for Novel Computer Vision Problem...
Don't Start from Scratch: Transfer Learning for Novel Computer Vision Problem...StampedeCon
 
Bringing the Whole Elephant Into View Can Cognitive Systems Bring Real Soluti...
Bringing the Whole Elephant Into View Can Cognitive Systems Bring Real Soluti...Bringing the Whole Elephant Into View Can Cognitive Systems Bring Real Soluti...
Bringing the Whole Elephant Into View Can Cognitive Systems Bring Real Soluti...StampedeCon
 
Automated AI The Next Frontier in Analytics - StampedeCon AI Summit 2017
Automated AI The Next Frontier in Analytics - StampedeCon AI Summit 2017Automated AI The Next Frontier in Analytics - StampedeCon AI Summit 2017
Automated AI The Next Frontier in Analytics - StampedeCon AI Summit 2017StampedeCon
 
AI in the Enterprise: Past, Present & Future - StampedeCon AI Summit 2017
AI in the Enterprise: Past,  Present &  Future - StampedeCon AI Summit 2017AI in the Enterprise: Past,  Present &  Future - StampedeCon AI Summit 2017
AI in the Enterprise: Past, Present & Future - StampedeCon AI Summit 2017StampedeCon
 
A Different Data Science Approach - StampedeCon AI Summit 2017
A Different Data Science Approach - StampedeCon AI Summit 2017A Different Data Science Approach - StampedeCon AI Summit 2017
A Different Data Science Approach - StampedeCon AI Summit 2017StampedeCon
 
Graph in Customer 360 - StampedeCon Big Data Conference 2017
Graph in Customer 360 - StampedeCon Big Data Conference 2017Graph in Customer 360 - StampedeCon Big Data Conference 2017
Graph in Customer 360 - StampedeCon Big Data Conference 2017StampedeCon
 
End-to-end Big Data Projects with Python - StampedeCon Big Data Conference 2017
End-to-end Big Data Projects with Python - StampedeCon Big Data Conference 2017End-to-end Big Data Projects with Python - StampedeCon Big Data Conference 2017
End-to-end Big Data Projects with Python - StampedeCon Big Data Conference 2017StampedeCon
 
Doing Big Data Using Amazon's Analogs - StampedeCon Big Data Conference 2017
Doing Big Data Using Amazon's Analogs - StampedeCon Big Data Conference 2017Doing Big Data Using Amazon's Analogs - StampedeCon Big Data Conference 2017
Doing Big Data Using Amazon's Analogs - StampedeCon Big Data Conference 2017StampedeCon
 
Enabling New Business Capabilities with Cloud-based Streaming Data Architectu...
Enabling New Business Capabilities with Cloud-based Streaming Data Architectu...Enabling New Business Capabilities with Cloud-based Streaming Data Architectu...
Enabling New Business Capabilities with Cloud-based Streaming Data Architectu...StampedeCon
 
Big Data Meets IoT: Lessons From the Cloud on Polling, Collecting, and Analyz...
Big Data Meets IoT: Lessons From the Cloud on Polling, Collecting, and Analyz...Big Data Meets IoT: Lessons From the Cloud on Polling, Collecting, and Analyz...
Big Data Meets IoT: Lessons From the Cloud on Polling, Collecting, and Analyz...StampedeCon
 
Innovation in the Data Warehouse - StampedeCon 2016
Innovation in the Data Warehouse - StampedeCon 2016Innovation in the Data Warehouse - StampedeCon 2016
Innovation in the Data Warehouse - StampedeCon 2016StampedeCon
 
Creating a Data Driven Organization - StampedeCon 2016
Creating a Data Driven Organization - StampedeCon 2016Creating a Data Driven Organization - StampedeCon 2016
Creating a Data Driven Organization - StampedeCon 2016StampedeCon
 
Using The Internet of Things for Population Health Management - StampedeCon 2016
Using The Internet of Things for Population Health Management - StampedeCon 2016Using The Internet of Things for Population Health Management - StampedeCon 2016
Using The Internet of Things for Population Health Management - StampedeCon 2016StampedeCon
 
Turn Data Into Actionable Insights - StampedeCon 2016
Turn Data Into Actionable Insights - StampedeCon 2016Turn Data Into Actionable Insights - StampedeCon 2016
Turn Data Into Actionable Insights - StampedeCon 2016StampedeCon
 

More from StampedeCon (20)

Why Should We Trust You-Interpretability of Deep Neural Networks - StampedeCo...
Why Should We Trust You-Interpretability of Deep Neural Networks - StampedeCo...Why Should We Trust You-Interpretability of Deep Neural Networks - StampedeCo...
Why Should We Trust You-Interpretability of Deep Neural Networks - StampedeCo...
 
The Search for a New Visual Search Beyond Language - StampedeCon AI Summit 2017
The Search for a New Visual Search Beyond Language - StampedeCon AI Summit 2017The Search for a New Visual Search Beyond Language - StampedeCon AI Summit 2017
The Search for a New Visual Search Beyond Language - StampedeCon AI Summit 2017
 
Predicting Outcomes When Your Outcomes are Graphs - StampedeCon AI Summit 2017
Predicting Outcomes When Your Outcomes are Graphs - StampedeCon AI Summit 2017Predicting Outcomes When Your Outcomes are Graphs - StampedeCon AI Summit 2017
Predicting Outcomes When Your Outcomes are Graphs - StampedeCon AI Summit 2017
 
Novel Semi-supervised Probabilistic ML Approach to SNP Variant Calling - Stam...
Novel Semi-supervised Probabilistic ML Approach to SNP Variant Calling - Stam...Novel Semi-supervised Probabilistic ML Approach to SNP Variant Calling - Stam...
Novel Semi-supervised Probabilistic ML Approach to SNP Variant Calling - Stam...
 
How to Talk about AI to Non-analaysts - Stampedecon AI Summit 2017
How to Talk about AI to Non-analaysts - Stampedecon AI Summit 2017How to Talk about AI to Non-analaysts - Stampedecon AI Summit 2017
How to Talk about AI to Non-analaysts - Stampedecon AI Summit 2017
 
Foundations of Machine Learning - StampedeCon AI Summit 2017
Foundations of Machine Learning - StampedeCon AI Summit 2017Foundations of Machine Learning - StampedeCon AI Summit 2017
Foundations of Machine Learning - StampedeCon AI Summit 2017
 
Don't Start from Scratch: Transfer Learning for Novel Computer Vision Problem...
Don't Start from Scratch: Transfer Learning for Novel Computer Vision Problem...Don't Start from Scratch: Transfer Learning for Novel Computer Vision Problem...
Don't Start from Scratch: Transfer Learning for Novel Computer Vision Problem...
 
Bringing the Whole Elephant Into View Can Cognitive Systems Bring Real Soluti...
Bringing the Whole Elephant Into View Can Cognitive Systems Bring Real Soluti...Bringing the Whole Elephant Into View Can Cognitive Systems Bring Real Soluti...
Bringing the Whole Elephant Into View Can Cognitive Systems Bring Real Soluti...
 
Automated AI The Next Frontier in Analytics - StampedeCon AI Summit 2017
Automated AI The Next Frontier in Analytics - StampedeCon AI Summit 2017Automated AI The Next Frontier in Analytics - StampedeCon AI Summit 2017
Automated AI The Next Frontier in Analytics - StampedeCon AI Summit 2017
 
AI in the Enterprise: Past, Present & Future - StampedeCon AI Summit 2017
AI in the Enterprise: Past,  Present &  Future - StampedeCon AI Summit 2017AI in the Enterprise: Past,  Present &  Future - StampedeCon AI Summit 2017
AI in the Enterprise: Past, Present & Future - StampedeCon AI Summit 2017
 
A Different Data Science Approach - StampedeCon AI Summit 2017
A Different Data Science Approach - StampedeCon AI Summit 2017A Different Data Science Approach - StampedeCon AI Summit 2017
A Different Data Science Approach - StampedeCon AI Summit 2017
 
Graph in Customer 360 - StampedeCon Big Data Conference 2017
Graph in Customer 360 - StampedeCon Big Data Conference 2017Graph in Customer 360 - StampedeCon Big Data Conference 2017
Graph in Customer 360 - StampedeCon Big Data Conference 2017
 
End-to-end Big Data Projects with Python - StampedeCon Big Data Conference 2017
End-to-end Big Data Projects with Python - StampedeCon Big Data Conference 2017End-to-end Big Data Projects with Python - StampedeCon Big Data Conference 2017
End-to-end Big Data Projects with Python - StampedeCon Big Data Conference 2017
 
Doing Big Data Using Amazon's Analogs - StampedeCon Big Data Conference 2017
Doing Big Data Using Amazon's Analogs - StampedeCon Big Data Conference 2017Doing Big Data Using Amazon's Analogs - StampedeCon Big Data Conference 2017
Doing Big Data Using Amazon's Analogs - StampedeCon Big Data Conference 2017
 
Enabling New Business Capabilities with Cloud-based Streaming Data Architectu...
Enabling New Business Capabilities with Cloud-based Streaming Data Architectu...Enabling New Business Capabilities with Cloud-based Streaming Data Architectu...
Enabling New Business Capabilities with Cloud-based Streaming Data Architectu...
 
Big Data Meets IoT: Lessons From the Cloud on Polling, Collecting, and Analyz...
Big Data Meets IoT: Lessons From the Cloud on Polling, Collecting, and Analyz...Big Data Meets IoT: Lessons From the Cloud on Polling, Collecting, and Analyz...
Big Data Meets IoT: Lessons From the Cloud on Polling, Collecting, and Analyz...
 
Innovation in the Data Warehouse - StampedeCon 2016
Innovation in the Data Warehouse - StampedeCon 2016Innovation in the Data Warehouse - StampedeCon 2016
Innovation in the Data Warehouse - StampedeCon 2016
 
Creating a Data Driven Organization - StampedeCon 2016
Creating a Data Driven Organization - StampedeCon 2016Creating a Data Driven Organization - StampedeCon 2016
Creating a Data Driven Organization - StampedeCon 2016
 
Using The Internet of Things for Population Health Management - StampedeCon 2016
Using The Internet of Things for Population Health Management - StampedeCon 2016Using The Internet of Things for Population Health Management - StampedeCon 2016
Using The Internet of Things for Population Health Management - StampedeCon 2016
 
Turn Data Into Actionable Insights - StampedeCon 2016
Turn Data Into Actionable Insights - StampedeCon 2016Turn Data Into Actionable Insights - StampedeCon 2016
Turn Data Into Actionable Insights - StampedeCon 2016
 

Recently uploaded

9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service
9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service
9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort servicejennyeacort
 
ASML's Taxonomy Adventure by Daniel Canter
ASML's Taxonomy Adventure by Daniel CanterASML's Taxonomy Adventure by Daniel Canter
ASML's Taxonomy Adventure by Daniel Cantervoginip
 
Generative AI for Social Good at Open Data Science East 2024
Generative AI for Social Good at Open Data Science East 2024Generative AI for Social Good at Open Data Science East 2024
Generative AI for Social Good at Open Data Science East 2024Colleen Farrelly
 
Industrialised data - the key to AI success.pdf
Industrialised data - the key to AI success.pdfIndustrialised data - the key to AI success.pdf
Industrialised data - the key to AI success.pdfLars Albertsson
 
办理(UWIC毕业证书)英国卡迪夫城市大学毕业证成绩单原版一比一
办理(UWIC毕业证书)英国卡迪夫城市大学毕业证成绩单原版一比一办理(UWIC毕业证书)英国卡迪夫城市大学毕业证成绩单原版一比一
办理(UWIC毕业证书)英国卡迪夫城市大学毕业证成绩单原版一比一F La
 
Amazon TQM (2) Amazon TQM (2)Amazon TQM (2).pptx
Amazon TQM (2) Amazon TQM (2)Amazon TQM (2).pptxAmazon TQM (2) Amazon TQM (2)Amazon TQM (2).pptx
Amazon TQM (2) Amazon TQM (2)Amazon TQM (2).pptxAbdelrhman abooda
 
Dubai Call Girls Wifey O52&786472 Call Girls Dubai
Dubai Call Girls Wifey O52&786472 Call Girls DubaiDubai Call Girls Wifey O52&786472 Call Girls Dubai
Dubai Call Girls Wifey O52&786472 Call Girls Dubaihf8803863
 
Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...
Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...
Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...Jack DiGiovanna
 
GA4 Without Cookies [Measure Camp AMS]
GA4 Without Cookies [Measure Camp AMS]GA4 Without Cookies [Measure Camp AMS]
GA4 Without Cookies [Measure Camp AMS]📊 Markus Baersch
 
DBA Basics: Getting Started with Performance Tuning.pdf
DBA Basics: Getting Started with Performance Tuning.pdfDBA Basics: Getting Started with Performance Tuning.pdf
DBA Basics: Getting Started with Performance Tuning.pdfJohn Sterrett
 
B2 Creative Industry Response Evaluation.docx
B2 Creative Industry Response Evaluation.docxB2 Creative Industry Response Evaluation.docx
B2 Creative Industry Response Evaluation.docxStephen266013
 
NLP Project PPT: Flipkart Product Reviews through NLP Data Science.pptx
NLP Project PPT: Flipkart Product Reviews through NLP Data Science.pptxNLP Project PPT: Flipkart Product Reviews through NLP Data Science.pptx
NLP Project PPT: Flipkart Product Reviews through NLP Data Science.pptxBoston Institute of Analytics
 
EMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM TRACKING WITH GOOGLE ANALYTICS.pptx
EMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM  TRACKING WITH GOOGLE ANALYTICS.pptxEMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM  TRACKING WITH GOOGLE ANALYTICS.pptx
EMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM TRACKING WITH GOOGLE ANALYTICS.pptxthyngster
 
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一fhwihughh
 
科罗拉多大学波尔得分校毕业证学位证成绩单-可办理
科罗拉多大学波尔得分校毕业证学位证成绩单-可办理科罗拉多大学波尔得分校毕业证学位证成绩单-可办理
科罗拉多大学波尔得分校毕业证学位证成绩单-可办理e4aez8ss
 
INTERNSHIP ON PURBASHA COMPOSITE TEX LTD
INTERNSHIP ON PURBASHA COMPOSITE TEX LTDINTERNSHIP ON PURBASHA COMPOSITE TEX LTD
INTERNSHIP ON PURBASHA COMPOSITE TEX LTDRafezzaman
 
PKS-TGC-1084-630 - Stage 1 Proposal.pptx
PKS-TGC-1084-630 - Stage 1 Proposal.pptxPKS-TGC-1084-630 - Stage 1 Proposal.pptx
PKS-TGC-1084-630 - Stage 1 Proposal.pptxPramod Kumar Srivastava
 
From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...Florian Roscheck
 
办美国阿肯色大学小石城分校毕业证成绩单pdf电子版制作修改#真实留信入库#永久存档#真实可查#diploma#degree
办美国阿肯色大学小石城分校毕业证成绩单pdf电子版制作修改#真实留信入库#永久存档#真实可查#diploma#degree办美国阿肯色大学小石城分校毕业证成绩单pdf电子版制作修改#真实留信入库#永久存档#真实可查#diploma#degree
办美国阿肯色大学小石城分校毕业证成绩单pdf电子版制作修改#真实留信入库#永久存档#真实可查#diploma#degreeyuu sss
 
NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...
NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...
NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...Boston Institute of Analytics
 

Recently uploaded (20)

9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service
9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service
9711147426✨Call In girls Gurgaon Sector 31. SCO 25 escort service
 
ASML's Taxonomy Adventure by Daniel Canter
ASML's Taxonomy Adventure by Daniel CanterASML's Taxonomy Adventure by Daniel Canter
ASML's Taxonomy Adventure by Daniel Canter
 
Generative AI for Social Good at Open Data Science East 2024
Generative AI for Social Good at Open Data Science East 2024Generative AI for Social Good at Open Data Science East 2024
Generative AI for Social Good at Open Data Science East 2024
 
Industrialised data - the key to AI success.pdf
Industrialised data - the key to AI success.pdfIndustrialised data - the key to AI success.pdf
Industrialised data - the key to AI success.pdf
 
办理(UWIC毕业证书)英国卡迪夫城市大学毕业证成绩单原版一比一
办理(UWIC毕业证书)英国卡迪夫城市大学毕业证成绩单原版一比一办理(UWIC毕业证书)英国卡迪夫城市大学毕业证成绩单原版一比一
办理(UWIC毕业证书)英国卡迪夫城市大学毕业证成绩单原版一比一
 
Amazon TQM (2) Amazon TQM (2)Amazon TQM (2).pptx
Amazon TQM (2) Amazon TQM (2)Amazon TQM (2).pptxAmazon TQM (2) Amazon TQM (2)Amazon TQM (2).pptx
Amazon TQM (2) Amazon TQM (2)Amazon TQM (2).pptx
 
Dubai Call Girls Wifey O52&786472 Call Girls Dubai
Dubai Call Girls Wifey O52&786472 Call Girls DubaiDubai Call Girls Wifey O52&786472 Call Girls Dubai
Dubai Call Girls Wifey O52&786472 Call Girls Dubai
 
Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...
Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...
Building on a FAIRly Strong Foundation to Connect Academic Research to Transl...
 
GA4 Without Cookies [Measure Camp AMS]
GA4 Without Cookies [Measure Camp AMS]GA4 Without Cookies [Measure Camp AMS]
GA4 Without Cookies [Measure Camp AMS]
 
DBA Basics: Getting Started with Performance Tuning.pdf
DBA Basics: Getting Started with Performance Tuning.pdfDBA Basics: Getting Started with Performance Tuning.pdf
DBA Basics: Getting Started with Performance Tuning.pdf
 
B2 Creative Industry Response Evaluation.docx
B2 Creative Industry Response Evaluation.docxB2 Creative Industry Response Evaluation.docx
B2 Creative Industry Response Evaluation.docx
 
NLP Project PPT: Flipkart Product Reviews through NLP Data Science.pptx
NLP Project PPT: Flipkart Product Reviews through NLP Data Science.pptxNLP Project PPT: Flipkart Product Reviews through NLP Data Science.pptx
NLP Project PPT: Flipkart Product Reviews through NLP Data Science.pptx
 
EMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM TRACKING WITH GOOGLE ANALYTICS.pptx
EMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM  TRACKING WITH GOOGLE ANALYTICS.pptxEMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM  TRACKING WITH GOOGLE ANALYTICS.pptx
EMERCE - 2024 - AMSTERDAM - CROSS-PLATFORM TRACKING WITH GOOGLE ANALYTICS.pptx
 
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
办理学位证纽约大学毕业证(NYU毕业证书)原版一比一
 
科罗拉多大学波尔得分校毕业证学位证成绩单-可办理
科罗拉多大学波尔得分校毕业证学位证成绩单-可办理科罗拉多大学波尔得分校毕业证学位证成绩单-可办理
科罗拉多大学波尔得分校毕业证学位证成绩单-可办理
 
INTERNSHIP ON PURBASHA COMPOSITE TEX LTD
INTERNSHIP ON PURBASHA COMPOSITE TEX LTDINTERNSHIP ON PURBASHA COMPOSITE TEX LTD
INTERNSHIP ON PURBASHA COMPOSITE TEX LTD
 
PKS-TGC-1084-630 - Stage 1 Proposal.pptx
PKS-TGC-1084-630 - Stage 1 Proposal.pptxPKS-TGC-1084-630 - Stage 1 Proposal.pptx
PKS-TGC-1084-630 - Stage 1 Proposal.pptx
 
From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...From idea to production in a day – Leveraging Azure ML and Streamlit to build...
From idea to production in a day – Leveraging Azure ML and Streamlit to build...
 
办美国阿肯色大学小石城分校毕业证成绩单pdf电子版制作修改#真实留信入库#永久存档#真实可查#diploma#degree
办美国阿肯色大学小石城分校毕业证成绩单pdf电子版制作修改#真实留信入库#永久存档#真实可查#diploma#degree办美国阿肯色大学小石城分校毕业证成绩单pdf电子版制作修改#真实留信入库#永久存档#真实可查#diploma#degree
办美国阿肯色大学小石城分校毕业证成绩单pdf电子版制作修改#真实留信入库#永久存档#真实可查#diploma#degree
 
NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...
NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...
NLP Data Science Project Presentation:Predicting Heart Disease with NLP Data ...
 

Getting Started with Keras and TensorFlow - StampedeCon AI Summit 2017

  • 1. Getting Started with Keras and TensorFlow using Python Presented by Jeff Heaton, Ph.D. October 17, 2017 – StampedeCON: AI Summit 2017, St. Louis, MO.
  • 2. Jeff Heaton, Ph.D. • Lead Data Scientist at RGA • Adjunct Instructor at Washington University • Fellow of the Life Management Institute (FLMI) • Senior Member of IEEE • Kaggler (Expert level) • http://www.jeffheaton.com – (contact info at my website)
  • 3. T81-558: Applications of Deep Learning • Course Website: https://sites.wustl.edu/jeffheaton/t81-558/ • Instructor Website: https://sites.wustl.edu/jeffheaton/ • Course Videos: https://www.youtube.com/user/HeatonResearch
  • 4. Presentation Outline • Deep Learning Framework Landscape • TensorFlow as a Compute Graph/Engine • Keras and TensorFlow • Keras: Classification • Keras: Regression • Keras: Computer Vision and CNN • Keras: Time Series and RNN • GPU
  • 5. Deep Learning Framework Landscape (discontinued)
  • 7. What are Tensors? Why are they flowing? • Tensor of Rank 0 (or scaler) – simple variable • Tensor of Rank 1 (or vector) – array/list • Tensor of Rank 2 (or matrix) – 2D array • Tensor of Rank 3 (or cube) – 3D array • Tensor of Rank 4 (tesseract/hypercube) – 4D array • Higher ranks (hypercube) – nD array
  • 8. TensorFlow as a Compute Graph/Engine
  • 9. What are Tensors? Why are they flowing? • Tensor of Rank 0 (or scaler) – simple variable • Tensor of Rank 1 (or vector) – array/list • Tensor of Rank 2 (or matrix) – 2D array • Tensor of Rank 3 (or cube) – 3D array • Tensor of Rank 4 (tesseract/hypercube) – 4D array • Higher ranks (hypercube) – nD array
  • 10. What is a Computation Graph? import tensorflow as tf matrix1 = tf.constant([[3., 3.]]) matrix2 = tf.constant([[2.],[2.]]) product = tf.matmul(matrix1, matrix2) with tf.Session() as sess: result = sess.run([product]) print(result)
  • 11. Computation Graph with Variables import tensorflow as tf sess = tf.InteractiveSession() x = tf.Variable([1.0, 2.0]) a = tf.constant([3.0, 3.0]) x.initializer.run() sub = tf.subtract(x, a) print(sub.eval()) # ==> [-2. -1.] sess.run(x.assign([4.0, 6.0])) print(sub.eval()) # ==> [1. 3.]
  • 12. Computation Graph for Mandelbrot Set
  • 13. Mandelbrot Set Review • Some point c is a complex number with x as the real part, y as the imaginary part. • z0 = 0 • z1 = c • z2 = z1 2 + c • ... • zn+1 = zn 2 + c
  • 14. Mandelbrot Rendering in TensorFlow xs = tf.constant(Z.astype(np.complex64)) zs = tf.Variable(xs) ns = tf.Variable(tf.zeros_like(xs, tf.float32)) tf.global_variables_initializer().run() # Compute the new values of z: z^2 + x zs_ = zs*zs + xs # Have we diverged with this new value? not_diverged = tf.abs(zs_) < 4 step = tf.group( zs.assign(zs_), ns.assign_add(tf.cast(not_diverged, tf.float32)) ) for i in range(200): step.run()
  • 16. Tools Used in this Presentation • Anaconda Python 3.6 • Google TensorFlow 1.2 • Keras 2.0.6 • Scikit-Learn • Jupyter Notebooks
  • 17. Installing These Tools • Install Anaconda Python 3.6 • Then run the following: – conda install scipy – pip install sklearn – pip install pandas – pip install pandas-datareader – pip install matplotlib – pip install pillow – pip install requests – pip install h5py – pip install tensorflow==1.2.1 – pip install keras==2.0.6
  • 18. Keras and TensorFlow Keras TF Learn TensorFlow Compute Graphs CPU GPU Theano Compute Graphs MXNet Compute Graphs
  • 19. Anatomy of a Neural Network • Input Layer – Maps inputs to the neural network. • Hidden Layer(s) – Helps form prediction. • Output Layer – Provides prediction based on inputs. • Context Layer – Holds state between calls to the neural network for predictions.
  • 20. • Deep learning is almost always applied to neural networks. • A deep neural network has more than 2 hidden layers. • Deep neural networks have existed as long as traditional neural networks. – We just did not have a way to train deep neural networks. – Hinton (et al.) introduced a means to train deep belief neural networks in 2006. • Neural networks have risen three times and fallen twice in their history. Currently, they are on the rise. What is Deep Learning
  • 21. The True Believers – Luminaries of Deep Learning • From left to right: • Yann LeCun • Geoffrey Hinton • Yoshua Bengio • Andrew Ng
  • 22. • Deep neural networks often accomplish the same task as other models, such as: – Support Vector Machines – Random Forests – Gradient Boosted Machines • For many problems deep learning will give a less accurate answer than the other models. • However, for certain problems, deep neural networks perform considerably better than other models. Why Use Deep Learning
  • 23. Why Deep Learning (high y-axis is good)
  • 24. Supervised or Unsupervised? Supervised Machine Learning • Usually classification or regression. • For an input, the correct output is provided. • Examples of supervised learning: – Propensity to buy – Credit scoring Unsupervised Machine Learning • Usually clustering. • Inputs analyzed without any specification of a correct output. • Examples of unsupervised learning: – Clustering – Dimension reduction
  • 25. Types of Machine Learning Algorithm • Clustering: Group records together that have similar field values. For example, customers with common attributes in a propensity to buy model. • Regression: Learn to predict a numeric outcome field, based on all of the other fields present in each record. For example, predict the amount of coverage a potential customer might buy. • Classification: Learn to predict a non-numeric outcome field. For example, learn the type of policy an existing customer has a potential of buying next.
  • 26. Problems that Deep Learning is Well Suited to
  • 28. • Classic classification problem. • 150 rows with 4 predictor columns. • All 150 rows are labeled as a species of iris. • Three different iris species. • Created by Sir Ronald Fisher in 1936. • Predictors: – Petal length – Petal width – Sepal length – Sepal width The Classic Iris Dataset
  • 29. The Classic Iris Dataset sepal_length sepal_width petal_length petal_width class 5.1 3.5 1.4 0.2 Iris-setosa 7.0 3.2 4.7 1.4 Iris-versico 6.3 3.3 6.0 2.5 Iris-virginica 6.4 3.2 4.5 1.5 Iris-versicolor 5.8 2.7 5.1 1.9 Iris-virginica 4.9 3.0 1.4 0.2 Iris-setosa … … … … …
  • 30. Are the Iris Data Predictive?
  • 31. Keras Classification: Load and Train/Test Split path = "./data/" filename = os.path.join(path,"iris.csv") df = pd.read_csv(filename,na_values=['NA','?']) species = encode_text_index(df,"species") x,y = to_xy(df,"species") # Split into train/test x_train, x_test, y_train, y_test = train_test_split( x, y, test_size=0.25, random_state=42)
  • 32. Keras Classification: Build NN and Fit model = Sequential() model.add(Dense(10, input_dim=x.shape[1], kernel_initializer='normal', activation='relu')) model.add(Dense(1, kernel_initializer='normal')) model.add(Dense(y.shape[1],activation='softmax')) model.compile(loss='categorical_crossentropy', optimizer='adam') monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5, verbose=1, mode='auto') model.fit(x,y,validation_data=(x_test,y_test),callbacks=[monitor],ve rbose=2,epochs=1000)
  • 33. Keras Classification: Build NN and Fit
  # Evaluate success using accuracy
  # Convert raw probabilities to the chosen class (highest probability)
  pred = model.predict(x_test)
  pred = np.argmax(pred, axis=1)
  y_compare = np.argmax(y_test, axis=1)
  score = metrics.accuracy_score(y_compare, pred)
  print("Accuracy score: {}".format(score))
  • 35. Predict a Car’s Miles Per Gallon (MPG)
  • Classic regression problem.
  • Target: mpg
  • Predictors:
  – cylinders
  – displacement
  – horsepower
  – weight
  – acceleration
  – year
  – origin
  – name
  • 36. Predict a Car’s Miles Per Gallon (MPG)
  mpg  cylinders  displacement  horsepower  weight  acceleration  year  origin  name
  18   8          307           130         3504    12            70    1       chevrolet chevelle malibu
  15   8          350           165         3693    11.5          70    1       buick skylark 320
  18   8          318           150         3436    11            70    1       plymouth satellite
  16   8          304           150         3433    12            70    1       amc rebel sst
  17   8          302           140         3449    10.5          70    1       ford torino
  15   8          429           198         4341    10            70    1       ford galaxie 500
  14   8          454           220         4354    9             70    1       chevrolet impala
  • 37. Regression Models – MPG
  • Models such as a GBM or a neural network can predict MPG to within roughly ±2.7 MPG.
  • The result of a regression can also be expressed in equation form (though an equation is usually less accurate than a trained model).
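  As an illustration of the "equation form" idea (the equation itself appeared as an image on the original slide and is not reproduced here), a plain linear regression fit with scikit-learn produces coefficients that can be read off as an equation. This sketch assumes the same auto-mpg.csv file used on the next slide:

    import os
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    df = pd.read_csv(os.path.join("./data/", "auto-mpg.csv"), na_values=['NA', '?'])
    df.drop('name', axis=1, inplace=True)
    df['horsepower'] = df['horsepower'].fillna(df['horsepower'].median())

    X = df.drop('mpg', axis=1)
    y = df['mpg']

    reg = LinearRegression().fit(X, y)
    terms = " + ".join("{:.3f}*{}".format(c, n) for c, n in zip(reg.coef_, X.columns))
    print("mpg = {:.3f} + {}".format(reg.intercept_, terms))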
  • 38. Keras Regression: Load and Train/Test Split
  path = "./data/"
  filename_read = os.path.join(path, "auto-mpg.csv")
  df = pd.read_csv(filename_read, na_values=['NA', '?'])
  cars = df['name']
  df.drop('name', axis=1, inplace=True)
  missing_median(df, 'horsepower')
  x, y = to_xy(df, "mpg")

  # Split into train/test (x_test/y_test are used as validation data on the next slide)
  x_train, x_test, y_train, y_test = train_test_split(
      x, y, test_size=0.25, random_state=42)
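  missing_median is another helper from the course notebooks rather than a pandas or Keras function. Assuming it simply fills missing values with the column median, a one-function stand-in is:

    def missing_median(df, name):
        # Hypothetical stand-in: replace missing values with the column's median.
        med = df[name].median()
        df[name] = df[name].fillna(med)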
  • 39. Keras Regression: Build and Fit
  model = Sequential()
  model.add(Dense(10, input_dim=x.shape[1],
                  kernel_initializer='normal', activation='relu'))
  model.add(Dense(1, kernel_initializer='normal'))
  model.compile(loss='mean_squared_error', optimizer='adam')

  monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3,
                          patience=5, verbose=1, mode='auto')
  model.fit(x_train, y_train, validation_data=(x_test, y_test),
            callbacks=[monitor], verbose=2, epochs=1000)
  • 40. Keras Regression: Predict and Evaluate
  # Predict
  pred = model.predict(x_test)

  # Measure RMSE error. RMSE is common for regression.
  score = np.sqrt(metrics.mean_squared_error(pred, y_test))
  print("Final score (RMSE): {}".format(score))
  • 41. Preparing Data for Predictive Modeling is Hard
  • The iris and MPG datasets are nicely formatted.
  • Real-world data is a complex mix of XML, JSON, textual formats, binary formats, and web-service-accessed content (the variety "V" in "Big Data").
  • More complex security data will be presented later in this talk.
  • 43. Predicting Images: What is Different?
  • We will usually use classification, though regression is still an option.
  • The input to the neural network is now 3D (height, width, color).
  • Data are not transformed; no z-scores or dummy variables.
  • Processing time is usually much longer.
  • We now have different layer types: dense layers (just like before), convolution layers, and max pooling layers.
  • Data will no longer arrive as CSV files. TensorFlow provides some utilities for going directly from an image to the input of a neural network (a small example follows below).
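  As one concrete example of going from an image file to network input (a minimal sketch using the image utilities that ship with Keras 2.0.x and a hypothetical file cat.jpg; not necessarily the exact utility the slide has in mind):

    import numpy as np
    from keras.preprocessing.image import load_img, img_to_array

    img = load_img("cat.jpg", target_size=(64, 64))  # load and resize
    arr = img_to_array(img)                          # shape (64, 64, 3), float32
    arr /= 255.0                                     # scale pixel values to [0, 1]
    batch = np.expand_dims(arr, axis=0)              # add batch dimension: (1, 64, 64, 3)
    print(batch.shape)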
  • 44. Sources of Image Data: CIFAR10 and CIFAR100
  • 45. Sources of Image Data: ImageNet
  • 46. Sources of Training Data: The MNIST Data Set
  • 48. Convolutional Neural Networks (CNN)
  A LeNet-5 CNN network (LeCun, 1998)
  • Dense layers - fully connected layers.
  • Convolution layers - used to scan across images.
  • Max pooling layers - used to downsample images.
  • Dropout layers - used to add regularization.
  • 49. Loading the Digits
  (x_train, y_train), (x_test, y_test) = mnist.load_data()

  print("Shape of x_train: {}".format(x_train.shape))
  print("Shape of y_train: {}".format(y_train.shape))
  print()
  print("Shape of x_test: {}".format(x_test.shape))
  print("Shape of y_test: {}".format(y_test.shape))

  Shape of x_train: (60000, 28, 28)
  Shape of y_train: (60000,)
  Shape of x_test: (10000, 28, 28)
  Shape of y_test: (10000,)
  • 50. Display a Digit
  %matplotlib inline
  import matplotlib.pyplot as plt
  import numpy as np

  digit = 101  # Change to choose a new digit
  a = x_train[digit]
  plt.imshow(a, cmap='gray', interpolation='nearest')
  print("Image (#{}): Which is digit '{}'".format(digit, y_train[digit]))
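  The deck does not show the preprocessing step between loading and building the network, but the next slides assume input_shape, num_classes, batch_size, and epochs already exist. A minimal sketch of that step, following the standard Keras MNIST example (an assumption, not necessarily the author's exact code):

    import keras
    from keras.datasets import mnist

    num_classes = 10
    batch_size = 128   # assumed; value from the standard Keras MNIST example
    epochs = 12        # assumed
    img_rows, img_cols = 28, 28

    (x_train, y_train), (x_test, y_test) = mnist.load_data()

    # Add the channel dimension expected by Conv2D (channels-last) and scale to [0, 1].
    x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1).astype('float32') / 255
    x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1).astype('float32') / 255
    input_shape = (img_rows, img_cols, 1)

    # One-hot encode the labels for categorical_crossentropy.
    y_train = keras.utils.to_categorical(y_train, num_classes)
    y_test = keras.utils.to_categorical(y_test, num_classes)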
  • 51. Build the CNN Network
  model = Sequential()
  model.add(Conv2D(32, kernel_size=(3, 3), activation='relu',
                   input_shape=input_shape))
  model.add(Conv2D(64, (3, 3), activation='relu'))
  model.add(MaxPooling2D(pool_size=(2, 2)))
  model.add(Dropout(0.25))
  model.add(Flatten())
  model.add(Dense(128, activation='relu'))
  model.add(Dropout(0.5))
  model.add(Dense(num_classes, activation='softmax'))

  model.compile(loss=keras.losses.categorical_crossentropy,
                optimizer=keras.optimizers.Adadelta(),
                metrics=['accuracy'])
  • 52. Fit and Evaluate
  model.fit(x_train, y_train,
            batch_size=batch_size,
            epochs=epochs,
            verbose=2,
            validation_data=(x_test, y_test))

  score = model.evaluate(x_test, y_test, verbose=0)
  print('Test loss: {}'.format(score[0]))
  print('Test accuracy: {}'.format(score[1]))

  Test loss: 0.03047790436172363
  Test accuracy: 0.9902
  Elapsed time: 1:30:40.79 (for CPU, approx 30 min GPU)
  • 54. How is an RNN Different?
  • RNN = Recurrent Neural Network.
  • LSTM = Long Short-Term Memory.
  • Most models will always produce the same output for the same input: previous inputs do not matter to a non-recurrent neural network.
  • To convert today’s temperature from Fahrenheit to Celsius, the value of yesterday’s temperature does not matter.
  • To predict tomorrow’s closing price for a stock, you need more than just today’s price.
  • To determine if a packet is part of an attack, previous packets must be considered.
  • 55. How do LSTMs Work?
  • The LSTM units in a deep neural network are its short-term memory.
  • This short-term memory is governed by 3 gates:
  – Input Gate: When do we remember?
  – Output Gate: When do we act?
  – Forget Gate: When do we forget?
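  For reference, the standard LSTM gate equations (not shown on the original slide) are:

    f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)                                (forget gate)
    i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)                                (input gate)
    o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)                                (output gate)
    c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_c x_t + U_c h_{t-1} + b_c)   (cell state)
    h_t = o_t \odot \tanh(c_t)                                               (hidden state / output)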
  • 56. Sample Recurrent Data: Stock Price & Volume
  x = [
      [[32,1383],[41,2928],[39,8823],[20,1252],[15,1532]],
      [[35,8272],[32,1383],[41,2928],[39,8823],[20,1252]],
      [[37,2738],[35,8272],[32,1383],[41,2928],[39,8823]],
      [[34,2845],[37,2738],[35,8272],[32,1383],[41,2928]],
      [[32,2345],[34,2845],[37,2738],[35,8272],[32,1383]],
  ]
  y = [1, -1, 0, -1, 1]
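  Keras LSTM layers expect input shaped as (samples, time steps, features); converting the nested list above makes that explicit:

    import numpy as np

    # x is the nested list from the slide above:
    # 5 samples, each a sequence of 5 time steps with 2 features (price, volume).
    x = np.array(x, dtype=np.float32)
    print(x.shape)  # (5, 5, 2) -> (samples, time steps, features)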
  • 57. LSTM Example
  max_features = 4  # 0,1,2,3 (total of 4)
  x = [
      [[0],[1],[1],[0],[0],[0]],
      [[0],[0],[0],[2],[2],[0]],
      [[0],[0],[0],[0],[3],[3]],
      [[0],[2],[2],[0],[0],[0]],
      [[0],[0],[3],[3],[0],[0]],
      [[0],[0],[0],[0],[1],[1]]
  ]
  x = np.array(x, dtype=np.float32)
  y = np.array([1,2,3,2,3,1], dtype=np.int32)
  • 58. Build an LSTM
  model = Sequential()
  model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2,
                 input_dim=1))
  model.add(Dense(4, activation='sigmoid'))
  model.compile(loss='binary_crossentropy',
                optimizer='adam',
                metrics=['accuracy'])
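  The deck does not show the training call between this slide and the next. A minimal sketch (an assumption: one-hot encoding y so it matches the 4-unit sigmoid output and the binary_crossentropy loss; the epoch count is chosen arbitrarily):

    from keras.utils import to_categorical

    # y holds the labels 1..3; one-hot to 4 columns (classes 0..3) to match Dense(4).
    y_onehot = to_categorical(y, num_classes=4)
    model.fit(x, y_onehot, epochs=200, verbose=0)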
  • 59. Test the LSTM
  def runit(model, inp):
      inp = np.array(inp, dtype=np.float32)
      pred = model.predict(inp)
      return np.argmax(pred[0])

  print(runit(model, [[[0],[0],[0],[0],[3],[3]]]))
  3

  print(runit(model, [[[4],[4],[0],[0],[0],[0]]]))
  4
  • 60. GPUs and Deep Learning
  • 61. Low-Level GPU Frameworks
  • CUDA – NVIDIA's low-level GPGPU framework.
  • OpenCL – An open framework supporting CPUs, GPUs, and other devices. Managed by the Khronos Group.
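  To check whether the TensorFlow backend actually sees a GPU (this works in TensorFlow 1.x, as used earlier in this deck):

    from tensorflow.python.client import device_lib

    # Lists the CPU and any CUDA GPU devices TensorFlow can use.
    for d in device_lib.list_local_devices():
        print(d.name, d.device_type)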
  • 62. Thank you! • Jeff Heaton • http://www.jeffheaton.com