numpy
• NumPy is the fundamental package for scientific computing with Python.
• PyTorch, TensorFlow, Chainer, MXNet, and Theano can all exchange data in numpy format (see the sketch below).
• PIL, matplotlib, and scikit-image also use numpy as the interchange medium.
http://www.numpy.org/
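A minimal sketch of the interchange in PyTorch (the other frameworks have analogous converters):

```python
import numpy as np
import torch

# numpy -> torch and back; torch.from_numpy() shares memory with the array.
a = np.ones((2, 3), dtype=np.float32)
t = torch.from_numpy(a)   # zero-copy: t views a's buffer
b = t.numpy()             # zero-copy back to numpy

t.add_(1)                 # in-place update is visible through a and b too
print(a[0, 0])            # 2.0 -- same underlying buffer
```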
Static vs. Dynamic graph frameworks
• Static: define and run
• Caffe
• Torch
• TensorFlow
• Dynamic: define by run
• Chainer
• PyTorch
• the distinction is like compiled vs. interpreted languages: it affects resource allocation, the ability to run step by step, and speed (see the sketch below)
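A minimal sketch of define-by-run in PyTorch (current API): the graph is built while ordinary Python executes, so data-dependent control flow just works.

```python
import torch

x = torch.randn(3, requires_grad=True)

# The number of multiplications below depends on the data itself,
# so the graph can have a different shape on every run.
y = x * 2
while y.norm() < 100:
    y = y * 2

y.sum().backward()   # backprop through however many steps actually ran
print(x.grad)
```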
autograd
• ALL history of graph computation is recorded (see the sketch below)
• in Variable
• a Tensor is just a high-dimensional matrix
(diagram: a Variable wraps a torch.Tensor as input; Variables chain as Var 1 → Var 2 → Var 3 → loss L)
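A minimal sketch of the recorded history. Note that in current PyTorch, Variable has been merged into Tensor, and the record shows up as grad_fn:

```python
import torch

x = torch.ones(2, 2, requires_grad=True)  # start tracking history
var1 = x + 1
var2 = var1 * 3
loss = var2.mean()

# Every intermediate tensor remembers the operation that created it.
print(var1.grad_fn)   # <AddBackward0 ...>
print(var2.grad_fn)   # <MulBackward0 ...>
print(loss.grad_fn)   # <MeanBackward0 ...>
```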
autograd (forward pass)
• during the forward pass, each Variable records its output torch.Tensor (self), its input torch.Tensor, and a pointer to the previous tensor in the chain (see the sketch below)
(diagram: forward pass building the chain Var 1 → Var 2 → Var 3 → L)
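The "pointer to the previous tensor" can be inspected through grad_fn.next_functions. This is an autograd implementation detail, shown here only to make the chain concrete:

```python
import torch

x = torch.ones(3, requires_grad=True)
var1 = x * 2
var2 = var1 + 1
L = var2.sum()

# Each grad_fn links back to the grad_fn of its inputs,
# forming the chain L -> var2 -> var1 -> x.
print(L.grad_fn)                                      # SumBackward0
print(L.grad_fn.next_functions)                       # points at AddBackward0
print(L.grad_fn.next_functions[0][0].next_functions)  # points at MulBackward0
```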
autograd (backward pass)
• during the backward pass, each Variable follows its pointer back to the previous tensor: from its output torch.Tensor (self), its input torch.Tensor (previous), and the incoming grad_output, it computes and passes a new grad_output upstream (see the sketch below)
(diagram: backward pass from L back through Var 3 → Var 2 → Var 1, propagating grad_output)
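A minimal sketch of the backward pass: calling .backward() walks the recorded chain from the loss back to the inputs, filling in .grad along the way.

```python
import torch

x = torch.ones(2, 2, requires_grad=True)
loss = ((x + 1) * 3).mean()   # forward pass records the graph

loss.backward()               # backward pass propagates grad_output
print(x.grad)                 # dloss/dx = 3/4 for every element
```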
nn
• What composes a ReLU layer?
• What composes a Convolutional layer?
• What composes a Linear (Fully Connected) layer? (see the sketch below)
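A minimal sketch of the answer for two of these: a Linear layer is learnable parameters (weight, bias) plus a function, while ReLU is only a function, with no parameters at all.

```python
import torch
import torch.nn as nn

layer = nn.Linear(4, 2)
print(layer.weight.shape)    # torch.Size([2, 4]) -- learnable parameter
print(layer.bias.shape)      # torch.Size([2])    -- learnable parameter

x = torch.randn(3, 4)
out = layer(x)                               # what the module computes...
same = x @ layer.weight.t() + layer.bias     # ...is just this function
print(torch.allclose(out, same))             # True

print(list(nn.ReLU().parameters()))          # [] -- ReLU holds no parameters
```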
torch.nn vs. torch.nn.functional
• a bunch of functions/layers:
• ReLU
• Linear
• Conv
• Pooling
• ConvTranspose
• Sigmoid
• LogSoftMax
• …
http://pytorch.org/docs/master/nn.html#torch-nn-functional
(diagram: a function maps an input torch.Tensor to an output torch.Tensor (self) inside a Variable; see the sketch below)
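A minimal sketch of the split: torch.nn wraps each operation as a Module object (which can carry parameters and state), while torch.nn.functional exposes the bare function.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(2, 3)

relu_module = nn.ReLU()      # object form: instantiate, then call
y1 = relu_module(x)

y2 = F.relu(x)               # functional form: plain function call

print(torch.equal(y1, y2))   # True -- identical computation
```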
optimization
• W_{t+1} = W_t − η ΔW, where ΔW is the gradient of the loss w.r.t. W and η is the learning rate
• get all parameters with model.parameters()
• weight, bias, … are all collected (see the sketch below)
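A minimal sketch with torch.optim.SGD; the Linear model here is a stand-in for illustration:

```python
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(4, 2)

# model.parameters() yields every learnable tensor: weight, bias, ...
for p in model.parameters():
    print(p.shape)           # torch.Size([2, 4]), then torch.Size([2])

# On each .step(), SGD applies W <- W - lr * W.grad to every parameter.
optimizer = optim.SGD(model.parameters(), lr=0.01)
```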
training loop
• clear previous gradients!!! (full loop sketched after this list)
• by optim.zero_grad()
• get a batch of data
• forward (output get)
• by sending input tensor to model
• compute loss
• back propagation (gradient get)
• by .backward() call
• update weights
• by optim.step()
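A minimal sketch of the whole loop; the model, data, and loss function here are stand-ins for illustration:

```python
import torch
import torch.nn as nn
import torch.optim as optim

model = nn.Linear(10, 2)                    # stand-in model
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01)

inputs = torch.randn(8, 10)                 # a (fake) batch of data
targets = torch.randint(0, 2, (8,))

for step in range(5):
    optimizer.zero_grad()                   # clear previous gradients!!!
    outputs = model(inputs)                 # forward (output get)
    loss = criterion(outputs, targets)      # compute loss
    loss.backward()                         # back propagation (gradient get)
    optimizer.step()                        # update weights
    print(step, loss.item())
```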