(Presentation at COSCUP 2012) Discusses why you should try to develop your own operating system and how the microkernel approach can speed up the process.
Using an Arduino as a front end to measure temperature as streaming data, published via MQTT and processed through Spark Streaming in near real time back into a MySQL database.
A second Arduino turns a light on when the measured count exceeds a threshold, as a real-time experiment case.
Mesos-based Data Infrastructure @ Douban (Zhongbo Tian)
How to build an elastic and efficient platform to support various Big Data and Machine Learning tasks is a challenge for many corporations. In this presentation, Zhongbo Tian will give an overview of the Mesos-based core infrastructure of Douban, and demonstrate how to integrate the platform with state-of-the-art Big Data/ML technologies.
Tutorial on the LinkIt 7697 IoT dev board, including Arduino IDE setup, the BlocklyDuino GUI, MediaTek Cloud Sandbox, and how to interact through BLE (App Inventor).
Powered by CAVEDU Education http://www.cavedu.com;
App Inventor TW http://www.appinventor.tw
Maximize Your Production Effort (Chinese), Slant Six Games
Efficient Content Authoring Tools and Pipeline for Inter-Studio Asset Development
With the complexity of today's video games and their associated tight timelines, it is paramount for video game studios to have a highly efficient content authoring process and production workflow. With a trend towards outsourced development of game assets, there are additional considerations that are important for achieving optimal workflow between studios that are co-developing or sharing assets. This lecture gives valuable insight into how to create new content authoring tools and data transformation pipelines that promote efficient workflow for both internal and remote production teams. Specific considerations for outsourcing and worldwide development are made along the way.
5. Deep-learning-related work over the past two years:
• image processing
• medical imaging
• predicts human age given brain MRI images
• mechanical engineering
• predicts system friction given environment parameters
• ITRI projects
• 3D object recognition, image deblurring
• odd jobs
• GPU cluster management (low cost, but high performance and usability)
14. numpy
• NumPy is the fundamental package for scientific computing with Python.
• PyTorch, TensorFlow, Chainer, MXNet, and Theano can all exchange data in numpy format
• PIL, matplotlib, and scikit-image also use numpy as the interchange medium
http://www.numpy.org/
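A minimal sketch of the interchange the slide describes, using the current NumPy/PyTorch conversion APIs:

```python
import numpy as np
import torch

# NumPy -> PyTorch: torch.from_numpy shares memory with the array
a = np.arange(6, dtype=np.float32).reshape(2, 3)
t = torch.from_numpy(a)
# PyTorch -> NumPy: .numpy() on a CPU tensor
b = t.mul(2).numpy()
print(b.tolist())  # [[0.0, 2.0, 4.0], [6.0, 8.0, 10.0]]
```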
15. Static vs. Dynamic graph frameworks
• Static: define and run
• Caffe
• Torch
• TensorFlow
• Dynamic: define by run
• Chainer
• PyTorch
• analogy: compiled language vs. interpreted language
• trade-offs: resource allocation, step-by-step execution, speed
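A small sketch of "define by run" in PyTorch (a hypothetical toy function): the graph is built by ordinary Python control flow on each call, so it can differ between calls.

```python
import torch

def dynamic_net(x, n_steps):
    # "define by run": the graph is rebuilt by ordinary Python
    # control flow each time, so n_steps may vary per call
    for _ in range(n_steps):
        x = torch.relu(x * 2 - 1)
    return x

x = torch.tensor([2.0], requires_grad=True)
y = dynamic_net(x, n_steps=3)   # 2 -> 3 -> 5 -> 9
y.backward()                    # gradients flow through whatever graph was built
print(x.grad.item())            # 2 * 2 * 2 = 8.0
```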
38. autograd
• ALL history of graph computation is recorded
• in Variable
• a Tensor is just a high-dimensional matrix
(diagram: a Variable wraps a torch.Tensor as input; computation graph Var 1 → Var 2 → Var 3 → L)
39. autograd (forward pass)
• ALL history of graph computation is recorded
• in Variable
• a Tensor is just a high-dimensional matrix
(diagram: each Variable stores its output torch.Tensor (self) plus a pointer to the previous tensor, its input; graph Var 1 → Var 2 → Var 3 → L)
40. autograd (backward pass)
• ALL history of graph computation is recorded
• in Variable
• a Tensor is just a high-dimensional matrix
(diagram: during backward, each Variable receives a grad_output torch.Tensor and uses its stored pointer to the previous tensor (input) and its own output (self) to pass gradients back)
41. autograd (backward pass)
• ALL history of graph computation is recorded
• in Variable
• a Tensor is just a high-dimensional matrix
(diagram: grad_output tensors continue to propagate backward from L through Var 3 → Var 2 → Var 1 along the stored pointers)
42. nn
• What composes a ReLU layer?
• What composes a Convolutional layer?
• What composes a Linear (FullyConnected) layer?
43. torch.nn vs. torch.nn.functional
• a bunch of functions/layers:
• ReLU
• Linear
• Conv
• Pooling
• ConvTranspose
• Sigmoid
• LogSoftMax
• …
http://pytorch.org/docs/master/nn.html#torch-nn-functional
(diagram: a function maps an input torch.Tensor, wrapped in a Variable, to an output torch.Tensor (self))
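A minimal sketch of the distinction: torch.nn gives a Module object you instantiate (which can hold parameters), torch.nn.functional gives the bare stateless function, and for a parameter-free layer like ReLU they compute the same thing.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.tensor([-1.0, 0.0, 2.0])

out_module = nn.ReLU()(x)   # torch.nn: a Module instance
out_fn = F.relu(x)          # torch.nn.functional: the plain function

print(torch.equal(out_module, out_fn))  # True
```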
56. optimization
• W_{t+1} = W_t − η∇W
• get all parameters with model.parameters()
• weights, biases, … will all be collected
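A small sketch of the update rule above with plain SGD (no momentum), checking that optim.step() really applies W_{t+1} = W_t − η·grad to the parameters collected by model.parameters():

```python
import torch
import torch.nn as nn

model = nn.Linear(3, 1)
# model.parameters() yields every learnable tensor: weight and bias
opt = torch.optim.SGD(model.parameters(), lr=0.1)

loss = model(torch.randn(8, 3)).pow(2).mean()
loss.backward()

w_before = model.weight.detach().clone()
opt.step()  # applies W_{t+1} = W_t - lr * grad

print(torch.allclose(model.weight.detach(),
                     w_before - 0.1 * model.weight.grad))  # True
```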
57. training loop
• clear previous gradients!!!
• get a batch of data
• forward pass (get the output)
• by sending the input tensor to the model
• compute the loss
• back propagation (get the gradients)
• by calling .backward()
• update the weights
• by calling optim.step()
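The steps above, spelled out as a minimal sketch: fitting y = 2x with a one-unit linear model (toy data and hyperparameters are illustrative).

```python
import torch
import torch.nn as nn

model = nn.Linear(1, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

x = torch.tensor([[0.0], [1.0], [2.0], [3.0]])  # one batch of data
y = 2 * x

for step in range(300):
    optimizer.zero_grad()       # clear previous gradients!!!
    output = model(x)           # forward pass
    loss = loss_fn(output, y)   # compute the loss
    loss.backward()             # back propagation
    optimizer.step()            # update the weights

print(loss.item() < 1e-3)  # True: the model has fit the line
```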