A Random Forest using a Multi-valued Decision Diagram on an FPGA — Hiroki Nakahara
Presentation slides from ISMVL (Int'l Symp. on Multiple-Valued Logic), May 22nd, 2017, Novi Sad, Serbia. The talk presents a machine-learning realization on an FPGA that achieves high performance at low power.
6. Deep Neural Networks (DNNs) within AI

[Figure: taxonomy of AI. Brain-inspired approaches (neuromorphic: silicon retina, electronic cochlea, attention-based processing; bio-mimicry) sit alongside AI techniques such as fuzzy logic, knowledge representation, natural language processing, and genetic algorithms. Machine learning (SVM, decision tree, k-nearest neighbor, Bayesian methods) is a subset of AI, and deep learning (DNN, RNN) is a subset of machine learning.]

J. Park, "Deep Neural Network SoC: Bringing deep learning to mobile devices," Deep Neural Network SoC Workshop, 2016.
7. Artificial Neuron (AN)

[Figure: an artificial neuron summing weighted inputs x0 = 1, x1, x2, ..., xN with weights w0 (bias), w1, w2, ..., wN into an internal state u, which an activation function f(u) maps to the output y.]

• xi: input signal
• wi: weight
• u: internal state
• f(u): activation function (sigmoid, ReLU, etc.)
• y: output signal

y = f(u), u = Σ_{i=0}^{N} wi xi
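The neuron above can be sketched in a few lines of Python; NumPy is assumed, and the input and weight values are made-up illustration numbers, not values from the slides:

```python
import numpy as np

def relu(u):
    """ReLU activation: f(u) = max(0, u)."""
    return np.maximum(0.0, u)

def artificial_neuron(x, w, f=relu):
    """y = f(u) with u = sum_{i=0}^{N} w_i * x_i.
    x[0] must be 1 so that w[0] acts as the bias w0."""
    u = np.dot(w, x)   # internal state
    return f(u)        # output signal

# Illustration: N = 2 inputs plus the constant bias input x0 = 1
x = np.array([1.0, 0.5, -0.25])   # x0=1, x1, x2
w = np.array([0.1, 0.8, 0.4])     # w0 (bias), w1, w2
y = artificial_neuron(x, w)       # u = 0.4, so y = 0.4
```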
17. Artificial Neuron (AN)

(Recap of slide 7: y = f(u), u = Σ_{i=0}^{N} wi xi, with inputs xi, weights wi, bias w0, and activation f such as sigmoid or ReLU.)
19. Required Specifications

• Deep learning on a server requires two billion multiply-accumulate (MAC) operations!

J. Park, "Deep Neural Network SoC: Bringing deep learning to mobile devices," Deep Neural Network SoC Workshop, 2016.
J. Cong and B. Xiao, "Minimizing computation in convolutional neural networks," Artificial Neural Networks and Machine Learning (ICANN2014), 2014, pp. 281-290.
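As a sanity check on numbers of this scale, the MAC count of one convolutional layer is the number of outputs times the MACs per output. The layer shape below is a hypothetical VGG-like example chosen for illustration, not a layer from the cited talk:

```python
def conv_macs(out_h, out_w, out_c, k_h, k_w, in_c):
    """MAC operations for one convolutional layer:
    each of the out_h * out_w * out_c outputs needs
    a k_h * k_w * in_c dot product."""
    return out_h * out_w * out_c * (k_h * k_w * in_c)

# Hypothetical layer: 224x224 output, 64 output channels,
# 3x3 kernels over 64 input channels
macs = conv_macs(224, 224, 64, 3, 3, 64)
print(macs)  # 1849688064 -- already ~1.8 billion MACs for a single layer
```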
31. Binarized DCNN

• Treats only binarized (+1/-1) values for both weights and inputs
• Except for the first and the last layers

[Figure: a binarized neuron summing weighted inputs x0 = 1, x1, x2, ..., xN with binary weights w0 (bias), w1, w2, ..., wN into an integer internal state u, followed by sign(u) producing the output s.]

• x: input (8 bit for layer 1)
• si: output
• wi: weight (+1 or -1)
• u: internal state (integer)
• sign(u): sign bit of u (+1 or -1)

M. Courbariaux, I. Hubara, D. Soudry, R. El-Yaniv, Y. Bengio, "Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1," Computer Research Repository (CoRR), Mar. 2016, http://arxiv.org/pdf/1602.02830v3.pdf
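A binarized neuron in the style of Courbariaux et al. can be sketched as below. The second function shows the standard trick that makes this FPGA-friendly: encode +1/-1 as bits 1/0, and the ±1 dot product becomes an XNOR followed by a popcount. The 4-element vectors are illustrative values:

```python
import numpy as np

def binarized_neuron(x, w):
    """s = sign(u), u = sum_i w_i * x_i with w_i, x_i in {+1, -1}.
    u is an integer; sign(0) is mapped to +1."""
    u = int(np.dot(w, x))  # integer internal state
    return 1 if u >= 0 else -1

def xnor_popcount_neuron(xb, wb, n):
    """Same dot product on bit-encoded vectors (+1 -> 1, -1 -> 0):
    matching bits contribute +1, differing bits -1, so
    u = 2 * popcount(XNOR(xb, wb)) - n for n-bit words."""
    matches = bin(~(xb ^ wb) & ((1 << n) - 1)).count("1")
    u = 2 * matches - n
    return 1 if u >= 0 else -1

# Illustration with N = 4 binarized inputs and weights
x = np.array([+1, -1, +1, +1])
w = np.array([+1, +1, +1, -1])
# Bit encodings (LSB = element 0): x -> 0b1101, w -> 0b0111
assert binarized_neuron(x, w) == xnor_popcount_neuron(0b1101, 0b0111, 4)
```

The XNOR/popcount form replaces N multiplications with one word-wide logic operation, which is why binarized networks map so efficiently onto FPGA lookup tables.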