Li, Mu, et al. "Efficient mini-batch training for stochastic optimization." Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 2014.
http://www.cs.cmu.edu/~muli/file/minibatch_sgd.pdf
KDD2014 reading group (Kansai session): http://www.ml.ist.i.kyoto-u.ac.jp/kdd2014reading
Paper introduction:
Pan, Wei-Xing, et al. "Dopamine cells respond to predicted events during classical conditioning: evidence for eligibility traces in the reward-learning network." The Journal of neuroscience 25.26 (2005): 6235-6242.
Sotetsu Koyamada (Presenter), Masanori Koyama, Ken Nakae, Shin Ishii
Graduate School of Informatics, Kyoto University
[Abstract]
We present a novel algorithm (Principal Sensitivity Analysis; PSA) to analyze the knowledge of the classifier obtained from supervised machine learning techniques. In particular, we define principal sensitivity map (PSM) as the direction on the input space to which the trained classifier is most sensitive, and use analogously defined k-th PSM to define a basis for the input space. We train neural networks with artificial data and real data, and apply the algorithm to the obtained supervised classifiers. We then visualize the PSMs to demonstrate the PSA’s ability to decompose the knowledge acquired by the trained classifiers.
[Keywords]
Sensitivity analysis · Sensitivity map · PCA · Dark knowledge · Knowledge decomposition
@PAKDD2015
May 20, 2015
Ho Chi Minh City, Viet Nam
http://link.springer.com/chapter/10.1007%2F978-3-319-18038-0_48#page-1
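The abstract describes the principal sensitivity map (PSM) as the input-space direction to which the trained classifier is most sensitive, with the k-th PSM giving a basis. A minimal NumPy sketch of that idea: assuming sensitivity in a unit direction v is the mean squared directional derivative of the classifier output, the PSMs fall out as eigenvectors of the second-moment matrix of input gradients. The quadratic toy classifier and sampled data below are illustrative stand-ins, not the paper's networks or datasets.

```python
import numpy as np

# Toy differentiable "classifier" score: f(x) = x^T W x on R^d.
# (Illustrative stand-in for a trained network's output unit.)
rng = np.random.default_rng(0)
d = 5
W = rng.normal(size=(d, d))

def grad_f(x):
    # Gradient of f(x) = x^T W x with respect to x.
    return (W + W.T) @ x

# Sampled inputs, standing in for the evaluation distribution.
X = rng.normal(size=(1000, d))

# Second-moment matrix of gradients: K = E[grad f(x) grad f(x)^T].
# v^T K v is then the mean squared directional derivative along v.
G = np.array([grad_f(x) for x in X])
K = G.T @ G / len(X)

# Eigenvectors of K in decreasing eigenvalue order are the 1st, 2nd, ...
# principal sensitivity maps; eigenvalues are the sensitivities.
eigvals, eigvecs = np.linalg.eigh(K)
order = np.argsort(eigvals)[::-1]
psms = eigvecs[:, order].T          # psms[k] is the (k+1)-th PSM
sensitivities = eigvals[order]
```

In the paper's setting the rows of `psms` would be reshaped to the input dimensions (e.g. an image grid) and visualized to decompose the classifier's acquired knowledge.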
References
Cited papers
- Seide, Frank, Gang Li, and Dong Yu. "Conversational Speech Transcription Using Context-Dependent Deep Neural Networks." Interspeech. 2011.
- Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. "ImageNet classification with deep convolutional neural networks." Advances in Neural Information Processing Systems. 2012.
- LeCun, Yann, et al. "Gradient-based learning applied to document recognition." Proceedings of the IEEE 86.11 (1998): 2278-2324.
- Taigman, Yaniv, et al. "DeepFace: Closing the Gap to Human-Level Performance in Face Verification." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2014.
- Le, Quoc V. "Building high-level features using large scale unsupervised learning." Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on. IEEE, 2013.
Other reference materials
- http://deeplearning.net/tutorial/ (CNN tutorial)
- http://www.slideshare.net/beam2d/deep-learning20140130 (background on the deep learning boom)
- http://d.hatena.ne.jp/repose/20130508/1368020782 (DNNs on Kaggle)