KDD2016 reading group: https://atnd.org/events/80771
Paper: “Why Should I Trust You?”: Explaining the Predictions of Any Classifier
Authors: M. T. Ribeiro, S. Singh, and C. Guestrin
Paper link: http://www.kdd.org/kdd2016/subtopic/view/why-should-i-trust-you-explaining-the-predictions-of-any-classifier
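The paper's method (LIME) explains an individual prediction by fitting a simple, locally weighted linear surrogate to the black-box model around the instance being explained. Below is a minimal NumPy sketch of that local-surrogate idea for tabular inputs; it uses ridge regression rather than the paper's sparse K-LASSO feature selection, and the black box, kernel width, and all names are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def explain_instance(predict_proba, x, num_samples=5000, kernel_width=0.75, rng=None):
    """LIME-style local surrogate (a sketch, not the paper's reference code).

    predict_proba: black-box function mapping an (n, d) array to P(class = 1).
    x:             the instance (d,) whose prediction we want to explain.
    Returns the per-feature weights of a locally weighted linear model.
    """
    rng = np.random.default_rng(rng)
    d = x.shape[0]
    # Perturb the instance with Gaussian noise to probe the model locally.
    Z = x + rng.normal(scale=1.0, size=(num_samples, d))
    y = predict_proba(Z)                       # black-box outputs on perturbations
    # Proximity kernel: perturbations near x get higher weight.
    dist2 = np.sum((Z - x) ** 2, axis=1)
    w = np.exp(-dist2 / kernel_width ** 2)
    # Weighted ridge regression: solve (A^T W A + lam I) beta = A^T W y.
    A = np.hstack([Z, np.ones((num_samples, 1))])  # intercept column appended
    lam = 1e-3
    beta = np.linalg.solve(A.T @ (w[:, None] * A) + lam * np.eye(d + 1),
                           A.T @ (w * y))
    return beta[:-1]                           # feature weights, intercept dropped

# Toy black box: a logistic model whose local weights are recoverable.
coef = np.array([2.0, -1.0, 0.0])
black_box = lambda Z: 1.0 / (1.0 + np.exp(-Z @ coef))
print(explain_instance(black_box, np.zeros(3), rng=0))  # roughly proportional to coef
```

The recovered weights approximate the black box's local gradient at x, which is what makes the surrogate a faithful local explanation even when the global model is nonlinear.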
Papers introduced:
Pan, Wei-Xing, et al. "Dopamine cells respond to predicted events during classical conditioning: Evidence for eligibility traces in the reward-learning network." The Journal of Neuroscience 25.26 (2005): 6235-6242.
This document presents Principal Sensitivity Analysis (PSA), a method for summarizing and visualizing the knowledge learned by machine learning models. PSA identifies the principal directions in input space to which a model is most sensitive and visualizes them as Principal Sensitivity Maps (PSMs). PSMs distinguish how different input features characterize different classes, and local sensitivity measures show how each PSM contributes to a specific classification. Demonstrated on a neural network for digit classification, PSA found that different PSMs help distinguish different digit pairs, yielding insights beyond what traditional sensitivity analysis provides; a sketch of the computation follows this list.
Li, Mu, et al. "Efficient mini-batch training for stochastic optimization." Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2014.
http://www.cs.cmu.edu/~muli/file/minibatch_sgd.pdf
KDD2014 reading group, Kansai venue: http://www.ml.ist.i.kyoto-u.ac.jp/kdd2014reading
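To make the PSA summary above concrete, here is a minimal NumPy sketch of one natural way to realize the idea: treat the gradient of the model's class score with respect to the input as a sensitivity vector, average its outer products over data, and take the top eigenvectors of that matrix as the principal sensitivity maps. The toy model, data, and all names are illustrative assumptions, not the code from the work summarized above.

```python
import numpy as np

def principal_sensitivity_maps(grad_fn, X, k=3):
    """PSA sketch: top eigenvectors of the averaged gradient outer-product.

    grad_fn: maps one input (d,) to the gradient (d,) of the model's
             class score with respect to that input.
    X:       data matrix (n, d) over which sensitivity is averaged.
    Returns (maps, strengths): top-k eigenvectors (k, d) and eigenvalues (k,).
    """
    G = np.stack([grad_fn(x) for x in X])   # (n, d) sensitivity vectors
    K = G.T @ G / len(X)                    # K_ij = E[df/dx_i * df/dx_j]
    vals, vecs = np.linalg.eigh(K)          # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:k]
    return vecs[:, order].T, vals[order]

# Toy model: quadratic score f(x) = x^T M x with known sensitive directions.
rng = np.random.default_rng(0)
M = np.diag([3.0, 1.0, 0.1])
grad_fn = lambda x: 2.0 * M @ x             # analytic gradient of f
X = rng.normal(size=(500, 3))
maps, strengths = principal_sensitivity_maps(grad_fn, X)
print(maps[0], strengths)                   # first PSM aligns with the most sensitive axis
```

A "local" variant in the spirit of the summary would project a single input's gradient onto each PSM, showing which map drives that particular prediction.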