References
• GTC JAPAN 2015
• 1011: Use cases of deep learning in the Yahoo speech recognition service "YJVOICE"
• https://youtu.be/PzyV7cPe5bk
• ICASSP 2016: Plenary talk
  (http://2016.ieeeicassp.org/PlenarySpeakers.asp)
• Li Deng - Microsoft Research,
  "Deep Learning for AI: From Machine Perception to Machine Cognition"
• http://2016.ieeeicassp.org/SP16_PlenaryDeng_Slides.pdf
References
• Baidu Research: Deep Speech 2
  Amodei, Dario, et al. "Deep Speech 2: End-to-End Speech Recognition in English and Mandarin."
  Proceedings of the 33rd International Conference on Machine Learning, pp. 173–182, 2016.
• http://jmlr.org/proceedings/papers/v48/amodei16.html
• https://arxiv.org/abs/1512.02595
References
• Theano: multi-GPU (a minimal usage sketch follows this list)
• https://github.com/Theano/Theano/wiki/Using-Multiple-GPUs
• https://github.com/uoguelph-mlrg/theano_multi_gpu
• Using multiple GPUs — Theano 0.8.2 documentation
• http://deeplearning.net/software/theano/tutorial/using_multi_gpu.html
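For context on what the links above cover, here is a minimal multi-GPU sketch following the pattern of the Theano 0.8 "Using multiple GPUs" tutorial. It assumes the gpuarray backend with two contexts mapped to physical GPUs via THEANO_FLAGS; the context names dev0/dev1 and the matrix sizes are illustrative choices, not taken from the slides.

```python
# Assumed environment (gpuarray backend), set before starting Python:
#   THEANO_FLAGS="contexts=dev0->cuda0;dev1->cuda1"
import numpy
import theano
import theano.tensor as T

# Pin one pair of shared matrices to each GPU context via `target`.
v01 = theano.shared(numpy.random.random((1024, 1024)).astype('float32'), target='dev0')
v02 = theano.shared(numpy.random.random((1024, 1024)).astype('float32'), target='dev0')
v11 = theano.shared(numpy.random.random((1024, 1024)).astype('float32'), target='dev1')
v12 = theano.shared(numpy.random.random((1024, 1024)).astype('float32'), target='dev1')

# One compiled function; each dot product executes on its own GPU.
f = theano.function([], [T.dot(v01, v02), T.dot(v11, v12)])
f()
```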
• Chainer User Group (a short sketch of both topics follows this list)
• Free memory of cupy.ndarray - Google Groups
• https://groups.google.com/d/msg/chainer/E5ygPRt-hD8/YHIz7FHbBQAJ
• How to shuffle data in mini-batch training (when training on a GPU) - Google Groups
• https://groups.google.com/d/msg/chainer/ZNyjR2Czo1c/uNVeHuTXAwAJ
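The two threads above concern releasing cupy.ndarray memory and shuffling data for mini-batch training when the model runs on a GPU. The sketch below shows a common Chainer-era idiom under those constraints: keep the dataset on the host, permute indices with NumPy each epoch, move only the current mini-batch with chainer.cuda.to_gpu, and drop references to finished batches so their device memory can be reused. Array names, shapes, and the epoch/batch sizes are made up for illustration.

```python
import numpy as np
from chainer import cuda

# Host-side training data (illustrative shapes).
x_train = np.random.randn(60000, 784).astype(np.float32)
t_train = np.random.randint(0, 10, size=60000).astype(np.int32)

N = len(x_train)
batchsize = 128

for epoch in range(10):
    # Shuffle on the CPU: only the index array is permuted, the data stays put.
    perm = np.random.permutation(N)
    for i in range(0, N, batchsize):
        idx = perm[i:i + batchsize]
        # Transfer just this mini-batch to the GPU.
        x_gpu = cuda.to_gpu(x_train[idx])
        t_gpu = cuda.to_gpu(t_train[idx])
        # ... forward / backward / update would go here ...
        # Dropping the references lets the batch's device memory be reused.
        del x_gpu, t_gpu
```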
References
• Baidu Research: Persistent RNNs
  Diamos, Gregory, et al. "Persistent RNNs: Stashing Recurrent Weights On-Chip."
  Proceedings of the 33rd International Conference on Machine Learning, pp. 2024–2033, 2016.
• https://svail.github.io/persistent_rnns/
• http://jmlr.org/proceedings/papers/v48/diamos16.html
• https://github.com/baidu-research/persistent-rnn
• Microsoft: 1-bit SGD (an illustrative sketch of the core idea follows this list)
  Seide, Frank, et al. "1-Bit Stochastic Gradient Descent and Its Application to Data-Parallel
  Distributed Training of Speech DNNs." INTERSPEECH, 2014.
• https://www.microsoft.com/en-us/research/publication/1-bit-stochastic-gradient-descent-and-application-to-data-parallel-distributed-training-of-speech-dnns/
• http://www.isca-speech.org/archive/interspeech_2014/i14_1058.html
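The paper above quantizes each gradient to one bit per element before it is exchanged between workers and carries the quantization error forward into the next mini-batch (error feedback). The NumPy sketch below illustrates only that quantize-with-residual step on a single array, using a simplified scalar reconstruction scale rather than the per-column values of the paper; it is not Microsoft's implementation and omits the actual data-parallel communication.

```python
import numpy as np

def one_bit_quantize(grad, residual):
    """Quantize a gradient to one sign bit per element (plus a scalar scale),
    keeping the quantization error as feedback for the next step."""
    corrected = grad + residual            # add the error carried over from the last step
    signs = np.sign(corrected)
    signs[signs == 0] = 1.0
    scale = np.mean(np.abs(corrected))     # simplified reconstruction value
    quantized = signs * scale              # what would actually be exchanged
    new_residual = corrected - quantized   # error feedback for the next mini-batch
    return quantized, new_residual

# Toy usage: the residual accumulates whatever quantization threw away.
residual = np.zeros((4, 3), dtype=np.float32)
for step in range(3):
    grad = np.random.randn(4, 3).astype(np.float32)
    quantized_grad, residual = one_bit_quantize(grad, residual)
```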