A brief introduction to "Memory networks", a paper published by Facebook's AI research team, and its extension "Towards AI-complete question answering: A set of prerequisite toy tasks."
[1] Weston, J., Chopra, S., and Bordes, A. Memory networks. In International Conference on Learning Representations (ICLR), 2015a.
[2] Weston, J., Bordes, A., Chopra, S., and Mikolov, T. Towards AI-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698, 2015b.
13. CSTs used for language-model computation
• To compute the language model, the following two CSTs are built:
• CST
  • built over the text T
  • alphabet Σ = {words appearing in the text}
• reversed CST
  • built over the reversed text, i.e. T with its word order reversed
  • alphabet Σ = {words appearing in the text}
Σ = {the, old, night, keeper, keeps, keep, in, town, #}
T = "#the old night keeper keeps the keep in the town# the night keeper keeps the keep in the night#$"
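The two structures above can be illustrated with a small sketch. This is not the paper's implementation: a plain word-level suffix array stands in for the compressed suffix tree (CST), since both answer the same query the language model needs, namely how often a token n-gram occurs in T (the forward structure) or in the reversal of T (the reversed structure). The function names and the whitespace-tokenized form of T are assumptions for this example.

```python
from bisect import bisect_left, bisect_right

def build_suffix_array(tokens):
    """Suffix array of a token sequence: start indices of all
    suffixes, sorted lexicographically by token."""
    return sorted(range(len(tokens)), key=lambda i: tokens[i:])

def count_occurrences(tokens, sa, pattern):
    """Count occurrences of a token pattern by binary search over
    the sorted suffixes (length-n prefixes suffice)."""
    n = len(pattern)
    prefixes = [tokens[i:i + n] for i in sa]  # sorted, since sa is sorted
    return bisect_right(prefixes, pattern) - bisect_left(prefixes, pattern)

# The example text T from the slide, tokenized on whitespace
# (boundary markers kept as their own tokens for readability).
T = ("# the old night keeper keeps the keep in the town # "
     "the night keeper keeps the keep in the night # $").split()

sa_fwd = build_suffix_array(T)        # stand-in for the CST over T
sa_rev = build_suffix_array(T[::-1])  # stand-in for the reversed CST

print(count_occurrences(T, sa_fwd, ["night", "keeper"]))  # → 2
```

Querying the reversed structure with a reversed pattern, e.g. `count_occurrences(T[::-1], sa_rev, ["keeper", "night"])`, counts the same bigram from the right, which is what the reversed CST is for.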