[ACL2018 Reading Group Material] Sharp Nearby, Fuzzy Far Away: How Neural Language Models Use Context
1. Sharp Nearby, Fuzzy Far Away:
How Neural Language Models Use Context
Urvashi Khandelwal, He He, Peng Qi, Dan Jurafsky
(Stanford University)
Hayahide Yamagishi (M2) @ ACL2018 reading group
2. Introduction
● Neural Language Models (NLMs) are said to make better use of long-range
context than n-gram language models
● This paper runs ablation tests to check whether NLMs actually capture
long-range context
● It also investigates how the Neural Cache Model affects the LM
Why I read this paper
● I wanted insights about context modeling
● I was tired of papers that open with "We propose a novel architecture …"
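The context ablation described above can be sketched as follows. This is not the paper's code: `toy_lm_prob` is a hypothetical stand-in for a trained NLM's conditional probability, chosen only so that more context yields higher probability. The idea is to score each token while hiding all but the `k` most recent context tokens, then compare perplexities across values of `k`.

```python
import math

def toy_lm_prob(context, token):
    # Hypothetical stand-in for a trained NLM's p(token | context):
    # here, probability simply grows with the amount of visible context.
    return min(0.9, 0.1 + 0.02 * len(context))

def perplexity_with_truncated_context(tokens, k):
    """Score each token using only its k most recent context tokens."""
    nll = 0.0
    for i, tok in enumerate(tokens):
        context = tokens[max(0, i - k):i]  # truncated history
        nll -= math.log(toy_lm_prob(context, tok))
    return math.exp(nll / len(tokens))

tokens = "the company reported a loss after taxation".split()
for k in (1, 5, 20):
    print(k, round(perplexity_with_truncated_context(tokens, k), 3))
```

With a real model, plotting perplexity against `k` shows how much of the distant context the model actually exploits; if perplexity stops improving beyond some `k`, context past that point is effectively unused.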
3. Language model refresher and the input example used here
● Compute the probability of each token given its preceding context
● Compute the Negative Log Likelihood
● Evaluate with Perplexity
... the company reported a loss after
taxation and minority interests of NUM
million irish borrowings under the
short-term parts of a credit agreement
</s> berlitz which is based in
princeton n.j. provides language
instruction and translation services
through more than NUM language centers
in NUM countries </s> in the past five
years more sim has set a fresh target
of $ NUM a share by the end of </s>
reaching that goal says robert t. UNK
applied 's chief financial officer than
NUM NUM of its sales have been outside
the u.s. </s> macmillan has owned
berlitz since NUM </s> in the first six
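The three bullets on this slide can be made concrete with a minimal sketch (not from the slides; the per-token probabilities are hypothetical values for illustration): given the model's probability for each token in a text, the average negative log likelihood and the perplexity follow directly.

```python
import math

# Toy per-token probabilities p(w_t | w_<t) from a language model
# (hypothetical values for illustration).
token_probs = [0.2, 0.05, 0.5, 0.1]

# Average negative log likelihood over the tokens.
nll = -sum(math.log(p) for p in token_probs) / len(token_probs)

# Perplexity is the exponential of the average NLL.
perplexity = math.exp(nll)
print(round(nll, 4), round(perplexity, 4))
```

Lower perplexity means the model assigns higher probability to the observed text, which is the evaluation criterion used throughout the paper.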