KDD2016 study session (勉強会): https://atnd.org/events/80771
3. The authors are with Google Research (some on the Brain Team)
• https://research.google.com/teams/brain/
First Author: Deep Learning Models, NLP (graph propagation)
Smart Reply : Automated Response Suggestion for Email
4. (Proposed method) Component diagram of SmartReply
Consists of four components:
① Response selection (Section 3)
② Response set generation (Section 4)
③ Diversity (Section 5)
④ Triggering model (Section 6)
7. (Proposed method) Component diagram of SmartReply
(Diagram annotations: for scalability, for utility, for response quality, for privacy)
Consists of four components:
① Response selection (Section 3): trains a model that extends the LSTM model
② Response set generation (Section 4): learns semi-supervised graph construction (intent) to build the Response Target Space
③ Diversity (Section 5): omitting redundant responses / enforcing negative ones
④ Triggering model (Section 6): an FFNN learns whether a message should receive a response
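The four components above can be wired together as a pipeline. The sketch below is purely illustrative: every function body is a toy stand-in (keyword overlap instead of the paper's LSTM scorer, a length check instead of the FFNN trigger), and `RESPONSE_SET` is a made-up example list.

```python
# Hypothetical sketch of the four SmartReply components wired together.
# All function bodies are stand-ins, not the paper's actual models.

RESPONSE_SET = ["Sure, I'll be there.", "Sorry, I can't make it.", "Thanks!"]

def triggering_model(message: str) -> bool:
    """(4) In the paper an FFNN decides whether to suggest replies at all;
    here: skip very short messages."""
    return len(message.split()) >= 3

def response_selection(message: str) -> list:
    """(1) In the paper an LSTM scores candidates from the response set (2);
    here: rank by naive word overlap."""
    words = set(message.lower().split())
    return sorted(RESPONSE_SET,
                  key=lambda r: -len(words & set(r.lower().split())))

def diversity_filter(ranked: list, k: int = 3) -> list:
    """(3) Placeholder for intent dedup / negative enforcement."""
    return ranked[:k]

def smart_reply(message: str) -> list:
    if not triggering_model(message):
        return []
    return diversity_filter(response_selection(message))
```

The control flow matches the slide's diagram: triggering gates the whole pipeline, selection ranks candidates from the precomputed response set, and diversity prunes the final suggestions.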
9. Overview of the LSTM Model
Improvements to the LSTM model itself:
preprocessing, addition of a recurrent projection layer, gradient clipping (with the value of 1)
(Figure: the time lag an RNN can learn vs. the time lag an LSTM can learn)
http://colah.github.io/posts/2015-08-Understanding-LSTMs/
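A minimal sketch of the gradient clipping mentioned above, shown here as global-norm clipping with a threshold of 1. This is an assumption: the slide only says "with the value of 1" and does not specify whether gradients are clipped by norm or by value.

```python
import numpy as np

def clip_gradients(grads, max_norm=1.0):
    """Scale all gradients down so their global L2 norm is at most max_norm.
    Illustrative helper; not the paper's implementation."""
    total_norm = float(np.sqrt(sum(np.sum(g * g) for g in grads)))
    if total_norm <= max_norm:
        return grads
    scale = max_norm / total_norm
    return [g * scale for g in grads]
```

Clipping caps the update magnitude during backpropagation through time, which keeps exploding gradients from destabilizing LSTM training.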
10. Sequence to Sequence Learning with Neural Networks
Recurrent Continuous Translation Models (EMNLP 13)
Similar to the EMNLP 13 model, adopts a two-level architecture: an LSTM that converts the input context into a vector v, and an LSTM that sequentially predicts the output.
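The two-level structure can be sketched as follows: one recurrent net reads the context into a vector v, a second one emits output tokens step by step conditioned on v. Plain tanh-RNN cells stand in for the paper's LSTMs, and all weights, sizes, and tokens are random stand-ins.

```python
import numpy as np

# Toy encoder-decoder: context -> vector v -> token-by-token output.
rng = np.random.default_rng(0)
H, V = 8, 5                      # hidden size, vocabulary size (made up)
W_enc = rng.normal(0, 0.1, (H, H + V))
W_dec = rng.normal(0, 0.1, (H, H + V))
W_out = rng.normal(0, 0.1, (V, H))

def one_hot(token: int) -> np.ndarray:
    x = np.zeros(V)
    x[token] = 1.0
    return x

def encode(tokens) -> np.ndarray:
    """First net: fold the whole input context into one vector v."""
    h = np.zeros(H)
    for t in tokens:
        h = np.tanh(W_enc @ np.concatenate([h, one_hot(t)]))
    return h

def decode(v: np.ndarray, steps: int, start_token: int = 0):
    """Second net: starting from v, greedily predict output tokens one by one."""
    h, token, out = v, start_token, []
    for _ in range(steps):
        h = np.tanh(W_dec @ np.concatenate([h, one_hot(token)]))
        token = int(np.argmax(W_out @ h))
        out.append(token)
    return out
```

The key design point the slide highlights is that the only bridge between input and output is the fixed-size vector v.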
11. ③ Suggestion Diversity
• Two hypotheses proposed for applying the system as a service:
• Response suggestions from the same category (intent) are not needed.
• Responses in the training data are mostly positive.
Because negative responses are not selected under likelihood maximization, the search is explicitly restricted to negative responses.
Ex. Sure, I'll be there. / Yes, I'll be there. / Yeah, I'll be there.
Only one message is shown per cluster obtained from the graph built during response set generation.
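The two diversity rules above can be sketched as a post-ranking filter: keep at most one suggestion per intent cluster, and force at least one negative reply into the final set. The intent labels and the `negative_intents` set below are illustrative stand-ins for the graph-based clusters built during response set generation.

```python
def diversify(ranked, intent_of, negative_intents, k=3):
    """Toy version of the diversity step: one pick per intent cluster,
    plus forced inclusion of a negative reply if none made the cut."""
    picks, seen = [], set()
    for r in ranked:
        intent = intent_of[r]
        if intent not in seen:
            picks.append(r)
            seen.add(intent)
        if len(picks) == k:
            break
    # Enforce a negative option, which likelihood maximization tends to omit.
    if not any(intent_of[r] in negative_intents for r in picks):
        neg = next((r for r in ranked if intent_of[r] in negative_intents), None)
        if neg is not None and picks:
            picks[-1] = neg
    return picks
```

With the slide's example, the three "I'll be there" variants collapse into a single suggestion, leaving room for a declining reply.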
12. EVALUATION AND RESULTS
• Data: messages from sampled accounts (238 million messages); 46% of the messages have responses
• Evaluation results:
• Perplexity: 31.4 (n-gram language model) → 17.0 (proposed)
• 45%, 35%, 20% (suggestion positions 1–3)
• Results: the Smart Reply service is in use, handling 10% of responses to all received mail
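For reference, perplexity (the metric quoted above: 31.4 for the n-gram baseline vs. 17.0 for the proposed model) is the exponential of the average negative log probability the model assigns to each held-out token; lower is better. A minimal computation:

```python
import math

def perplexity(token_probs):
    """Perplexity from the per-token probabilities a model assigns
    to a held-out sequence."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)
```

A model that assigns every token probability 1/4 has perplexity 4, i.e. it is as uncertain as a uniform choice among four options.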
13. References
• Sequence to Sequence Learning with Neural Networks (NIPS 14)
• Recurrent Continuous Translation Models (EMNLP 13)
• Long Short-Term Memory
• Neural Machine Translation by Jointly Learning to Align and Translate (ICLR 15)
• Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation
• Grammar as a Foreign Language