2nd NIPS Paper Reading Meetup: Learning to learn by gradient descent by gradient descent – Taku Tsuzuki
Slides from the 2nd NIPS Paper Reading Meetup, on "Learning to learn by gradient descent by gradient descent". The optimizer is expressed as an LSTM and is itself optimized by backpropagation. By updating each coordinate of the objective independently with an LSTM whose parameters are shared across coordinates, the number of optimizer parameters that must be learned is kept small.
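A minimal sketch of the coordinatewise idea, under heavy simplifying assumptions: a single shared scalar parameter `a` stands in for the paper's LSTM, the optimizee is a toy quadratic, and meta-training uses finite differences rather than backpropagation through the unrolled trajectory. The point it illustrates is that the same small update rule is applied to every coordinate's gradient, so the optimizer's parameter count does not grow with the optimizee's dimensionality.

```python
import numpy as np

def f(theta):
    # Toy optimizee: f(theta) = 0.5 * ||theta||^2, gradient is theta itself.
    return 0.5 * float(np.sum(theta ** 2))

def unroll(a, theta0, steps=20):
    """Apply the 'learned' rule theta <- theta + a * grad to every
    coordinate with the single shared parameter a, and return the
    meta-loss: the sum of f over the unrolled trajectory."""
    theta = theta0.copy()
    total = 0.0
    for _ in range(steps):
        theta = theta + a * theta   # grad of f is theta
        total += f(theta)
    return total

rng = np.random.default_rng(0)
theta0 = rng.standard_normal(10)    # 10-dim optimizee, 1 optimizer param

# Meta-train a by finite differences (the paper instead backpropagates
# through the unrolled optimization with an LSTM update rule).
a, eps, lr = 0.0, 1e-5, 1e-4
for _ in range(300):
    g = (unroll(a + eps, theta0) - unroll(a - eps, theta0)) / (2 * eps)
    a -= lr * g

print(a)  # negative: the rule learned to step against the gradient
```

The learned rule generalizes across coordinates because it sees only per-coordinate gradients, which is exactly what keeps the real LSTM optimizer compact.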
Approximate nearest neighbor methods and vector models – NYC ML meetup – Erik Bernhardsson
Nearest neighbors refers to something that is conceptually very simple. For a set of points in some space (possibly many dimensions), we want to find the closest k neighbors quickly.
This presentation covers a library called Annoy, built by me, that helps you do (approximate) nearest neighbor queries in high-dimensional spaces. We go through vector models, how to measure similarity, and why nearest neighbor queries are useful.