The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train.
Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
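The abstract names attention as the core mechanism but does not spell it out. As context, here is a minimal NumPy sketch of the scaled dot-product attention the paper defines, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V; the toy shapes and random inputs below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V (Vaswani et al., 2017)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # numerically stable row-wise softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output is a weighted sum of the values

# Toy example: 3 query positions, 4 key/value positions, d_k = d_v = 8.
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 8)
```

Because every query attends to every key in one matrix product, the whole sequence is processed in parallel, which is the source of the parallelizability the abstract claims over recurrent models.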
[Basic Concepts] An Introduction to Recurrent Neural Networks (RNN), by Donghyeon Kim
* Introduces the basic concepts of RNNs, which exploit the temporal structure of time-series data, and of the LSTM and GRU techniques developed to overcome their limitations (a minimal sketch of the RNN update rule follows after this list).
* Presented at A-GIST, an artificial intelligence study group at the Gwangju Institute of Science and Technology (GIST).
* Presentation video (YouTube, in Korean): https://youtu.be/Dt2SCbKbKvs
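As referenced in the list above, this is a minimal NumPy sketch of the vanilla RNN update, h_t = tanh(W_x x_t + W_h h_{t-1} + b), that such an introduction typically covers; the dimensions and random weights are illustrative assumptions, not material from the slides.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_x, W_h, b):
    """Vanilla RNN update: h_t = tanh(W_x x_t + W_h h_{t-1} + b)."""
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

# Toy dimensions: input size 3, hidden size 5, sequence length 4.
rng = np.random.default_rng(0)
W_x = rng.normal(size=(5, 3))
W_h = rng.normal(size=(5, 5))
b = np.zeros(5)

h = np.zeros(5)  # initial hidden state
for x_t in rng.normal(size=(4, 3)):  # iterate over the time steps of one sequence
    h = rnn_step(x_t, h, W_x, W_h, b)
print(h.shape)  # (5,)
```

The strictly sequential dependence of h_t on h_{t-1} is what limits parallelism and causes gradients to vanish over long sequences; the gated LSTM and GRU cells mentioned above are designed to mitigate the latter.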
AI Lawyer Development, Part 1: Attempts at Building an AI Lawyer
(Classifying statutory-interpretation documents)
An attempt at automatically classifying statutory-interpretation documents using Word2Vec (a sketch of one possible pipeline follows below).
(Tags: ai lawyer, artificial intelligence, gensim, law, nlp, nltk, word2vec, data analysis, machine learning)
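The post itself is not reproduced here, so the following is only a hedged sketch of one plausible pipeline matching the listed tools (gensim 4.x Word2Vec plus a scikit-learn classifier), not the author's actual code. The corpus, labels, and hyperparameters are entirely hypothetical.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

# Hypothetical tokenized documents and category labels standing in for the
# statutory-interpretation corpus described in the post.
docs = [["tax", "deduction", "statute"], ["contract", "breach", "damages"],
        ["tax", "exemption", "income"], ["contract", "termination", "notice"]]
labels = [0, 1, 0, 1]

# Train word embeddings on the corpus (gensim 4.x keyword names assumed).
w2v = Word2Vec(docs, vector_size=50, window=3, min_count=1, seed=0)

def doc_vector(tokens, model):
    """Represent a document as the mean of its in-vocabulary word vectors."""
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.vector_size)

X = np.stack([doc_vector(d, w2v) for d in docs])
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))
```

Averaging word vectors discards word order but gives a fixed-length document representation, which is a common baseline when applying Word2Vec to document classification.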