Machine Reading Comprehension QA:
Search, or NLP?
Name: Minjoon Seo
Affiliation: NAVER / Clova ML
QA = Question Answering
You're in big trouble. I heard Top Air went out of business!
*This actually happened
What? Really?
Why did they shut down?
No idea
Can I get my ticket refunded?
Help me out, NAVER!
What a useless friend.
They won't even answer the phone…
CONTENTS
1. QA that "finds" with search – 10 min
2. QA that "reads" with NLP – 10 min
3. Where search and NLP meet – 20 min
4. Q&A – 5 min
1. QA that "finds" with search
Top Air closure
Top Air closure
• Relevance of the content and title
• Documents read by users who made similar searches
• Credibility of the website
• Popularity of the document
• Information about the searcher
• …
We consider all of these together!
Top Air closure
• Relevance of the content and title
• Documents read by users who made similar searches
• Credibility of the website
• Popularity of the document
• Information about the searcher
• …
Word Matching
Retrieves documents that contain the searched words
• Ctrl-F
• Quite effective when applied to titles only
"Is it true that Top Air went out of business?"
"Top Air closed, for real?"
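As a toy illustration (a hypothetical mini-corpus, not NAVER's actual ranking), plain word matching only returns documents that contain every query token, which is why the two paraphrased queries above behave so differently:

```python
import re

# Toy word matching: return documents that contain every query token (Ctrl-F style).
docs = [
    "Top Air shuts down after decades in business",
    "Top Air closure: how to get a ticket refund",
    "Airline industry news roundup",
]

def tokens(text):
    return set(re.findall(r"\w+", text.lower()))

def word_match(query, docs):
    q = tokens(query)
    return [d for d in docs if q <= tokens(d)]

print(word_match("top air closure", docs))        # finds the second document
print(word_match("top air closed for real", docs))  # finds nothing: "closed" != "closure"
```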
TF-IDF
Term Frequency – Inverse Document Frequency
• Gives higher weight to important keywords (uncommon words).
• Essential once queries get longer and you search the document body.
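A minimal TF-IDF sketch (my own toy tokenizer and corpus, using the common idf = log(N/df) weighting), just to make the "rare words get more weight" point concrete:

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"\w+", text.lower())

def tfidf_scores(query, docs):
    """Score each document by the sum over query terms of tf * log(N / df)."""
    doc_tokens = [tokenize(d) for d in docs]
    N = len(docs)
    df = Counter(t for toks in doc_tokens for t in set(toks))
    q_tokens = tokenize(query)
    scores = []
    for toks in doc_tokens:
        tf = Counter(toks)
        scores.append(sum(tf[t] * math.log(N / df[t]) for t in q_tokens if t in df))
    return scores

docs = [
    "Top Air shuts down after decades in business",
    "Top Air closure: how to get a ticket refund",
    "Airline industry news roundup",
]
# "closure" is rare (df=1) so it dominates the score; "top"/"air" (df=2) contribute less.
print(tfidf_scores("is the top air closure real", docs))
```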
Okapi BM25
"Best Matching" (Robertson et al., 1970s)
• An "upgraded version" of TF-IDF
• Changes the TF part
Why does this formula keep
adding and subtracting things?
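For reference, the "changed TF part" looks like this; a sketch of one common BM25 variant (the +1 inside the idf log is Lucene-style smoothing, and k1 = 1.5, b = 0.75 are typical defaults, not values from the talk):

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, df, N, avgdl, k1=1.5, b=0.75):
    """Okapi BM25 score of one document for a query (one common parameterization)."""
    tf = Counter(doc_terms)
    dl = len(doc_terms)
    score = 0.0
    for t in query_terms:
        if t not in df:
            continue
        idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
        # The TF part saturates (diminishing returns) and is normalized by document length.
        score += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * dl / avgdl))
    return score
```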
LSA
Latent Semantic Analysis (Deerwester et al., 1988)
• Bag of words (sparse) → dense vector via SVD
• Attaches abstract "tags" to each word
• The abstract "tags" let us compare different words with each other.
• "shut down" ~ "go bankrupt" ~ "collapse"
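A tiny LSA sketch (a toy term-document count matrix I made up): a truncated SVD gives each term a dense vector, so terms that never match lexically can still land near each other if they co-occur with the same documents.

```python
import numpy as np

# Toy term-document count matrix (rows: terms, columns: documents).
terms = ["closure", "bankrupt", "refund", "football"]
X = np.array([
    [2, 0, 1, 0],   # closure
    [1, 1, 0, 0],   # bankrupt
    [0, 2, 1, 0],   # refund
    [0, 0, 0, 3],   # football
], dtype=float)

# Rank-2 LSA: U[:, :k] * S[:k] gives a dense vector ("abstract tags") per term.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
term_vecs = U[:, :k] * S[:k]

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cos(term_vecs[0], term_vecs[1]))  # closure vs. bankrupt: high (shared documents)
print(cos(term_vecs[0], term_vecs[3]))  # closure vs. football: ~0 (no shared documents)
```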
Can't you pinpoint exactly what I want to know?
Well, I can find documents for you…
Search != reading comprehension
The limits of search
It does not "read" the text
• It can pick up word-level (lexical) information, but…
• It cannot grasp syntactic or semantic context.
• It is hard to "pinpoint" an answer at anything finer than the document or paragraph level.
2. QA that "reads" with NLP
What we lazy people want
I read it for you: they shut down because of a
worsening business environment at home and abroad.
Why did they shut down?
Smart!
The first step of machine learning:
define the inputs and outputs
Why did Top Air shut down?
A worsening business environment at home and abroad
Inputs          Output
First, let's define the inputs and outputs
Generate or extract?
Generative models look attractive, but…
Problems with generative models
Service quality just isn't there.
• They produce off-the-wall answers far too often.
• Data quality is hard to control (e.g., MS MARCO¹).
¹ Nguyen et al. MS MARCO: A Human Generated Machine Reading Comprehension Dataset. 2016.
Evaluation is hard too.
• There is BLEU, but…
So, extractive it is
Neural extractive QA trends
in five minutes
7 Milestones in Extractive QA
1. Sentence-level QA (May 2015)
2. Phrase-level QA (May 2016)
3. Cross-attention (Nov 2016)
4. Self-attention (Mar 2017)
5. Transfer learning (Nov 2017)
6. Super-human level (Jan 2018)
7. What’s next? (Nov 2018)
Task definition
Models
7 Milestones in Extractive QA
1. Sentence-level QA (May 2015)
2. Phrase-level QA (May 2016)
3. Cross-attention (Nov 2016)
4. Self-attention (Mar 2017)
5. Transfer learning (Nov 2017)
6. Super-human level (Jan 2018)
7. What’s next? (Nov 2018)
1. Sentence-level QA
Second Epistle to the Corinthians The
Second Epistle to the Corinthians, often
referred to as Second Corinthians (and
written as 2 Corinthians), is the eighth book
of the New Testament of the Bible. Paul the
Apostle and “Timothy our brother” wrote
this epistle to “the church of God which is at
Corinth, with all the saints which are in all
Achaia”.
Who wrote second
Corinthians?
Yang et al. WikiQA: A Challenge Dataset for Open-domain
Question Answering. EMNLP 2015.
1. Sentence-level QA
Second Epistle to the Corinthians The
Second Epistle to the Corinthians, often
referred to as Second Corinthians (and
written as 2 Corinthians), is the eighth book
of the New Testament of the Bible. Paul the
Apostle and “Timothy our brother” wrote
this epistle to “the church of God which is
at Corinth, with all the saints which are in
all Achaia”.
Who wrote second
Corinthians?
Yang et al. WikiQA: A Challenge Dataset for Open-domain
Question Answering. EMNLP 2015.
But the answer is too long…
Can't we show just the answer itself?
2. Phrase-level QA
Second Epistle to the Corinthians The
Second Epistle to the Corinthians, often
referred to as Second Corinthians (and
written as 2 Corinthians), is the eighth book
of the New Testament of the Bible. Paul the
Apostle and “Timothy our brother” wrote
this epistle to “the church of God which is at
Corinth, with all the saints which are in all
Achaia”.
Who wrote second
Corinthians?
Rajpurkar et al. SQuAD: 100,000+ Questions for Machine
Comprehension of Text. EMNLP 2016
2. Phrase-level QA
Second Epistle to the Corinthians The
Second Epistle to the Corinthians, often
referred to as Second Corinthians (and
written as 2 Corinthians), is the eighth book
of the New Testament of the Bible. Paul the
Apostle and “Timothy our brother” wrote
this epistle to “the church of God which is at
Corinth, with all the saints which are in all
Achaia”.
Who wrote second
Corinthians?
Rajpurkar et al. SQuAD: 100,000+ Questions for Machine
Comprehension of Text. EMNLP 2016
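Phrase-level (extractive) QA is usually set up as predicting a start index and an end index over the context tokens. A minimal sketch of that output step with made-up scores (a real model produces start_scores / end_scores from the network):

```python
import numpy as np

context = ("Paul the Apostle and Timothy our brother wrote this epistle "
           "to the church of God which is at Corinth").split()

rng = np.random.default_rng(0)
start_scores = rng.normal(size=len(context))   # toy scores; a trained model outputs these
end_scores = rng.normal(size=len(context))

# Pick the best span (i, j) with i <= j and a maximum length, scored as start[i] + end[j].
best_span, best_score = None, -np.inf
for i in range(len(context)):
    for j in range(i, min(i + 5, len(context))):
        if start_scores[i] + end_scores[j] > best_score:
            best_span, best_score = (i, j), start_scores[i] + end_scores[j]

i, j = best_span
print(" ".join(context[i:j + 1]))   # the extracted answer phrase
```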
SQuAD: 100,000+
100+ models
in 2 years!
7 Milestones in Extractive QA
1. Sentence-level QA (May 2015)
2. Phrase-level QA (May 2016)
3. Cross-attention (Nov 2016)
4. Self-attention (Mar 2017)
5. Transfer learning (Nov 2017)
6. Super-human level (Jan 2018)
7. What’s next? (Nov 2018)
3. Cross-attention
Second Epistle to the Corinthians The
Second Epistle to the Corinthians, often
referred to as Second Corinthians (and
written as 2 Corinthians), is the eighth book
of the New Testament of the Bible. Paul the
Apostle and “Timothy our brother” wrote
this epistle to “the church of God which is at
Corinth, with all the saints which are in all
Achaia”.
Who wrote second
Corinthians?
Attend to the question while reading the document
Attend to the document while reading the question
3. Cross-attention
Second Epistle to the Corinthians The
Second Epistle to the Corinthians, often
referred to as Second Corinthians (and
written as 2 Corinthians), is the eighth book
of the New Testament of the Bible. Paul the
Apostle and “Timothy our brother” wrote
this epistle to “the church of God which is at
Corinth, with all the saints which are in all
Achaia”.
Who wrote second
Corinthians?
Seo et al. Bi-directional attention flow for machine
comprehension. ICLR 2017.
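A stripped-down sketch of one attention direction (context-to-question) over toy random vectors; BiDAF-style models add the reverse direction and a learned similarity function, so treat this only as the core mechanic:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
context = rng.normal(size=(10, d))   # 10 context token vectors (toy)
question = rng.normal(size=(4, d))   # 4 question token vectors (toy)

# Similarity between every (context token, question token) pair.
S = context @ question.T                      # (10, 4)

# Context-to-question attention: each context token attends over the question tokens.
A = np.exp(S - S.max(axis=1, keepdims=True))
A = A / A.sum(axis=1, keepdims=True)          # row-wise softmax
attended_q = A @ question                     # (10, d): a question summary per context token
```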
4. Self-attention
Second Epistle to the Corinthians The
Second Epistle to the Corinthians, often
referred to as Second Corinthians (and
written as 2 Corinthians), is the eighth book
of the New Testament of the Bible. Paul the
Apostle and “Timothy our brother” wrote
this epistle to “the church of God which is at
Corinth, with all the saints which are in all
Achaia”.
Who wrote second
Corinthians?
Clark & Gardner. Simple and effective multi-paragraph reading
comprehension. 2017
Attend to other parts of the document
while reading the document
4. Self-attention
Second Epistle to the Corinthians The
Second Epistle to the Corinthians, often
referred to as Second Corinthians (and
written as 2 Corinthians), is the eighth book
of the New Testament of the Bible. Paul the
Apostle and “Timothy our brother” wrote
this epistle to “the church of God which is at
Corinth, with all the saints which are in all
Achaia”.
Who wrote second
Corinthians?
Clark & Gardner. Simple and effective multi-paragraph reading
comprehension. 2017
Can't we make use of
unlabeled corpora?
5. Transfer learning
[Figure] A language model is pretrained on 3 billion unlabeled words and then transferred to the labeled QA data (2 million words).
Peters et al. Deep contextualized word representations.
NAACL 2018.
Can a computer beat humans?
6. Super-human level
• Ensemble
• NLP tools (POS, parser, etc.)
• Data augmentation
• A lot of layers
[Figure] Back-translation for data augmentation: "Hi, Nice to meet you!" → MT → "안녕, 반가워!" (Korean) → MT → "Hello, great to see you!"
Yu et al. QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension. ICLR 2018.
Today at 4:00, Track 4
5% above human performance.
7 Milestones in Extractive QA
1. Sentence-level QA (May 2015)
2. Phrase-level QA (May 2016)
3. Cross-attention (Nov 2016)
4. Self-attention (Mar 2017)
5. Transfer learning (Nov 2017)
6. Super-human level (Jan 2018)
7. What’s next? (Nov 2018)
QuAC (Conversational)
Choi et al., EMNLP 2018
HotpotQA (Reasoning)
Yang et al., EMNLP 2018
Accuracy is great, but how long does it take?
Hmm… with a GPU, about 0.1 seconds to read one document?
0.1 seconds
But there is no escaping the shackles of linear time.
Microsoft Research Asia. R-Net: machine reading comprehension
with self matching networks. 2017.
5.6 million documents
3 billion words
Accuracy is great, but how long does it take?
Hmm… with a GPU, about 0.1 seconds to read one document?
So… about 6 days.
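The "about 6 days" is just the linear-scan arithmetic over the corpus size from the slide:

```python
docs = 5_600_000        # documents in the corpus (from the slide)
sec_per_doc = 0.1       # GPU reading time per document
print(docs * sec_per_doc / 86_400)   # ≈ 6.5 days of reading, one document at a time
```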
3. Where search and NLP meet
A whole week for one question?
!#$@*%(@*@
OK then, I'll use search to find the documents first,
and read only those!
Solution 1: Find first, then read!
Chen et al. Reading Wikipedia to Answer Open-Domain
Questions. ACL 2017.
Wait, what if the search engine retrieves the wrong documents?
"Is it true that Top Air went out of business?"
On top of the error propagation,
it lacks elegance
*End-to-end addiction
One clean model?
Goal: from 5.8 days on a Titan Xp for 5.6 million documents
down to 0.1 seconds on a CPU:
5,000,000x shorter, on hardware roughly 4,000x slower
→ about 20,000,000,000x faster?!
Solution 2: Find and read at the same time?
How does search find documents so quickly?
Ah, the Green Factory library.
[Figure] Shelves organized by category: History, Art, IT, …, Fiction — sections A, B, …, K.
"When did the Korean War break out?"
Let's organize the vectors
[Figure] Every document is assigned a vector ([0.3, 0.5, …], [0.7, 0.1, …], [0.6, 0.2, …], …); the question "When did the Korean War break out?" is mapped to a vector too ([0.4, 0.4, …]), and similar vectors are grouped together so the query only needs to be compared against a small bucket ([0.5, 0.1, …], [0.3, 0.4, …], [0.4, 0.5, …], …).
Locality-Sensitive Hashing
Maximize collisions between similar items
MIPS
Locality-Sensitive Hashing (LSH)
• Symmetric: distance functions (Nearest Neighbor Search)
• L2
• L1
• Cosine
• Asymmetric: inner product (MIPS)
• Dot product
Query time O(dN^ρ log N) vs. brute-force O(dN),
where ρ = approximation factor (< 1)
Shrivastava and Li. Asymmetric LSH (ALSH) for Sublinear Time Maximum Inner Product Search (MIPS). NIPS 2014.
A sublinear-time approximation.
Very fast.
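A sketch of one classic LSH family, random hyperplanes for cosine similarity, over toy vectors (MIPS schemes such as ALSH add an asymmetric transform on top of this idea; this is not the talk's exact indexing scheme):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_bits = 64, 8
hyperplanes = rng.normal(size=(n_bits, dim))     # one random hyperplane per hash bit

def signature(v):
    """Similar vectors (in cosine) tend to fall on the same side of each hyperplane."""
    return tuple((hyperplanes @ v > 0).astype(int))

# Index 10,000 toy phrase vectors into buckets; a query only probes its own bucket.
phrases = rng.normal(size=(10_000, dim))
buckets = {}
for i, p in enumerate(phrases):
    buckets.setdefault(signature(p), []).append(i)

query = phrases[42] + 0.05 * rng.normal(size=dim)    # a slightly perturbed copy of phrase 42
candidates = buckets.get(signature(query), [])
# Phrase 42 likely lands in the same bucket (real systems use several hash tables to
# push this close to 1), and we scored roughly N/256 items instead of all 10,000.
print(42 in candidates, len(candidates))
```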
From documents to phrases?
[Figure] Every phrase in the corpus is pre-encoded as a vector:
Super Bowl 50 → x₁
American football game → x₂
National Football League → x₃
Denver Broncos → x₄
…
The question "Which NFL team represented the AFC at Super Bowl 50?" is encoded as a vector q and matched against them with MIPS.
Comparison with the existing approach, in equations
• Given a document d and a query q:
â = argmax_a P_θ(a | q, d)
P_θ(a | q, d) ∝ exp(F_θ(a, q, d))
P_θ(a | q, d) ∝ exp(G_θ(q) · H_θ(a, d))
where
Existing: F must be recomputed for every new question.
Proposed: H can be precomputed, and even indexed (hashed).
Decomposition
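A toy sketch of why the decomposition matters: the phrase side H_θ(a, d) is computed once offline, and answering then reduces to a single inner-product search. The vectors below are random stand-ins, and G is simply the label used above for the question-side encoder (the slide itself only names F and H):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 128

# Offline: encode every candidate phrase once (stand-ins for H_theta(a, d)) and keep them.
phrases = ["Super Bowl 50", "American football game",
           "National Football League", "Denver Broncos"]
H = rng.normal(size=(len(phrases), d))        # would come from the phrase encoder

# Online: encode only the question (stand-in for G_theta(q)) and take the argmax inner product.
def answer(question_vector):
    scores = H @ question_vector              # MIPS over the precomputed phrase vectors
    return phrases[int(np.argmax(scores))]

q_vec = H[3] + 0.01 * rng.normal(size=d)      # toy query vector near the "Denver Broncos" phrase
print(answer(q_vec))                          # -> "Denver Broncos"
```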
But the decomposition turns out to be very hard.
A new research problem:
Phrase-Indexed QA (PIQA)
One year of PIQA trial and error, in 5 minutes
1. The baseline wasn't that hard
2. Exploiting duality
3. Multimodality…
4. Sparsity: +9% in one shot!
5. Scalability: doable, but far from trivial…
Since June of last year
Baseline 1: LSTM
[Figure] The document "… water transforms into steam within a boiler …" and the question "What does water turn into when heated?" are each encoded with a Bi-LSTM. The document side produces one vector per candidate phrase (x₁ … x₇); the question side is collapsed into a single vector q by a weighted sum; the answer is the phrase whose vector is the nearest neighbor of q.
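A rough PyTorch sketch of this baseline with toy dimensions and random embeddings (the real model is trained end to end on proper word embeddings; this only shows the shapes and the nearest-neighbor readout):

```python
import torch
import torch.nn as nn

d_emb, d_hid = 32, 32
doc_len, q_len = 7, 6

doc_emb = torch.randn(1, doc_len, d_emb)     # "... water transforms into steam within a boiler ..."
q_emb = torch.randn(1, q_len, d_emb)         # "What does water turn into when heated?"

doc_lstm = nn.LSTM(d_emb, d_hid, bidirectional=True, batch_first=True)
q_lstm = nn.LSTM(d_emb, d_hid, bidirectional=True, batch_first=True)

phrase_vecs, _ = doc_lstm(doc_emb)           # (1, doc_len, 2*d_hid): one vector per token
q_states, _ = q_lstm(q_emb)                  # (1, q_len, 2*d_hid)

# Question vector: weighted sum of its hidden states (toy attention weights here).
attn = torch.softmax(q_states.mean(dim=-1), dim=1)        # (1, q_len)
q_vec = (attn.unsqueeze(-1) * q_states).sum(dim=1)        # (1, 2*d_hid)

# Answer = nearest phrase vector by inner product (at scale, this becomes MIPS/LSH).
scores = (phrase_vecs * q_vec.unsqueeze(1)).sum(dim=-1)   # (1, doc_len)
print(scores.argmax(dim=1))                                # index of the predicted answer phrase
```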
Baseline 2: Self-Attention
[Figure] Same document and question, but each representation is built with self-attention: the phrase vector for "steam" attends to "water" + "transform" + "boiler" (its type and clue words), and the question vector for "What" attends to "water" + "turn" + "heated"; the two sides are compared with a dot product.
SQuAD            F1 (%)   EM (%)
First Baseline   51.0     40.0
SOTA             91.2     85.4

PI-SQuAD         F1 (%)   EM (%)
LSTM             57.2     46.8
LSTM+SA          59.8     49.0
Seo et al. Phrase-indexed question answering: a new challenge
toward scalable document comprehension. EMNLP 2018.
Duality & Multimodality
Barack Obama was 44th president from 2009 to 2017.
A one-to-many relation:
a single phrase encoding H(a, d) has to match multiple questions:
Q1: Who was president in 2009?
Q2: Who was the 44th president?
Can't we exploit this duality?
Duality: Question Reconstruction
[Figure] The question "What does water turn into when heated?" is encoded with a Bi-LSTM and collapsed into a single vector by a weighted sum; that vector is used both for nearest-neighbor retrieval of the answer phrase and for regenerating the question with a seq2seq decoder (without attention).
SQuAD            F1 (%)   EM (%)
First Baseline   51.0     40.0
SOTA             91.2     85.4

PI-SQuAD         F1 (%)   EM (%)
LSTM             57.2     46.8
LSTM+SA          59.8     49.0

SQuAD            F1 (%)   EM (%)
First Baseline   51.0     40.0
SOTA             91.2     85.4

PI-SQuAD         F1 (%)   EM (%)
LSTM             57.2     46.8
LSTM+SA          59.8     49.0
LSTM+SA+Dual     63.2     52.0
How can we solve multimodality?
!"($|&; ()
Barack Obama was 44th president from 2009 to 2017.
Who was president in 2009? Who was the 44th president?
*"($|&; ()
Multimodality
A theoretical solution
Barack Obama was 44th president from 2009 to 2017.
Q1: Who was president in 2009?
Q2: Who was the 44th president?
Just use a latent variable?
With a latent variable z, the same pair (a, d) can be encoded in more than one way, H(a, z₁, d) and H(a, z₂, d), one per question mode.
So, here is what I tried (for a whole year!)
1. Multiple identical models (ensemble)
2. Orthogonality regularization
3. Sequential decoding
4. Latent variable from Gaussian distribution
5. Latent variable from surrounding words
So, here is what I tried (for a whole year!)
1. Multiple identical models (ensemble)
2. Orthogonality regularization
3. Sequential decoding
4. Latent variable from Gaussian distribution
5. Latent variable from surrounding words
Improves accuracy a bit,
but needs 30x+ more storage.
Doesn't work…
SQuAD            F1 (%)   EM (%)
First Baseline   51.0     40.0
SOTA             91.2     85.4

PI-SQuAD         F1 (%)   EM (%)
LSTM             57.2     46.8
LSTM+SA          59.8     49.0
LSTM+SA+Dual     63.2     52.0

SQuAD            F1 (%)   EM (%)
First Baseline   51.0     40.0
SOTA             91.2     85.4

PI-SQuAD              F1 (%)   EM (%)
LSTM                  57.2     46.8
LSTM+SA               59.8     49.0
LSTM+SA+Dual          63.2     52.0
LSTM+SA+Multi-mode    66.5     55.1
Dense + Sparse?
[Figure] The dense vector for "steam" encodes its type and clue words ("water" + "transform" + "boiler"); a sparse vector additionally puts weight directly on the surrounding lexical items (steam, boiler, water, transform). The two are combined into one phrase representation.
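The point of adding the sparse part is that the score stays a single inner product, so the same MIPS machinery still applies. A toy sketch (hypothetical vocabulary and random dense vectors, not the model's actual features):

```python
import numpy as np

vocab = {"water": 0, "transform": 1, "steam": 2, "boiler": 3, "turn": 4, "heated": 5}

def sparse_bow(words):
    """Sparse lexical vector: counts over the vocabulary (stored densely here for simplicity)."""
    v = np.zeros(len(vocab))
    for w in words:
        if w in vocab:
            v[vocab[w]] += 1.0
    return v

rng = np.random.default_rng(0)
dense_phrase = rng.normal(size=16)                            # dense "type + clue" vector for "steam"
dense_question = dense_phrase + 0.1 * rng.normal(size=16)     # toy: a nearby question vector

# Concatenating dense and sparse parts on both sides keeps the score a single inner product.
phrase_vec = np.concatenate([dense_phrase, sparse_bow(["water", "transform", "boiler"])])
question_vec = np.concatenate([dense_question, sparse_bow(["water", "turn", "heated"])])
print(phrase_vec @ question_vec)
```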
SQuAD            F1 (%)   EM (%)
First Baseline   51.0     40.0
SOTA             91.2     85.4

PI-SQuAD              F1 (%)   EM (%)
LSTM                  57.2     46.8
LSTM+SA               59.8     49.0
LSTM+SA+Dual          63.2     52.0
LSTM+SA+Multi-mode    66.5     55.1

SQuAD            F1 (%)   EM (%)
First Baseline   51.0     40.0
SOTA             91.2     85.4

PI-SQuAD                F1 (%)   EM (%)
LSTM                    57.2     46.8
LSTM+SA                 59.8     49.0
LSTM+SA+Dual            63.2     52.0
LSTM+SA+Multi-mode      66.5     55.1
LSTM+SA+Sparse+ELMo     69.3     58.7
To be on arXiv soon
Scalability consideration 1
• SQuAD only ever looks at a single document → it is largely a benchmark
• Not a real QA scenario.
• Is there any guarantee that end-to-end beats a pipeline?
More experiments needed!
Scalability consideration 2
• Words in English Wikipedia: 3 billion
• Phrases per word: 7 on average
• Vector dimension per phrase: 1024
• Float32: 4 bytes
About 90 TB (21 billion phrases)
Scalability consideration 2
• Words in English Wikipedia: 3 billion
• Phrases per word: 7 on average
• Vector dimension per phrase: 1024
• Float32: 4 bytes
Can be optimized
About 90 TB (21 billion phrases)
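The 90 TB figure is straightforward arithmetic over those four bullet points:

```python
words = 3_000_000_000      # English Wikipedia tokens
phrases_per_word = 7       # average candidate phrases per word
dim = 1024                 # vector dimension per phrase
bytes_per_float = 4        # float32

n_phrases = words * phrases_per_word             # 21 billion phrases
print(n_phrases * dim * bytes_per_float / 1e12)  # ≈ 86 TB of raw vectors (~90 TB)
```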
What meaning do the phrase embeddings capture?
[Figure] Each phrase is pre-encoded as a vector:
Super Bowl 50 → x₁
American football game → x₂
National Football League → x₃
Denver Broncos → x₄
…
The question "Which NFL team represented the AFC at Super Bowl 50?" is encoded as a vector q and matched with MIPS.
According to the American Library
Association, this makes …
… tasked with drafting a European
Charter of Human Rights, …
Proper nouns of a similar type (lexical)
The LM engines were successfully test-
fired and restarted, …
Steam turbines were extensively
applied …
Similar semantic and syntactic structure
… primarily accomplished through the
ductile stretching and thinning.
… directly derived from the
homogeneity or symmetry of space …
Similar syntactic structure
So what's the bottom line?
A beautiful harmony of search and NLP
There's still a long way to go, so let's research it
and think it through together!
But I need something that works right now!
Fine, I'll do both… T_T
tl;dr:
Representing world knowledge
in an elegant way
Thank you
Q & A
Please leave your questions on Slido.
sli.do
#deview
TRACK 2
We are Hiring!
Domains
• Speech Recognition
• Speech Synthesis
• Computer Vision
• Natural Language
• NSML / AutoML
• Finance AI
• App/Web Services
Positions
• Research Scientist
• Research Engineer
• SW Engineer
• Android / iOS Engineer
• Backend Engineer
• Data Engineer
• UI/UX Engineer
• Internship Member
• Global Residency
clova-jobs@navercorp.com
