Noah A Smith - 2017 - Invited Keynote: Squashing Computational Linguistics

Association for Computational Linguistics
Squashing
Computational Linguistics
Noah A. Smith
Paul G. Allen School of Computer Science & Engineering
University of Washington
Seattle, USA
@nlpnoah
Research supported in part by: NSF, DARPA DEFT, DARPA CWC, Facebook, Google, Samsung, University of Washington.
data
Applications of NLP in 2017
• Conversation, IE, MT, QA, summarization, text categorization
• Machine-in-the-loop tools for (human) authors
Elizabeth Clark: collaborate with an NLP model through an “exquisite corpse” storytelling game
Chenhao Tan: revise your message with help from NLP (tremoloop.com)
• Analysis tools for measuring social phenomena
Lucy Lin: sensationalism in science news (bit.ly/sensational-news … bookmark this survey!)
Dallas Card: track ideas, propositions, frames in discourse over time
data → ? → Squash
Squash Networks
• Parameterized differentiable functions composed out of simpler
parameterized differentiable functions, some nonlinear
From Jack (2010), Dynamic System Modeling and Control, goo.gl/pGvJPS
*Yes, rectified linear units (ReLUs) are only half-squash; hat-tip Martha White.
• Estimate parameters using Leibniz (1676), i.e., the chain rule
From existentialcomics.com
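The two bullets above can be made concrete in a few lines. Below is a minimal sketch (my own, not from the slides): a two-layer “squash network” built from parameterized differentiable functions with tanh nonlinearities, with one chain-rule gradient computed by hand. All shapes and values are illustrative assumptions.

```python
import numpy as np

# A "squash network": parameterized differentiable functions composed out of
# simpler parameterized differentiable functions, some nonlinear (here tanh).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

def squash_net(x):
    h = np.tanh(W1 @ x + b1)     # inner squash
    return np.tanh(W2 @ h + b2)  # composed with another squash

# "Estimate parameters using Leibniz (1676)": the chain rule gives gradients.
def grad_w2(x, y):
    h = np.tanh(W1 @ x + b1)
    yhat = np.tanh(W2 @ h + b2)
    # d/dW2 of 0.5*(yhat - y)^2, via the chain rule; tanh'(z) = 1 - tanh(z)^2
    delta = (yhat - y) * (1 - yhat ** 2)
    return np.outer(delta, h)

x = np.array([0.1, -0.2, 0.3])
print(squash_net(x).shape)  # (1,)
```

In practice the same chain-rule bookkeeping is what automatic differentiation performs for arbitrarily deep compositions.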
Who wants an all-squash diet?
wow
very Cucurbita
much festive
many dropout
Linguistic Structure Prediction
• input (text)
• input representation: clusters, lexicons, embeddings, …
• part representations: segments/spans, arcs, graph fragments, …
• output (structure): sequences, trees, graphs, …
• training objective: probabilistic, cost-aware, …
• “gold” output
Linguistic Structure Prediction
Design decisions throughout the pipeline (input representation, part representations, training objective, output):
• error definitions & weights
• regularization
• annotation conventions & theory
• constraints & independence assumptions
• data selection
• the “task” itself
Inductive Bias
• What does your learning algorithm assume?
• How will it choose among good predictive functions?
See also: No Free Lunch Theorem (Mitchell, 1980; Wolpert, 1996)
data
bias
Three New Models
• Parsing sentences into predicate-argument structures
  • Fillmore frames
  • Semantic dependency graphs
• Language models that dynamically track entities
When Democrats wonder why there is so much resentment of Clinton, they don’t need to look much
further than the Big Lie about philandering that Stephanopoulos, Carville helped to put over in 1992.
Original story on Slate.com: http://goo.gl/Hp89tD
Frame-Semantic Analysis
When Democrats wonder why there is so much resentment of Clinton, they don’t need to look much
further than the Big Lie about philandering that Stephanopoulos, Carville helped to put over in 1992.
FrameNet: https://framenet.icsi.berkeley.edu
Frame-Semantic Analysis
When Democrats wonder why there is so much resentment of Clinton, they don’t need to look much
further than the Big Lie about philandering that Stephanopoulos, Carville helped to put over in 1992.
cognizer: Democrats
topic: why … Clinton
explanation: why
degree: so much
content: of Clinton
experiencer: ?
helper: Stephanopoulos … Carville
goal: to put over
time: in 1992
benefited_party: ?
FrameNet: https://framenet.icsi.berkeley.edu
landmark event: Democrats … Clinton
trajector event: they… 1992
entity: so … Clinton
degree: so
mass: resentment of Clinton
time: When … Clinton
required situation: they … to look … 1992
time: When … Clinton
cognizer agent: they
ground: much … 1992
sought entity: ?
topic: about … 1992
trajector event: the Big Lie … over
landmark period: 1992
FrameNet: https://framenet.icsi.berkeley.edu
brood, consider, contemplate, deliberate, …
appraise, assess, evaluate, …
commit to memory, learn, memorize, …
agonize, fret, fuss, lose sleep, …
translate, bracket, categorize, class, classify
Segmental RNN
(Lingpeng Kong, Chris Dyer, N.A.S., ICLR 2016)
• input sequence → biLSTM (contextualized word vectors)
• parts: segments up to length d, scored by another biLSTM, with labels
• training objective: log loss
• output: covering sequence of nonoverlapping segments, recovered in O(Ldn); see Sarawagi & Cohen, 2004
When Democrats wonder[Cogitation] why there is so much resentment of Clinton, they don’t need …
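The O(Ldn) recovery step above is the semi-Markov Viterbi recurrence of Sarawagi & Cohen (2004). A minimal sketch, with a stand-in `score` function where the biLSTM segment scorer would be (labels, d, and the toy scores are illustrative assumptions):

```python
# Semi-Markov (segmental) Viterbi: best covering, nonoverlapping segmentation.
# L labels x d segment lengths x n positions -> O(Ldn).
def segmental_viterbi(n, labels, d, score):
    best = [0.0] + [float("-inf")] * n   # best[i]: best score covering words [0, i)
    back = [None] * (n + 1)              # back[i]: winning (start, label) ending at i
    for i in range(1, n + 1):
        for j in range(max(0, i - d), i):          # candidate segment [j, i)
            for lab in labels:
                s = best[j] + score(j, i, lab)
                if s > best[i]:
                    best[i], back[i] = s, (j, lab)
    segments, i = [], n
    while i > 0:                                   # recover the argmax segmentation
        j, lab = back[i]
        segments.append((j, i, lab))
        i = j
    return best[n], segments[::-1]

# toy scorer: prefer length-2 "ARG" segments
score = lambda j, i, lab: (1.0 if (i - j == 2 and lab == "ARG") else 0.0)
print(segmental_viterbi(4, ["ARG", "∅"], d=2, score=score))
# (2.0, [(0, 2, 'ARG'), (2, 4, 'ARG')])
```

The same lattice, with log-sum-exp in place of max, gives the partition function needed for the log-loss objective.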
Inference via dynamic programming in O(Ldn)
(lattice over labeled segments, cognizer / topic / ∅, anchored on wonder[Cogitation], for the example sentence)
Open-SESAME
(Swabha Swayamdipta, Sam Thomson, Chris Dyer, N.A.S., arXiv:1706.09528)
• input: words + frame → biLSTM (contextualized word vectors)
• parts: segments with role labels, scored by another biLSTM
• training objective: recall-oriented softmax margin (Gimpel et al., 2010)
• output: labeled argument spans
When Democrats wonder[Cogitation] why there is so much resentment of Clinton, they don’t need …
cognizer: Democrats
topic: why there is so much resentment of Clinton
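The recall-oriented softmax margin can be sketched as follows. The notation here is mine, not the slides’: $y^{*}$ is the gold structure and $\alpha > 1$ up-weights missed gold parts (false negatives), biasing the parser toward recall.

```latex
% Softmax margin (Gimpel et al., 2010): log loss with a cost-augmented partition.
\min_{\theta} \; -\,\mathrm{score}_{\theta}(x, y^{*})
  + \log \sum_{y} \exp\bigl( \mathrm{score}_{\theta}(x, y) + \mathrm{cost}(y, y^{*}) \bigr),
\qquad
\mathrm{cost}(y, y^{*}) \;=\; \alpha \,\bigl| y^{*} \setminus y \bigr| \;+\; \bigl| y \setminus y^{*} \bigr|,
\quad \alpha > 1 .
```

Setting $\alpha = 1$ recovers a symmetric cost; $\alpha > 1$ trades precision for recall at training time.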
syntax features?
main task: find and label semantic argument segments (cognizer, topic, ∅)
scaffold task: is each span a syntactic constituent? (yes / no, from the Penn Treebank; Marcus et al., 1993)
When Democrats wonder[Cogitation] why there is so much resentment of Clinton, they don’t need …
Multitask Representation Learning
(Caruana, 1997)
Two copies of the pipeline (input representation, part representations, training objective, “gold” output), sharing the input representation:
• main task: find and label semantic arguments
• scaffold task: predict syntactic constituents
• training datasets need not overlap
• output structures need not be consistent
[bar chart] F1 on frame-semantic parsing (frames & arguments), FrameNet 1.5 test set (y-axis 65 to 72). Single models (open-source systems): SEMAFOR 1.0 (Das et al., 2014), SEMAFOR 2.0 (Kshirsagar et al., 2015), Open-SESAME (ours), … with syntactic scaffold, … with syntax features. Ensembles: Framat (Roth, 2016), FitzGerald et al. (2015).
Bias?
In Open-SESAME: words + frame → biLSTM (contextualized word vectors); parts: segments with role labels, scored by another biLSTM; training objective: recall-oriented softmax margin (Gimpel et al., 2010); output: labeled argument spans.
Bias enters where segments get scores, through the syntactic scaffold, and through the recall-oriented cost.
Semantic Dependency Graphs
(DELPH-IN minimal recursion semantics-derived representation; “DM”)
Oepen et al. (SemEval 2014; 2015), see also http://sdp.delph-in.net
When Democrats wonder why there is so much resentment of Clinton, they don’t need …
score(wonder →arg1 Democrats) = tanh(C [h_wonder; h_Democrats] + b) · v_arg1
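That arc score, a squash of the concatenated head and dependent vectors, dotted with a label embedding, can be sketched directly. `C`, `b`, the label vectors, and all dimensions below are illustrative assumptions; in the model the `h_*` vectors would come from the biLSTM.

```python
import numpy as np

# score(head ->label dep) = tanh(C [h_head; h_dep] + b) . v_label
rng = np.random.default_rng(1)
dim = 8
C = rng.normal(size=(dim, 2 * dim))                 # bilexical combination matrix
b = np.zeros(dim)
label_vecs = {"arg1": rng.normal(size=dim),          # one vector per arc label
              "arg2": rng.normal(size=dim)}

def arc_score(h_head, h_dep, label):
    hidden = np.tanh(C @ np.concatenate([h_head, h_dep]) + b)
    return float(hidden @ label_vecs[label])

h_wonder, h_democrats = rng.normal(size=dim), rng.normal(size=dim)
print(arc_score(h_wonder, h_democrats, "arg1"))
```

Every candidate (head, dependent, label) triple gets such a score; decoding then searches for the best-scoring graph subject to the formalism’s constraints.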
When Democrats wonder why there is so much resentment of Clinton, they don’t need …
(labeled bilexical arcs over the sentence: arg1, arg2, and comp_so arcs among When, Democrats, wonder, why, there, is, so, much, resentment, of, and Clinton)
Inference via AD3
(alternating directions dual decomposition; Martins et al., 2014)
Neurboparser
(Hao Peng, Sam Thomson, N.A.S., ACL 2017)
• input: words → biLSTM (contextualized word vectors)
• parts: labeled bilexical dependencies
• training objective: structured hinge loss
• output: labeled semantic dependency graph with constraints
When Democrats wonder why there is so much resentment of Clinton, they don’t need …
Three Formalisms, Three Separate Parsers
(the same arc, Democrats ← wonder, labeled arg1 in formalism 1 (DM), arg1 in formalism 2 (PAS), and act in formalism 3 (PSD))
Shared Input Representations (Daumé, 2007): one sentence representation shared across all three formalisms
Cross-Task Parts: candidate arcs scored with parts shared across formalisms
Both: Shared Input Representations & Cross-Task Parts
Multitask Learning: Many Possibilities
• Shared input representations, parts? Which parts?
• Joint decoding?
• Overlapping training data?
• Scaffold tasks?
(one pipeline per formalism, DM / PAS / PSD, each with its own “gold” output)
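The first of those possibilities, shared input representations, reduces to a simple architectural pattern: one shared encoder feeding one scoring head per formalism. A minimal sketch (the matrix stand-in for the biLSTM, the head parameterization, and all names are my assumptions, not the paper’s exact architecture):

```python
import numpy as np

# One shared encoder, three formalism-specific scoring heads.
rng = np.random.default_rng(2)
dim = 8
shared_encoder = rng.normal(size=(dim, dim))          # stands in for the biLSTM
heads = {f: rng.normal(size=dim) for f in ("DM", "PAS", "PSD")}

def score_all(x):
    h = np.tanh(shared_encoder @ x)                   # shared representation
    return {f: float(h @ w) for f, w in heads.items()}  # per-formalism scores

scores = score_all(rng.normal(size=dim))
print(sorted(scores))  # ['DM', 'PAS', 'PSD']
```

Gradients from all three tasks flow into `shared_encoder`, which is how each formalism’s training data regularizes the others even when the datasets and output structures differ.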
[bar chart] F1 averaged on three semantic dependency parsing formalisms, SemEval 2015 test set, WSJ (in-domain) and Brown (out-of-domain), y-axis 80 to 88: Du et al., 2015; Almeida & Martins, 2015 (no syntax); Almeida & Martins, 2015 (syntax); Neurboparser (ours); … with shared input representations and cross-task parts.
[bar chart] Neurboparser F1 on three semantic dependency parsing formalisms (DM, PAS, PSD), SemEval 2015 test set, WSJ (in-domain) and Brown (out-of-domain), y-axis 50 to 100. Good enough?
Bias?
In Neurboparser: words → biLSTM (contextualized word vectors); parts: labeled bilexical dependencies; training objective: structured hinge loss; output: labeled semantic dependency graph with constraints.
Bias enters through cross-formalism sharing.
Text ≠ Sentences
larger context
Generative Language Models
p(W | history): history → next word
When Democrats wonder why there is so much resentment of
Entity Language Model
(Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi, N.A.S., EMNLP 2017)
When Democrats wonder why there is so much resentment of Clinton, they don’t …
• “Clinton”: 1. new entity with new vector; 2. mention word will be “Clinton”
• “they”: 1. coreferent of entity 1 (previously known as “Democrats”); 2. mention word will be “they”; 3. embedding of entity 1 will be updated
• “don’t”: 1. not part of an entity mention; 2. “don’t”
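The bookkeeping behind those three steps can be sketched as a tiny entity store: each mention either starts a new entity or updates an existing entity’s vector. The update rule, dimensions, and class design below are illustrative assumptions, not the paper’s exact parameterization.

```python
import numpy as np

class EntityStore:
    """Toy dynamic entity tracking: one evolving vector per entity."""
    def __init__(self, dim=8, seed=0):
        self.dim, self.rng = dim, np.random.default_rng(seed)
        self.entities = []                    # one vector per tracked entity

    def new_entity(self):
        # e.g., first mention of "Clinton": new entity with a new vector
        self.entities.append(self.rng.normal(size=self.dim))
        return len(self.entities) - 1

    def corefer(self, idx, mention_vec, gate=0.5):
        # e.g., "they" -> entity 1: blend in the new mention so the entity's
        # embedding reflects its whole mention history
        self.entities[idx] = gate * self.entities[idx] + (1 - gate) * mention_vec
        return self.entities[idx]

store = EntityStore()
e1 = store.new_entity()                       # entity 1: "Democrats"
e2 = store.new_entity()                       # entity 2: "Clinton"
store.corefer(e1, np.ones(8))                 # "they" updates entity 1's vector
print(len(store.entities))  # 2
```

In the full model these decisions (new entity vs. coreference vs. no mention) are probabilistic and conditioned on the language model’s hidden state, so entity state and word prediction inform each other.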
[bar chart] Perplexity on the CoNLL 2012 test set (y-axis 128 to 140): 5-gram LM, RNN LM, entity language model.
[bar chart] CoNLL 2012 coreference evaluation (CoNLL average, MUC, B3, CEAF F1; y-axis 50 to 75): Martschat and Strube (2015) vs. reranked with entity LM.
[bar chart] Accuracy on InScript (y-axis 25 to 75): always new, shallow features, Modi et al. (2017), entity LM, human.
Bias?
p(W | history), with entities tracked in the history.
Bias in the Future?
• Linguistic scaffold tasks.
• Language is by and about people.
• NLP is needed when texts are costly to read.
• Polyglot learning.
data
bias
Thank you!
1 of 90

Recommended

Joakim Nivre - 2017 - Presidential Address ACL 2017: Challenges for ACL by
Joakim Nivre - 2017 - Presidential Address ACL 2017: Challenges for ACLJoakim Nivre - 2017 - Presidential Address ACL 2017: Challenges for ACL
Joakim Nivre - 2017 - Presidential Address ACL 2017: Challenges for ACLAssociation for Computational Linguistics
4.7K views82 slides
ACL 2017 Opening Session by
ACL 2017 Opening Session ACL 2017 Opening Session
ACL 2017 Opening Session Association for Computational Linguistics
3.3K views25 slides
375 cc3 a_lindabeebe by
375 cc3 a_lindabeebe375 cc3 a_lindabeebe
375 cc3 a_lindabeebeSociety for Scholarly Publishing
290 views70 slides
2106 JWLLP by
2106 JWLLP2106 JWLLP
2106 JWLLPWarNik Chow
33 views30 slides
Gender diversity on management and supervisory boards by
Gender diversity on management and supervisory boardsGender diversity on management and supervisory boards
Gender diversity on management and supervisory boardsGRAPE
117 views65 slides
Pump em up aston by
Pump em up   astonPump em up   aston
Pump em up astoncarldjcross
216 views32 slides

More Related Content

Similar to Noah A Smith - 2017 - Invited Keynote: Squashing Computational Linguistics

Tim Estes - Generating dynamic social networks from large scale unstructured ... by
Tim Estes - Generating dynamic social networks from large scale unstructured ...Tim Estes - Generating dynamic social networks from large scale unstructured ...
Tim Estes - Generating dynamic social networks from large scale unstructured ...Digital Reasoning
954 views20 slides
Talking to your Data: Natural Language Interfaces for a schema-less world (Ke... by
Talking to your Data: Natural Language Interfaces for a schema-less world (Ke...Talking to your Data: Natural Language Interfaces for a schema-less world (Ke...
Talking to your Data: Natural Language Interfaces for a schema-less world (Ke...Andre Freitas
1.3K views77 slides
Data Science Workshop - day 1 by
Data Science Workshop - day 1Data Science Workshop - day 1
Data Science Workshop - day 1Aseel Addawood
304 views49 slides
Building AI Applications using Knowledge Graphs by
Building AI Applications using Knowledge GraphsBuilding AI Applications using Knowledge Graphs
Building AI Applications using Knowledge GraphsAndre Freitas
1.5K views291 slides
Ppt Pak kasim by
Ppt Pak kasimPpt Pak kasim
Ppt Pak kasimdarmigita
143 views50 slides

Similar to Noah A Smith - 2017 - Invited Keynote: Squashing Computational Linguistics (20)

Tim Estes - Generating dynamic social networks from large scale unstructured ... by Digital Reasoning
Tim Estes - Generating dynamic social networks from large scale unstructured ...Tim Estes - Generating dynamic social networks from large scale unstructured ...
Tim Estes - Generating dynamic social networks from large scale unstructured ...
Digital Reasoning954 views
Talking to your Data: Natural Language Interfaces for a schema-less world (Ke... by Andre Freitas
Talking to your Data: Natural Language Interfaces for a schema-less world (Ke...Talking to your Data: Natural Language Interfaces for a schema-less world (Ke...
Talking to your Data: Natural Language Interfaces for a schema-less world (Ke...
Andre Freitas1.3K views
Data Science Workshop - day 1 by Aseel Addawood
Data Science Workshop - day 1Data Science Workshop - day 1
Data Science Workshop - day 1
Aseel Addawood304 views
Building AI Applications using Knowledge Graphs by Andre Freitas
Building AI Applications using Knowledge GraphsBuilding AI Applications using Knowledge Graphs
Building AI Applications using Knowledge Graphs
Andre Freitas1.5K views
Ppt Pak kasim by darmigita
Ppt Pak kasimPpt Pak kasim
Ppt Pak kasim
darmigita143 views
What data scientists really do, according to 50 data scientists by Hugo Bowne-Anderson
What data scientists really do, according to 50 data scientistsWhat data scientists really do, according to 50 data scientists
What data scientists really do, according to 50 data scientists
Hugo Bowne-Anderson1.9K views
Why Watson Won: A cognitive perspective by James Hendler
Why Watson Won: A cognitive perspectiveWhy Watson Won: A cognitive perspective
Why Watson Won: A cognitive perspective
James Hendler4.8K views
Machine learning technology for publishing industry / Buchmesse 2018 by Maxim Kublitski
Machine learning technology for publishing industry / Buchmesse 2018Machine learning technology for publishing industry / Buchmesse 2018
Machine learning technology for publishing industry / Buchmesse 2018
Maxim Kublitski112 views
Silk data - machine learning by SaltoDigitale
Silk data - machine learning Silk data - machine learning
Silk data - machine learning
SaltoDigitale53 views
BSSML17 - Topic Models by BigML, Inc
BSSML17 - Topic ModelsBSSML17 - Topic Models
BSSML17 - Topic Models
BigML, Inc126 views
Data Science Workshop - day 2 by Aseel Addawood
Data Science Workshop - day 2Data Science Workshop - day 2
Data Science Workshop - day 2
Aseel Addawood176 views
Frontiers of Computational Journalism week 2 - Text Analysis by Jonathan Stray
Frontiers of Computational Journalism week 2 - Text AnalysisFrontiers of Computational Journalism week 2 - Text Analysis
Frontiers of Computational Journalism week 2 - Text Analysis
Jonathan Stray527 views
Session 01 designing and scoping a data science project by bodaceacat
Session 01 designing and scoping a data science projectSession 01 designing and scoping a data science project
Session 01 designing and scoping a data science project
bodaceacat562 views
Session 01 designing and scoping a data science project by Sara-Jayne Terp
Session 01 designing and scoping a data science projectSession 01 designing and scoping a data science project
Session 01 designing and scoping a data science project
Sara-Jayne Terp990 views
Modern text mining – understanding a million comments in 60 minutes by ZOLLHOF - Tech Incubator
Modern text mining – understanding a million comments in 60 minutesModern text mining – understanding a million comments in 60 minutes
Modern text mining – understanding a million comments in 60 minutes

More from Association for Computational Linguistics

Muis - 2016 - Weak Semi-Markov CRFs for NP Chunking in Informal Text by
Muis - 2016 - Weak Semi-Markov CRFs for NP Chunking in Informal TextMuis - 2016 - Weak Semi-Markov CRFs for NP Chunking in Informal Text
Muis - 2016 - Weak Semi-Markov CRFs for NP Chunking in Informal TextAssociation for Computational Linguistics
644 views28 slides
Castro - 2018 - A High Coverage Method for Automatic False Friends Detection ... by
Castro - 2018 - A High Coverage Method for Automatic False Friends Detection ...Castro - 2018 - A High Coverage Method for Automatic False Friends Detection ...
Castro - 2018 - A High Coverage Method for Automatic False Friends Detection ...Association for Computational Linguistics
319 views23 slides
Castro - 2018 - A Crowd-Annotated Spanish Corpus for Humour Analysis by
Castro - 2018 - A Crowd-Annotated Spanish Corpus for Humour AnalysisCastro - 2018 - A Crowd-Annotated Spanish Corpus for Humour Analysis
Castro - 2018 - A Crowd-Annotated Spanish Corpus for Humour AnalysisAssociation for Computational Linguistics
317 views33 slides
Muthu Kumar Chandrasekaran - 2018 - Countering Position Bias in Instructor In... by
Muthu Kumar Chandrasekaran - 2018 - Countering Position Bias in Instructor In...Muthu Kumar Chandrasekaran - 2018 - Countering Position Bias in Instructor In...
Muthu Kumar Chandrasekaran - 2018 - Countering Position Bias in Instructor In...Association for Computational Linguistics
445 views1 slide
Daniel Gildea - 2018 - The ACL Anthology: Current State and Future Directions by
Daniel Gildea - 2018 - The ACL Anthology: Current State and Future DirectionsDaniel Gildea - 2018 - The ACL Anthology: Current State and Future Directions
Daniel Gildea - 2018 - The ACL Anthology: Current State and Future DirectionsAssociation for Computational Linguistics
160 views9 slides
Elior Sulem - 2018 - Semantic Structural Evaluation for Text Simplification by
Elior Sulem - 2018 - Semantic Structural Evaluation for Text SimplificationElior Sulem - 2018 - Semantic Structural Evaluation for Text Simplification
Elior Sulem - 2018 - Semantic Structural Evaluation for Text SimplificationAssociation for Computational Linguistics
366 views30 slides

More from Association for Computational Linguistics(20)

Recently uploaded

11.30.23A Poverty and Inequality in America.pptx by
11.30.23A Poverty and Inequality in America.pptx11.30.23A Poverty and Inequality in America.pptx
11.30.23A Poverty and Inequality in America.pptxmary850239
130 views18 slides
Berry country.pdf by
Berry country.pdfBerry country.pdf
Berry country.pdfMariaKenney3
75 views12 slides
Thanksgiving!.pdf by
Thanksgiving!.pdfThanksgiving!.pdf
Thanksgiving!.pdfEnglishCEIPdeSigeiro
500 views17 slides
BÀI TẬP BỔ TRỢ TIẾNG ANH 11 THEO ĐƠN VỊ BÀI HỌC - CẢ NĂM - CÓ FILE NGHE (FRIE... by
BÀI TẬP BỔ TRỢ TIẾNG ANH 11 THEO ĐƠN VỊ BÀI HỌC - CẢ NĂM - CÓ FILE NGHE (FRIE...BÀI TẬP BỔ TRỢ TIẾNG ANH 11 THEO ĐƠN VỊ BÀI HỌC - CẢ NĂM - CÓ FILE NGHE (FRIE...
BÀI TẬP BỔ TRỢ TIẾNG ANH 11 THEO ĐƠN VỊ BÀI HỌC - CẢ NĂM - CÓ FILE NGHE (FRIE...Nguyen Thanh Tu Collection
89 views91 slides
Six Sigma Concept by Sahil Srivastava.pptx by
Six Sigma Concept by Sahil Srivastava.pptxSix Sigma Concept by Sahil Srivastava.pptx
Six Sigma Concept by Sahil Srivastava.pptxSahil Srivastava
44 views11 slides
UNIDAD 3 6º C.MEDIO.pptx by
UNIDAD 3 6º C.MEDIO.pptxUNIDAD 3 6º C.MEDIO.pptx
UNIDAD 3 6º C.MEDIO.pptxMarcosRodriguezUcedo
146 views32 slides

Recently uploaded(20)

11.30.23A Poverty and Inequality in America.pptx by mary850239
11.30.23A Poverty and Inequality in America.pptx11.30.23A Poverty and Inequality in America.pptx
11.30.23A Poverty and Inequality in America.pptx
mary850239130 views
BÀI TẬP BỔ TRỢ TIẾNG ANH 11 THEO ĐƠN VỊ BÀI HỌC - CẢ NĂM - CÓ FILE NGHE (FRIE... by Nguyen Thanh Tu Collection
BÀI TẬP BỔ TRỢ TIẾNG ANH 11 THEO ĐƠN VỊ BÀI HỌC - CẢ NĂM - CÓ FILE NGHE (FRIE...BÀI TẬP BỔ TRỢ TIẾNG ANH 11 THEO ĐƠN VỊ BÀI HỌC - CẢ NĂM - CÓ FILE NGHE (FRIE...
BÀI TẬP BỔ TRỢ TIẾNG ANH 11 THEO ĐƠN VỊ BÀI HỌC - CẢ NĂM - CÓ FILE NGHE (FRIE...
Six Sigma Concept by Sahil Srivastava.pptx by Sahil Srivastava
Six Sigma Concept by Sahil Srivastava.pptxSix Sigma Concept by Sahil Srivastava.pptx
Six Sigma Concept by Sahil Srivastava.pptx
Sahil Srivastava44 views
INT-244 Topic 6b Confucianism by S Meyer
INT-244 Topic 6b ConfucianismINT-244 Topic 6b Confucianism
INT-244 Topic 6b Confucianism
S Meyer45 views
Nelson_RecordStore.pdf by BrynNelson5
Nelson_RecordStore.pdfNelson_RecordStore.pdf
Nelson_RecordStore.pdf
BrynNelson546 views
Monthly Information Session for MV Asterix (November) by Esquimalt MFRC
Monthly Information Session for MV Asterix (November)Monthly Information Session for MV Asterix (November)
Monthly Information Session for MV Asterix (November)
Esquimalt MFRC107 views
Retail Store Scavenger Hunt.pptx by jmurphy154
Retail Store Scavenger Hunt.pptxRetail Store Scavenger Hunt.pptx
Retail Store Scavenger Hunt.pptx
jmurphy15452 views
NodeJS and ExpressJS.pdf by ArthyR3
NodeJS and ExpressJS.pdfNodeJS and ExpressJS.pdf
NodeJS and ExpressJS.pdf
ArthyR348 views
EILO EXCURSION PROGRAMME 2023 by info33492
EILO EXCURSION PROGRAMME 2023EILO EXCURSION PROGRAMME 2023
EILO EXCURSION PROGRAMME 2023
info33492202 views
Narration lesson plan by TARIQ KHAN
Narration lesson planNarration lesson plan
Narration lesson plan
TARIQ KHAN75 views
Education of marginalized and socially disadvantages segments.pptx by GarimaBhati5
Education of marginalized and socially disadvantages segments.pptxEducation of marginalized and socially disadvantages segments.pptx
Education of marginalized and socially disadvantages segments.pptx
GarimaBhati543 views
Payment Integration using Braintree Connector | MuleSoft Mysore Meetup #37 by MysoreMuleSoftMeetup
Payment Integration using Braintree Connector | MuleSoft Mysore Meetup #37Payment Integration using Braintree Connector | MuleSoft Mysore Meetup #37
Payment Integration using Braintree Connector | MuleSoft Mysore Meetup #37

Noah A Smith - 2017 - Invited Keynote: Squashing Computational Linguistics

  • 1. Squashing Computational Linguistics Noah A. Smith Paul G. Allen School of Computer Science & Engineering University of Washington Seattle, USA @nlpnoah Research supported in part by: NSF, DARPA DEFT, DARPA CWC, Facebook, Google, Samsung, University of Washington.
  • 3. Applications of NLP in 2017 • Conversation, IE, MT, QA, summarization, text categorization
  • 4. Applications of NLP in 2017 • Conversation, IE, MT, QA, summarization, text categorization • Machine-in-the-loop tools for (human) authors Elizabeth Clark Collaborate with an NLP model through an “exquisite corpse” storytelling game Chenhao Tan Revise your message with help from NLP tremoloop.com
  • 5. Applications of NLP in 2017 • Conversation, IE, MT, QA, summarization, text categorization • Machine-in-the-loop tools for (human) authors • Analysis tools for measuring social phenomena Lucy Lin Sensationalism in science news bit.ly/sensational-news … bookmark this survey! Dallas Card Track ideas, propositions, frames in discourse over time
  • 8. Squash Networks • Parameterized differentiable functions composed out of simpler parameterized differentiable functions, some nonlinear
  • 9. Squash Networks • Parameterized differentiable functions composed out of simpler parameterized differentiable functions, some nonlinear From Jack (2010), Dynamic System Modeling and Control, goo.gl/pGvJPS *Yes, rectified linear units (relus) are only half-squash; hat-tip Martha White.
  • 10. Squash Networks • Parameterized differentiable functions composed out of simpler parameterized differentiable functions, some nonlinear • Estimate parameters using Leibniz (1676) From existentialcomics.com
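The two bullets above compress a lot: the "squash" functions are bounded nonlinearities such as the logistic sigmoid and tanh (relu, as noted, only half-squashes), and "Leibniz (1676)" is the chain rule that backpropagation applies to estimate parameters. A minimal pure-Python sketch of both ideas, with made-up weights (not from the talk):

```python
import math

def sigmoid(z):            # full squash: output in (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def relu(z):               # "half-squash": bounded below only
    return max(0.0, z)

# One-hidden-unit network: y = w2 * tanh(w1 * x + b1) + b2
def forward(x, w1, b1, w2, b2):
    return w2 * math.tanh(w1 * x + b1) + b2

# The chain rule gives the gradient of y w.r.t. w1:
# dy/dw1 = w2 * (1 - tanh(w1*x + b1)^2) * x
def grad_w1(x, w1, b1, w2, b2):
    h = math.tanh(w1 * x + b1)
    return w2 * (1.0 - h * h) * x

# Sanity-check the analytic gradient against a central finite difference.
x, w1, b1, w2, b2 = 0.5, 0.3, -0.1, 1.2, 0.05
eps = 1e-6
fd = (forward(x, w1 + eps, b1, w2, b2) - forward(x, w1 - eps, b1, w2, b2)) / (2 * eps)
analytic = grad_w1(x, w1, b1, w2, b2)
```

The finite-difference check is the standard way to verify a hand-derived gradient before trusting it for training.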
  • 11. Who wants an all-squash diet? wow very Cucurbita much festive many dropout
  • 13. Linguistic Structure Prediction input (text) output (structure)
  • 14. Linguistic Structure Prediction input (text) output (structure) sequences, trees, graphs, …
  • 15. “gold” output Linguistic Structure Prediction input (text) output (structure) sequences, trees, graphs, …
  • 16. “gold” output Linguistic Structure Prediction input representation input (text) output (structure) sequences, trees, graphs, …
  • 17. “gold” output Linguistic Structure Prediction input representation input (text) output (structure) clusters, lexicons, embeddings, … sequences, trees, graphs, …
  • 18. “gold” output Linguistic Structure Prediction input representation input (text) output (structure) training objective clusters, lexicons, embeddings, … sequences, trees, graphs, …
  • 19. “gold” output Linguistic Structure Prediction input representation input (text) output (structure) training objective clusters, lexicons, embeddings, … sequences, trees, graphs, … probabilistic, cost-aware, …
  • 20. “gold” output Linguistic Structure Prediction input representation input (text) part representations output (structure) training objective clusters, lexicons, embeddings, … sequences, trees, graphs, … probabilistic, cost-aware, …
  • 21. “gold” output Linguistic Structure Prediction input representation input (text) part representations output (structure) training objective clusters, lexicons, embeddings, … segments/spans, arcs, graph fragments, … sequences, trees, graphs, … probabilistic, cost-aware, …
  • 22. “gold” output Linguistic Structure Prediction input representation input (text) part representations output (structure) training objective
  • 23. “gold” output Linguistic Structure Prediction input representation input (text) part representations output (structure) training objective error definitions & weights regularization annotation conventions & theory constraints & independence assumptions data selection “task”
  • 24. Inductive Bias • What does your learning algorithm assume? • How will it choose among good predictive functions? See also: No Free Lunch Theorem (Mitchell, 1980; Wolpert, 1996)
  • 26. Three New Models • Parsing sentences into predicate-argument structures • Fillmore frames • Semantic dependency graphs • Language models that dynamically track entities
  • 27. When Democrats wonder why there is so much resentment of Clinton, they don’t need to look much further than the Big Lie about philandering that Stephanopoulos, Carville helped to put over in 1992. Original story on Slate.com: http://goo.gl/Hp89tD
  • 28. Frame-Semantic Analysis When Democrats wonder why there is so much resentment of Clinton, they don’t need to look much further than the Big Lie about philandering that Stephanopoulos, Carville helped to put over in 1992. FrameNet: https://framenet.icsi.berkeley.edu
  • 29. Frame-Semantic Analysis When Democrats wonder why there is so much resentment of Clinton, they don’t need to look much further than the Big Lie about philandering that Stephanopoulos, Carville helped to put over in 1992. FrameNet: https://framenet.icsi.berkeley.edu
  • 30. Frame-Semantic Analysis When Democrats wonder why there is so much resentment of Clinton, they don’t need to look much further than the Big Lie about philandering that Stephanopoulos, Carville helped to put over in 1992. cognizer: Democrats topic: why … Clinton FrameNet: https://framenet.icsi.berkeley.edu
  • 31. Frame-Semantic Analysis When Democrats wonder why there is so much resentment of Clinton, they don’t need to look much further than the Big Lie about philandering that Stephanopoulos, Carville helped to put over in 1992. cognizer: Democrats topic: why … Clinton explanation: why degree: so much content: of Clinton experiencer: ? helper: Stephanopoulos … Carville goal: to put over time: in 1992 benefited_party: ? FrameNet: https://framenet.icsi.berkeley.edu landmark event: Democrats … Clinton trajector event: they… 1992 entity: so … Clinton degree: so mass: resentment of Clinton time: When … Clinton required situation: they … to look … 1992 time: When … Clinton cognizer agent: they ground: much … 1992 sought entity: ? topic: about … 1992 trajector event: the Big Lie … over landmark period: 1992
  • 32. FrameNet: https://framenet.icsi.berkeley.edu brood, consider, contemplate, deliberate, … appraise, assess, evaluate, … commit to memory, learn, memorize, … agonize, fret, fuss, lose sleep, … translate bracket, categorize, class, classify
  • 33. Frame-Semantic Analysis When Democrats wonder why there is so much resentment of Clinton, they don’t need to look much further than the Big Lie about philandering that Stephanopoulos, Carville helped to put over in 1992. cognizer: Democrats topic: why … Clinton explanation: why degree: so much content: of Clinton experiencer: ? helper: Stephanopoulos … Carville goal: to put over time: in 1992 benefited_party: ? FrameNet: https://framenet.icsi.berkeley.edu landmark event: Democrats … Clinton trajector event: they… 1992 entity: so … Clinton degree: so mass: resentment of Clinton time: When … Clinton required situation: they … to look … 1992 time: When … Clinton cognizer agent: they ground: much … 1992 sought entity: ? topic: about … 1992 trajector event: the Big Lie … over landmark period: 1992
  • 34. When Democrats wonder[Cogitation] why there is so much resentment of Clinton, they don’t need … words + frame
  • 35. When Democrats wonder[Cogitation] why there is so much resentment of Clinton, they don’t need … biLSTM (contextualized word vectors) words + frame
  • 36. parts: segments up to length d scored by another biLSTM, with labels When Democrats wonder[Cogitation] why there is so much resentment of Clinton, they don’t need … … biLSTM (contextualized word vectors) words + frame
  • 37. parts: segments up to length d scored by another biLSTM, with labels When Democrats wonder[Cogitation] why there is so much resentment of Clinton, they don’t need … … biLSTM (contextualized word vectors) words + frame output: covering sequence of nonoverlapping segments
  • 38. Segmental RNN (Lingpeng Kong, Chris Dyer, N.A.S., ICLR 2016) biLSTM (contextualized word vectors) input sequence parts: segments up to length d scored by another biLSTM, with labels training objective: log loss output: covering sequence of nonoverlapping segments, recovered in O(Ldn); see Sarawagi & Cohen, 2004 …
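The O(Ldn) decoding named on slide 38 is the semi-Markov Viterbi recursion of Sarawagi & Cohen (2004): the best segmentation of the first i tokens extends the best segmentation of the first j tokens with one labeled segment of length at most d. A toy pure-Python sketch, with a stand-in scoring function (the real model scores segments with a biLSTM; `toy_score` and its gold segments are made up):

```python
# Semi-Markov Viterbi: best labeled segmentation of n tokens into
# non-overlapping covering segments of length <= d, in O(L * d * n).
def decode(n, d, labels, score):
    best = [0.0] + [float("-inf")] * n   # best[i]: top score over first i tokens
    back = [None] * (n + 1)              # backpointers for recovery
    for i in range(1, n + 1):
        for length in range(1, min(d, i) + 1):
            j = i - length               # candidate segment covers tokens j..i-1
            for y in labels:
                s = best[j] + score(j, i, y)
                if s > best[i]:
                    best[i] = s
                    back[i] = (j, y)
    # Walk backpointers to recover the segmentation.
    segs, i = [], n
    while i > 0:
        j, y = back[i]
        segs.append((j, i, y))
        i = j
    return best[n], segs[::-1]

# Toy scorer: reward labeling tokens 0..1 as "A" and tokens 2..4 as "B".
def toy_score(j, i, y):
    gold = {(0, 2, "A"), (2, 5, "B")}
    return 1.0 if (j, i, y) in gold else -0.5

total, segments = decode(5, 3, ["A", "B"], toy_score)
```

With a biLSTM segment scorer in place of `toy_score`, this is exactly the segmental RNN's exact decoder.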
  • 39. When Democrats wonder[Cogitation] why there is so much resentment of Clinton, they don’t need …
  • 40. When Democrats wonder[Cogitation] why there is so much resentment of Clinton, they don’t need …
  • 41. When Democrats wonder[Cogitation] why there is so much resentment of Clinton, they don’t need … cognizer topic ∅ ∅ ∅
  • 42. When Democrats wonder[Cogitation] why there is so much resentment of Clinton, they don’t need … cognizer topic ∅ wonder[Cogitation] ∅ wonder[Cogitation] wonder[Cogitation] wonder[Cogitation] ∅
  • 43. When Democrats wonder[Cogitation] why there is so much resentment of Clinton, they don’t need … cognizer topic ∅ wonder[Cogitation] ∅ wonder[Cogitation] wonder[Cogitation] wonder[Cogitation] ∅
  • 44. When Democrats wonder[Cogitation] why there is so much resentment of Clinton, they don’t need … cognizer topic ∅ wonder[Cogitation] ∅ wonder[Cogitation] wonder[Cogitation] wonder[Cogitation] ∅
  • 45. When Democrats wonder[Cogitation] why there is so much resentment of Clinton, they don’t need … cognizer topic ∅ wonder[Cogitation] ∅ wonder[Cogitation] wonder[Cogitation] wonder[Cogitation] ∅
  • 46. Inference via dynamic programming in O(Ldn) When Democrats wonder[Cogitation] why there is so much resentment of Clinton, they don’t need … cognizer topic ∅ wonder[Cogitation] ∅ wonder[Cogitation] wonder[Cogitation] wonder[Cogitation] ∅
  • 47. Open-SESAME (Swabha Swayamdipta, Sam Thomson, Chris Dyer, N.A.S., arXiv:1706.09528) biLSTM (contextualized word vectors) words + frame parts: segments with role labels, scored by another biLSTM training objective: recall-oriented softmax margin (Gimpel et al., 2010) output: labeled argument spans When Democrats wonder[Cogitation] why there is so much resentment of Clinton, they don’t need … cognizer: Democrats topic: why there is so much resentment of Clinton …
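The recall-oriented softmax margin (Gimpel et al., 2010) places a cost term inside the log-sum-exp of an ordinary log loss, so hypotheses that commit recall errors (missed gold argument spans) are pushed down harder than ones that merely overpredict. A schematic pure-Python sketch over an explicit candidate list (real training sums over all segmentations; the spans and cost weights below are made up):

```python
import math

def softmax_margin_loss(candidates, gold):
    # candidates: {hypothesis: (model_score, cost_vs_gold)}, cost(gold) = 0.
    # loss = -score(gold) + log sum_y exp(score(y) + cost(y))  >=  0
    log_z = math.log(sum(math.exp(s + c) for s, c in candidates.values()))
    return -candidates[gold][0] + log_z

def recall_cost(pred_spans, gold_spans, alpha=2.0, beta=0.5):
    # Recall-oriented: a missed gold span (alpha) hurts more than a
    # spurious predicted span (beta). These weights are illustrative.
    missed = len(gold_spans - pred_spans)
    spurious = len(pred_spans - gold_spans)
    return alpha * missed + beta * spurious

# Spans are (start, end, role) triples; hypotheses are span sets.
gold = frozenset({(0, 2, "cognizer"), (3, 11, "topic")})
hyps = [
    gold,                                          # exact match
    frozenset({(0, 2, "cognizer")}),               # misses the "topic" span
    gold | frozenset({(12, 13, "time")}),          # one extra (spurious) span
]
cands = {h: (1.0, recall_cost(h, gold)) for h in hyps}   # equal model scores
loss = softmax_margin_loss(cands, gold)
```

With equal model scores, the hypothesis missing a gold span carries the largest augmented term, so training gradients most strongly suppress recall errors.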
  • 48. When Democrats wonder[Cogitation] why there is so much resentment of Clinton, they don’t need … cognizer topic ∅ wonder[Cogitation] ∅ wonder[Cogitation] wonder[Cogitation] wonder[Cogitation] ∅
  • 49. When Democrats wonder[Cogitation] why there is so much resentment of Clinton, they don’t need … cognizer topic ∅ wonder[Cogitation] ∅ wonder[Cogitation] wonder[Cogitation] wonder[Cogitation] ∅ syntax features?
  • 50. When Democrats wonder[Cogitation] why there is so much resentment of Clinton, they don’t need … yes no yes yes no Penn Treebank (Marcus et al., 1993)
  • 51. When Democrats wonder[Cogitation] why there is so much resentment of Clinton, they don’t need … cognizer topic ∅ wonder[Cogitation] ∅ wonder[Cogitation] wonder[Cogitation] wonder[Cogitation] ∅ main task scaffold task yes no yes yes no
  • 52. Multitask Representation Learning (Caruana, 1997) “gold” output input representation input (text) part representations output (structure) training objective “gold” output input representation input (text) part representations output (structure) training objective main task: find and label semantic arguments scaffold task: predict syntactic constituents shared training datasets need not overlap output structures need not be consistent
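Slide 52's setup can be written as a single joint objective over shared parameters; in schematic notation (mine, not from the slides):

```latex
\mathcal{L}(\theta_{\mathrm{sh}}, \theta_{\mathrm{main}}, \theta_{\mathrm{scaf}})
  \;=\; \sum_{(x,\,y) \in \mathcal{D}_{\mathrm{main}}}
        \ell_{\mathrm{main}}\!\left(y,\; f(x;\, \theta_{\mathrm{sh}}, \theta_{\mathrm{main}})\right)
  \;+\; \delta \sum_{(x',\,z) \in \mathcal{D}_{\mathrm{scaf}}}
        \ell_{\mathrm{scaf}}\!\left(z,\; g(x';\, \theta_{\mathrm{sh}}, \theta_{\mathrm{scaf}})\right)
```

Here $\delta$ weights the scaffold task (predicting syntactic constituents) against the main task (finding and labeling semantic arguments). Only the shared parameters $\theta_{\mathrm{sh}}$ couple the two terms, which is why the training datasets need not overlap and the output structures need not be consistent.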
  • 53. [Bar chart: F1 on frame-semantic parsing (frames & arguments), FrameNet 1.5 test set, scale 65–72. Systems: SEMAFOR 1.0 (Das et al., 2014), SEMAFOR 2.0 (Kshirsagar et al., 2015), Open-SESAME (ours), … with syntactic scaffold, … with syntax features, Framat (Roth, 2016), FitzGerald et al. (2015) single model and ensemble; open-source systems marked.]
  • 55. biLSTM (contextualized word vectors) words + frame training objective: recall-oriented softmax margin (Gimpel et al., 2010) output: labeled argument spans When Democrats wonder[Cogitation] why there is so much resentment of Clinton, they don’t need … cognizer: Democrats topic: why there is so much resentment of Clinton … segments get scores Bias? parts: segments with role labels, scored by another biLSTM
  • 56. biLSTM (contextualized word vectors) words + frame training objective: recall-oriented softmax margin (Gimpel et al., 2010) output: labeled argument spans When Democrats wonder[Cogitation] why there is so much resentment of Clinton, they don’t need … cognizer: Democrats topic: why there is so much resentment of Clinton … segments get scores syntactic scaffold Bias? parts: segments with role labels, scored by another biLSTM
  • 57. biLSTM (contextualized word vectors) words + frame training objective: recall-oriented softmax margin (Gimpel et al., 2010) output: labeled argument spans When Democrats wonder[Cogitation] why there is so much resentment of Clinton, they don’t need … cognizer: Democrats topic: why there is so much resentment of Clinton … segments get scores recall-oriented cost syntactic scaffold Bias? parts: segments with role labels, scored by another biLSTM
  • 58. Semantic Dependency Graphs (DELPH-IN minimal recursion semantics-derived representation; “DM”) Oepen et al. (SemEval 2014; 2015), see also http://sdp.delph-in.net
  • 59. When Democrats wonder why there is so much resentment of Clinton, they don’t need … Democrats wonder arg1 tanh(C[h_wonder; h_Democrats] + b) · arg1
  • 60. When Democrats wonder why there is so much resentment of Clinton, they don’t need … When Democrats Democrats wonder why is is resentment so much much resentment resentment of of Clinton arg2 arg1 arg1 comp_so arg1 arg1 arg1 arg1
  • 61. When Democrats wonder why there is so much resentment of Clinton, they don’t need … When Democrats wonder why Democrats wonder why is is resentment so much much resentment resentment of of Clinton arg2 arg1 arg1 comp_so arg1 arg1 arg1 arg1 arg1 wonder there arg1 wonder is arg1
  • 62. When Democrats wonder why there is so much resentment of Clinton, they don’t need … Inference via AD3 (alternating directions dual decomposition; Martins et al., 2014) When Democrats wonder why Democrats wonder why is is resentment so much much resentment resentment of of Clinton arg2 arg1 arg1 comp_so arg1 arg1 arg1 arg1 arg1 wonder there arg1 wonder is arg1
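The arc score on slide 59, tanh(C[h_wonder; h_Democrats] + b) · arg1, concatenates the biLSTM vectors of head and modifier, applies an affine-plus-tanh squash, and dots the result with a vector for the label arg1. A tiny pure-Python rendering, with made-up dimensions and weights:

```python
import math

def arc_score(h_head, h_mod, C, b, label_vec):
    # Concatenate head and modifier representations: [h_head; h_mod]
    v = h_head + h_mod
    # Affine transform plus tanh squash: tanh(C v + b)
    hidden = [math.tanh(sum(C[i][j] * v[j] for j in range(len(v))) + b[i])
              for i in range(len(C))]
    # Dot with the label vector to get a scalar score for this labeled arc.
    return sum(h * l for h, l in zip(hidden, label_vec))

# Tiny illustrative example: 2-dim word vectors, 2-dim hidden layer.
h_wonder, h_democrats = [1.0, 0.0], [0.0, 1.0]
C = [[1.0, 0.0, 0.0, 0.0],
     [0.0, 0.0, 0.0, 1.0]]
b = [0.0, 0.0]
w_arg1 = [1.0, 1.0]
score = arc_score(h_wonder, h_democrats, C, b, w_arg1)
```

One such score per candidate (head, modifier, label) triple feeds the global inference (AD3 in the parser above) that assembles a consistent graph.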
  • 63. Neurboparser (Hao Peng, Sam Thomson, N.A.S., ACL 2017) biLSTM (contextualized word vectors) words parts: labeled bilexical dependencies training objective: structured hinge loss output: labeled semantic dependency graph with constraints When Democrats wonder why there is so much resentment of Clinton, they don’t need … …
  • 64. When Democrats wonder why there is so much resentment of Clinton, they don’t need … Three Formalisms, Three Separate Parsers Democrats wonder arg1 formalism 3 (PSD) formalism 2 (PAS) formalism 1 (DM) Democrats wonder arg1 Democrats wonder act
  • 65. When Democrats wonder why there is so much resentment of Clinton, they don’t need … Shared Input Representations (Daumé, 2007) Democrats wonder arg1 shared across all formalism 3 (PSD) formalism 2 (PAS) formalism 1 (DM) Democrats wonder arg1 Democrats wonder act
  • 66. When Democrats wonder why there is so much resentment of Clinton, they don’t need … Cross-Task Parts Democrats wonder arg1 Democrats wonder arg1 Democrats wonder act shared across all
  • 67. When Democrats wonder why there is so much resentment of Clinton, they don’t need … Both: Shared Input Representations & Cross-Task Parts Democrats wonder arg1 shared across all formalism 3 (PSD) formalism 2 (PAS) formalism 1 (DM) Democrats wonder arg1 Democrats wonder act
  • 68. Multitask Learning: Many Possibilities • Shared input representations, parts? Which parts? • Joint decoding? • Overlapping training data? • Scaffold tasks? “gold” output “gold” output“gold” output formalism 1 (DM) formalism 2 (PAS) formalism 3 (PSD)
  • 69. [Bar chart: F1 averaged over three semantic dependency parsing formalisms, SemEval 2015 test set, scale 80–88, on WSJ (in-domain) and Brown (out-of-domain). Systems: Du et al., 2015; Almeida & Martins, 2015 (no syntax); Almeida & Martins, 2015 (syntax); Neurboparser (ours); … with shared input representations and cross-task parts.]
  • 70. [Bar chart: Neurboparser F1 on each of the three formalisms (DM, PAS, PSD), SemEval 2015 test set, scale 50–100, WSJ (in-domain) vs. Brown (out-of-domain). Good enough?]
  • 71. biLSTM (contextualized word vectors) words training objective: structured hinge loss output: labeled semantic dependency graph with constraints When Democrats wonder why there is so much resentment of Clinton, they don’t need … … Bias? cross-formalism sharing parts: labeled bilexical dependencies
  • 72. Text
  • 74. Generative Language Models history next word p(W | history)
  • 75. Generative Language Models history next word p(W | history)
  • 76. When Democrats wonder why there is so much resentment of Entity Language Model (Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi, N.A.S., EMNLP 2017)
  • 77. Entity Language Model (Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi, N.A.S., EMNLP 2017) When Democrats wonder why there is so much resentment of entity 1
  • 78. Entity Language Model (Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi, N.A.S., EMNLP 2017) When Democrats wonder why there is so much resentment of Clinton, 1. new entity with new vector 2. mention word will be “Clinton” entity 1 entity 2
  • 79. Entity Language Model (Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi, N.A.S., EMNLP 2017) When Democrats wonder why there is so much resentment of Clinton, entity 1 entity 2
  • 80. Entity Language Model (Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi, N.A.S., EMNLP 2017) When Democrats wonder why there is so much resentment of Clinton, they 1. coreferent of entity 1 (previously known as “Democrats”) 2. mention word will be “they” 3. embedding of entity 1 will be updated entity 1 entity 2
  • 81. Entity Language Model (Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi, N.A.S., EMNLP 2017) When Democrats wonder why there is so much resentment of Clinton, they entity 1 entity 2
  • 82. Entity Language Model (Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi, N.A.S., EMNLP 2017) When Democrats wonder why there is so much resentment of Clinton, they don’t 1. not part of an entity mention 2. “don’t” entity 1 entity 2
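The bookkeeping narrated across slides 76–82 (create a fresh entity vector at a first mention; at a coreferent mention, update the existing entity's embedding with the new mention) can be sketched as follows. The gated-interpolation update and unit-length normalization here are schematic stand-ins for the learned functions in the EMNLP 2017 paper:

```python
import math

def interpolate(old, mention, gate):
    # Schematic gated update: e_new ∝ gate * e_old + (1 - gate) * h_mention,
    # renormalized to unit length. Not the paper's exact parameterization.
    mixed = [gate * o + (1.0 - gate) * m for o, m in zip(old, mention)]
    norm = math.sqrt(sum(x * x for x in mixed)) or 1.0
    return [x / norm for x in mixed]

class EntityState:
    def __init__(self):
        self.entities = []                 # one embedding per tracked entity

    def new_entity(self, mention_vec):     # e.g. first mention: "Democrats"
        self.entities.append(list(mention_vec))
        return len(self.entities) - 1

    def corefer(self, idx, mention_vec, gate=0.7):
        # e.g. "they" resolves to entity 1 and refreshes its embedding
        self.entities[idx] = interpolate(self.entities[idx], mention_vec, gate)

state = EntityState()
e1 = state.new_entity([1.0, 0.0])          # "Democrats" -> entity 1
e2 = state.new_entity([0.0, 1.0])          # "Clinton"   -> entity 2
state.corefer(e1, [0.0, 1.0])              # "they" updates entity 1
```

The language model then conditions its next-word distribution on these evolving entity vectors as well as the RNN history.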
  • 83. [Charts: perplexity on the CoNLL 2012 test set (scale 128–140) for a 5-gram LM, an RNN LM, and the entity language model; CoNLL 2012 coreference evaluation (MUC, B³, CEAF, CoNLL F1; scale 50–75) for Martschat and Strube (2015) vs. reranked with the entity LM; accuracy on InScript (scale 25–75) for always-new, shallow features, Modi et al. (2017), the entity LM, and humans.]
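Perplexity, the metric in the first chart on slide 83, is the exponentiated average negative log-probability the model assigns to the test tokens; lower is better. A minimal computation (the probabilities below are made up for illustration):

```python
import math

def perplexity(token_probs):
    # PP = exp( -(1/N) * sum_t log p(w_t | history_t) )
    n = len(token_probs)
    return math.exp(-sum(math.log(p) for p in token_probs) / n)

# A model that spreads probability 1/V uniformly over a vocabulary of
# size V has perplexity exactly V, regardless of the text length.
pp_uniform = perplexity([1.0 / 1000] * 50)

# A model that concentrates mass on the observed words scores lower.
pp_better = perplexity([0.2, 0.1, 0.05, 0.3, 0.25])
```

This is why the entity LM's lower bar in the chart means it predicts upcoming words (especially mentions) better than the 5-gram and plain RNN baselines.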
  • 84. history next word p(W | history) entities Bias?
  • 85. Bias in the Future? • Linguistic scaffold tasks.
  • 86. Bias in the Future? • Linguistic scaffold tasks. • Language is by and about people.
  • 87. Bias in the Future? • Linguistic scaffold tasks. • Language is by and about people. • NLP is needed when texts are costly to read.
  • 88. Bias in the Future? • Linguistic scaffold tasks. • Language is by and about people. • NLP is needed when texts are costly to read. • Polyglot learning.