Squashing
Computational Linguistics
Noah A. Smith
Paul G. Allen School of Computer Science & Engineering
University of Washington
Seattle, USA
@nlpnoah
Research supported in part by: NSF, DARPA DEFT, DARPA CWC, Facebook, Google, Samsung, University of Washington.
data
Applications of NLP in 2017
• Conversation, IE, MT, QA, summarization, text categorization
Applications of NLP in 2017
• Conversation, IE, MT, QA, summarization, text categorization
• Machine-in-the-loop tools for (human) authors
Elizabeth Clark
Collaborate with an NLP
model through an
“exquisite corpse”
storytelling game
Chenhao Tan
Revise your
message with help
from NLP
tremoloop.com
Applications of NLP in 2017
• Conversation, IE, MT, QA, summarization, text categorization
• Machine-in-the-loop tools for (human) authors
• Analysis tools for measuring social phenomena
Lucy Lin
Sensationalism in
science news
bit.ly/sensational-news
… bookmark this survey!
Dallas Card
Track ideas, propositions,
frames in discourse over
time
data
?
Squash
Squash Networks
• Parameterized differentiable functions composed out of simpler
parameterized differentiable functions, some nonlinear
Squash Networks
• Parameterized differentiable functions composed out of simpler
parameterized differentiable functions, some nonlinear
From Jack (2010), Dynamic
System Modeling and
Control, goo.gl/pGvJPS
*Yes, rectified linear units (relus) are only half-squash; hat-tip Martha White.
Squash Networks
• Parameterized differentiable functions composed out of simpler
parameterized differentiable functions, some nonlinear
• Estimate parameters using Leibniz (1676)
From existentialcomics.com
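To make the "squash network" idea concrete, here is a minimal sketch (not code from the talk): a tiny network built by composing simple parameterized differentiable functions, with tanh as the squashing nonlinearity, and its parameters estimated with Leibniz-style derivatives (plain gradient descent). The data, sizes, and learning rate are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = sin(x) on [-2, 2] with a tiny "squash network".
X = rng.uniform(-2, 2, size=(256, 1))
Y = np.sin(X)

# Parameters of two simple differentiable functions composed together:
# h = tanh(X W1 + b1)  (the squash), y_hat = h W2 + b2
W1, b1 = rng.normal(scale=0.5, size=(1, 16)), np.zeros(16)
W2, b2 = rng.normal(scale=0.5, size=(16, 1)), np.zeros(1)

lr = 0.1
for step in range(2000):
    # Forward pass: composition of parameterized differentiable functions.
    H = np.tanh(X @ W1 + b1)
    Y_hat = H @ W2 + b2
    loss = np.mean((Y_hat - Y) ** 2)

    # Backward pass: the chain rule ("Leibniz, 1676") gives every gradient.
    dY_hat = 2 * (Y_hat - Y) / len(X)
    dW2, db2 = H.T @ dY_hat, dY_hat.sum(0)
    dH = dY_hat @ W2.T
    dZ = dH * (1 - H ** 2)          # derivative of the tanh squash
    dW1, db1 = X.T @ dZ, dZ.sum(0)

    # Gradient-descent parameter estimation.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final mean squared error: {loss:.4f}")
```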
Who wants an all-squash diet?
wow
very Cucurbita
much festive
many dropout
Linguistic Structure Prediction
input (text)
output (structure)
Linguistic Structure Prediction
input (text)
output (structure)
sequences,
trees,
graphs, …
“gold” output
Linguistic Structure Prediction
input (text)
output (structure)
sequences,
trees,
graphs, …
“gold” output
Linguistic Structure Prediction
input representation
input (text)
output (structure)
sequences,
trees,
graphs, …
“gold” output
Linguistic Structure Prediction
input representation
input (text)
output (structure)
clusters, lexicons,
embeddings, …
sequences,
trees,
graphs, …
“gold” output
Linguistic Structure Prediction
input representation
input (text)
output (structure)
training objective
clusters, lexicons,
embeddings, …
sequences,
trees,
graphs, …
“gold” output
Linguistic Structure Prediction
input representation
input (text)
output (structure)
training objective
clusters, lexicons,
embeddings, …
sequences,
trees,
graphs, …
probabilistic,
cost-aware,…
“gold” output
Linguistic Structure Prediction
input representation
input (text)
part representations
output (structure)
training objective
clusters, lexicons,
embeddings, …
sequences,
trees,
graphs, …
probabilistic,
cost-aware,…
“gold” output
Linguistic Structure Prediction
input representation
input (text)
part representations
output (structure)
training objective
clusters, lexicons,
embeddings, …
segments/spans, arcs,
graph fragments, …
sequences,
trees,
graphs, …
probabilistic,
cost-aware,…
“gold” output
Linguistic Structure Prediction
input representation
input (text)
part representations
output (structure)
training objective
“gold” output
Linguistic Structure Prediction
input representation
input (text)
part representations
output (structure)
training objective
error definitions
& weights
regularization
annotation
conventions
& theory
constraints &
independence
assumptions
data selection
“task”
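As a hedged illustration of the decomposition on this slide (the toy parts and candidates are invented, not any system from the talk): an output structure is scored as the sum of its part scores, and prediction picks the highest-scoring candidate structure.

```python
# A structure is a set of parts; its score is the sum of its parts' scores.
# Here "parts" are (name, value) tuples and scores come from a toy dict.
part_scores = {
    ("arc", "Democrats->wonder"): 2.3,
    ("arc", "wonder->why"): 1.1,
    ("arc", "why->wonder"): -0.4,
}

def score(structure):
    """Score of an output structure = sum of its part scores."""
    return sum(part_scores.get(part, 0.0) for part in structure)

# Candidate output structures (in practice: all sequences, trees, graphs, ...).
candidates = [
    [("arc", "Democrats->wonder"), ("arc", "wonder->why")],
    [("arc", "Democrats->wonder"), ("arc", "why->wonder")],
]

best = max(candidates, key=score)
print(best, score(best))
```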
Inductive Bias
• What does your learning
algorithm assume?
• How will it choose among good
predictive functions?
See also:
No Free Lunch Theorem
(Mitchell, 1980; Wolpert, 1996)
data
bias
Three New Models
• Parsing sentences into
predicate-argument structures
• Fillmore frames
• Semantic dependency graphs
• Language models that
dynamically track entities
When Democrats wonder why there is so much resentment of Clinton, they don’t need to look much
further than the Big Lie about philandering that Stephanopoulos, Carville helped to put over in 1992.
Original story on Slate.com: http://goo.gl/Hp89tD
Frame-Semantic Analysis
When Democrats wonder why there is so much resentment of Clinton, they don’t need to look much
further than the Big Lie about philandering that Stephanopoulos, Carville helped to put over in 1992.
FrameNet: https://framenet.icsi.berkeley.edu
Frame-Semantic Analysis
When Democrats wonder why there is so much resentment of Clinton, they don’t need to look much
further than the Big Lie about philandering that Stephanopoulos, Carville helped to put over in 1992.
FrameNet: https://framenet.icsi.berkeley.edu
Frame-Semantic Analysis
When Democrats wonder why there is so much resentment of Clinton, they don’t need to look much
further than the Big Lie about philandering that Stephanopoulos, Carville helped to put over in 1992.
cognizer: Democrats
topic: why … Clinton
FrameNet: https://framenet.icsi.berkeley.edu
Frame-Semantic Analysis
When Democrats wonder why there is so much resentment of Clinton, they don’t need to look much
further than the Big Lie about philandering that Stephanopoulos, Carville helped to put over in 1992.
cognizer: Democrats
topic: why … Clinton
explanation: why
degree: so much
content: of Clinton
experiencer: ?
helper: Stephanopoulos … Carville
goal: to put over
time: in 1992
benefited_party: ?
FrameNet: https://framenet.icsi.berkeley.edu
landmark event: Democrats … Clinton
trajector event: they… 1992
entity: so … Clinton
degree: so
mass: resentment of Clinton
time: When … Clinton
required situation: they … to look … 1992
time: When … Clinton
cognizer agent: they
ground: much … 1992
sought entity: ?
topic: about … 1992
trajector event: the Big Lie … over
landmark period: 1992
FrameNet: https://framenet.icsi.berkeley.edu
brood, consider,
contemplate, deliberate, …
appraise,
assess,
evaluate, …
commit to
memory, learn,
memorize, …
agonize, fret, fuss,
lose sleep, …
translate, bracket,
categorize,
class, classify
Frame-Semantic Analysis
When Democrats wonder why there is so much resentment of Clinton, they don’t need to look much
further than the Big Lie about philandering that Stephanopoulos, Carville helped to put over in 1992.
cognizer: Democrats
topic: why … Clinton
explanation: why
degree: so much
content: of Clinton
experiencer: ?
helper: Stephanopoulos … Carville
goal: to put over
time: in 1992
benefited_party: ?
FrameNet: https://framenet.icsi.berkeley.edu
landmark event: Democrats … Clinton
trajector event: they… 1992
entity: so … Clinton
degree: so
mass: resentment of Clinton
time: When … Clinton
required situation: they … to look … 1992
time: When … Clinton
cognizer agent: they
ground: much … 1992
sought entity: ?
topic: about … 1992
trajector event: the Big Lie … over
landmark period: 1992
When Democrats wonderCogitation why there is so much resentment of Clinton, they don’t need … words + frame
When Democrats wonderCogitation why there is so much resentment of Clinton, they don’t need …
biLSTM
(contextualized
word vectors)
words + frame
parts: segments
up to length d
scored by
another biLSTM,
with labels
When Democrats wonderCogitation why there is so much resentment of Clinton, they don’t need …
…
biLSTM
(contextualized
word vectors)
words + frame
parts: segments
up to length d
scored by
another biLSTM,
with labels
When Democrats wonderCogitation why there is so much resentment of Clinton, they don’t need …
…
biLSTM
(contextualized
word vectors)
words + frame
output: covering sequence of nonoverlapping segments
Segmental RNN
(Lingpeng Kong, Chris Dyer, N.A.S., ICLR 2016)
biLSTM
(contextualized
word vectors)
input sequence
parts: segments
up to length d
scored by
another biLSTM,
with labels
training objective:
log loss
output: covering sequence of nonoverlapping segments, recovered in O(Ldn); see Sarawagi & Cohen, 2004
…
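A minimal sketch of the part enumeration and scoring, with random stand-in vectors rather than real biLSTMs: every segment of length at most d, paired with a label, gets a score from the contextualized word vectors it covers (mean-pooling stands in here for the segment-level biLSTM of the actual model).

```python
import numpy as np

rng = np.random.default_rng(0)

tokens = "When Democrats wonder why there is so much resentment of Clinton".split()
n, d, dim = len(tokens), 4, 8            # d = maximum segment length
labels = ["cognizer", "topic", "NULL"]   # toy label set

# Stand-in for biLSTM contextualized word vectors (one per token).
H = rng.normal(size=(n, dim))
# Stand-in for the per-label segment scorer (really another biLSTM + weights).
label_vecs = {lab: rng.normal(size=dim) for lab in labels}

def segment_score(i, j, label):
    """Score of labeling tokens i..j-1 as one segment (mean-pooled stand-in)."""
    return float(H[i:j].mean(axis=0) @ label_vecs[label])

# Enumerate all O(n * d) candidate parts: segments up to length d, with labels.
parts = {
    (i, j, lab): segment_score(i, j, lab)
    for i in range(n)
    for j in range(i + 1, min(i + d, n) + 1)
    for lab in labels
}
print(len(parts), "candidate labeled segments")
```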
When Democrats wonderCogitation why there is so much resentment of Clinton, they don’t need …
When Democrats wonderCogitation why there is so much resentment of Clinton, they don’t need …
When Democrats wonderCogitation why there is so much resentment of Clinton, they don’t need …
cognizer topic ∅
∅
∅
When Democrats wonderCogitation why there is so much resentment of Clinton, they don’t need …
cognizer topic ∅
wonderCogitation
∅
wonderCogitation
wonderCogitation
wonderCogitation
∅
When Democrats wonderCogitation why there is so much resentment of Clinton, they don’t need …
cognizer topic ∅
wonderCogitation
∅
wonderCogitation
wonderCogitation
wonderCogitation
∅
When Democrats wonderCogitation why there is so much resentment of Clinton, they don’t need …
cognizer topic ∅
wonderCogitation
∅
wonderCogitation
wonderCogitation
wonderCogitation
∅
When Democrats wonderCogitation why there is so much resentment of Clinton, they don’t need …
cognizer topic ∅
wonderCogitation
∅
wonderCogitation
wonderCogitation
wonderCogitation
∅
Inference via dynamic programming in O(Ldn)
When Democrats wonderCogitation why there is so much resentment of Clinton, they don’t need …
cognizer topic ∅
wonderCogitation
∅
wonderCogitation
wonderCogitation
wonderCogitation
∅
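The O(Ldn) inference is a semi-Markov (segmental) Viterbi dynamic program over labeled segments of length at most d. A small self-contained sketch, with random stand-in segment scores in place of the learned ones:

```python
import numpy as np

rng = np.random.default_rng(1)

n, d = 12, 4                                  # n tokens, max segment length d
labels = ["cognizer", "topic", "NULL"]        # L labels
# Toy segment scores: (i, j, label) covers tokens i..j-1 (stand-ins for the
# biLSTM segment scores of the real model).
score = {(i, j, lab): float(rng.normal())
         for i in range(n) for j in range(i + 1, min(i + d, n) + 1)
         for lab in labels}

# Semi-Markov Viterbi: best[j] = best score of a covering, non-overlapping
# segmentation of tokens 0..j-1.  Runtime O(L * d * n).
best = [0.0] + [-np.inf] * n
back = [None] * (n + 1)
for j in range(1, n + 1):
    for i in range(max(0, j - d), j):         # candidate segment i..j-1
        for lab in labels:
            s = best[i] + score[(i, j, lab)]
            if s > best[j]:
                best[j], back[j] = s, (i, lab)

# Recover the argmax segmentation by walking the backpointers.
segments, j = [], n
while j > 0:
    i, lab = back[j]
    segments.append((i, j, lab))
    j = i
print(list(reversed(segments)))
```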
Open-SESAME
(Swabha Swayamdipta, Sam Thomson,
Chris Dyer, N.A.S., arXiv:1706.09528)
biLSTM
(contextualized
word vectors)
words + frame
parts: segments
with role labels,
scored by
another biLSTM
training objective:
recall-oriented
softmax margin
(Gimpel et al.,
2010)
output: labeled
argument spans
When Democrats wonderCogitation why there is so much resentment of Clinton, they don’t need …
cognizer: Democrats
topic: why there is so much resentment of Clinton
…
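The recall-oriented softmax margin (Gimpel et al., 2010) adds a cost to each candidate's score inside the log-partition term, charging more for missed gold segments (false negatives) than for spurious ones. A hedged sketch over an explicit toy candidate list; real training sums over all segmentations with dynamic programming, and the cost weights here are invented.

```python
import math

def softmax_margin_loss(candidates, scores, gold, fn_cost=2.0, fp_cost=0.5):
    """Sketch of a recall-oriented softmax-margin loss.

    candidates: list of outputs, each a frozenset of labeled segments
    scores:     model score for each candidate
    gold:       the gold set of labeled segments
    The cost charges more for missing a gold segment (false negative,
    hurting recall) than for predicting a spurious one (false positive).
    """
    def cost(y):
        fn = len(gold - y)        # gold segments the candidate misses
        fp = len(y - gold)        # segments the candidate adds spuriously
        return fn_cost * fn + fp_cost * fp

    gold_index = candidates.index(gold)
    log_z = math.log(sum(math.exp(s + cost(y))
                         for y, s in zip(candidates, scores)))
    return -scores[gold_index] + log_z

gold = frozenset({(1, 2, "cognizer"), (3, 11, "topic")})
candidates = [gold,
              frozenset({(1, 2, "cognizer")}),              # misses "topic"
              frozenset({(1, 2, "cognizer"), (3, 11, "topic"),
                         (0, 1, "NULL")})]                   # extra segment
scores = [2.0, 2.5, 1.0]
print(round(softmax_margin_loss(candidates, scores, gold), 3))
```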
When Democrats wonderCogitation why there is so much resentment of Clinton, they don’t need …
cognizer topic ∅
wonderCogitation
∅
wonderCogitation
wonderCogitation
wonderCogitation
∅
When Democrats wonderCogitation why there is so much resentment of Clinton, they don’t need …
cognizer topic ∅
wonderCogitation
∅
wonderCogitation
wonderCogitation
wonderCogitation
∅
syntax features?
When Democrats wonderCogitation why there is so much resentment of Clinton, they don’t need …
yes no yes yes no
Penn Treebank (Marcus et al., 1993)
When Democrats wonderCogitation why there is so much resentment of Clinton, they don’t need …
cognizer topic ∅
wonderCogitation
∅
wonderCogitation
wonderCogitation
wonderCogitation
∅
main task
scaffold task
yes no yes yes no
Multitask Representation Learning
(Caruana, 1997)
“gold” output
input representation
input (text)
part representations
output (structure)
training objective
“gold” output
input representation
input (text)
part representations
output (structure)
training objective
main task: find and
label semantic
arguments
scaffold task: predict
syntactic constituents
shared
training datasets need not overlap
output structures need not be consistent
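A sketch of the training-loop shape for a syntactic scaffold, using a toy shared encoder and two toy regression heads in place of the real argument labeler and constituent predictor: each head has its own loss and its own data, and only the shared encoder parameters receive gradients from both tasks. All numbers and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a shared "encoder" matrix feeds two linear heads.  The main task
# and the scaffold task have disjoint training data and unrelated outputs;
# only the encoder parameters are shared (multitask learning, Caruana 1997).
dim_in, dim_h = 5, 8
W_shared = rng.normal(scale=0.1, size=(dim_in, dim_h))   # shared encoder
w_main = rng.normal(scale=0.1, size=dim_h)               # main-task head
w_scaf = rng.normal(scale=0.1, size=dim_h)               # scaffold head

X_main, y_main = rng.normal(size=(64, dim_in)), rng.normal(size=64)
X_scaf, y_scaf = rng.normal(size=(200, dim_in)), rng.normal(size=200)

lr, lam = 0.05, 0.5   # lam weights the scaffold loss in the joint objective
for step in range(200):
    H_m, H_s = np.tanh(X_main @ W_shared), np.tanh(X_scaf @ W_shared)
    err_m = H_m @ w_main - y_main          # main-task residuals
    err_s = H_s @ w_scaf - y_scaf          # scaffold-task residuals
    loss = np.mean(err_m ** 2) + lam * np.mean(err_s ** 2)

    # Gradients: each head only sees its own loss; the encoder sees both.
    g_wm = 2 * H_m.T @ err_m / len(err_m)
    g_ws = 2 * lam * H_s.T @ err_s / len(err_s)
    g_Hm = np.outer(err_m, w_main) * (1 - H_m ** 2) * 2 / len(err_m)
    g_Hs = np.outer(err_s, w_scaf) * (1 - H_s ** 2) * 2 * lam / len(err_s)
    g_W = X_main.T @ g_Hm + X_scaf.T @ g_Hs

    w_main -= lr * g_wm; w_scaf -= lr * g_ws; W_shared -= lr * g_W

print(f"joint loss after training: {loss:.3f}")
```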
[Bar chart: F1 on frame-semantic parsing (frames & arguments), FrameNet 1.5 test set; single models vs. ensembles, open-source systems marked. Systems compared: SEMAFOR 1.0 (Das et al., 2014), SEMAFOR 2.0 (Kshirsagar et al., 2015), Framat (Roth, 2016), FitzGerald et al. (2015), and ours: Open-SESAME, … with syntax features, … with syntactic scaffold.]
biLSTM
(contextualized
word vectors)
words + frame
training objective:
recall-oriented
softmax margin
(Gimpel et al.,
2010)
output: labeled
argument spans
When Democrats wonderCogitation why there is so much resentment of Clinton, they don’t need …
cognizer: Democrats
topic: why there is so much resentment of Clinton
…
segments
get scores
Bias?
parts: segments
with role labels,
scored by
another biLSTM
biLSTM
(contextualized
word vectors)
words + frame
training objective:
recall-oriented
softmax margin
(Gimpel et al.,
2010)
output: labeled
argument spans
When Democrats wonderCogitation why there is so much resentment of Clinton, they don’t need …
cognizer: Democrats
topic: why there is so much resentment of Clinton
…
segments
get scores
syntactic scaffold
Bias?
parts: segments
with role labels,
scored by
another biLSTM
biLSTM
(contextualized
word vectors)
words + frame
training objective:
recall-oriented
softmax margin
(Gimpel et al.,
2010)
output: labeled
argument spans
When Democrats wonderCogitation why there is so much resentment of Clinton, they don’t need …
cognizer: Democrats
topic: why there is so much resentment of Clinton
…
segments
get scores
recall-oriented
cost
syntactic scaffold
Bias?
parts: segments
with role labels,
scored by
another biLSTM
Semantic Dependency Graphs
(DELPH-IN minimal recursion semantics-derived representation; “DM”)
Oepen et al. (SemEval 2014; 2015),see also http://sdp.delph-in.net
When Democrats wonder why there is so much resentment of Clinton, they don’t need …
Democrats wonder
arg1
tanh(C[h_wonder; h_Democrats] + b) · arg1
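Read literally, the score of this candidate arc is a squashed linear function of the two contextualized word vectors, dotted with a vector for the label arg1. A minimal sketch with random stand-in vectors; the dimensions and the explicit label vector v_arg1 are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8   # size of each contextualized word vector (illustrative)

# Stand-ins for the biLSTM contextualized vectors of the two words.
h_wonder, h_democrats = rng.normal(size=dim), rng.normal(size=dim)

# Parameters of the arc scorer and an embedding for the label "arg1".
C = rng.normal(scale=0.1, size=(dim, 2 * dim))
b = np.zeros(dim)
v_arg1 = rng.normal(size=dim)

# Score of the labeled dependency  wonder -arg1-> Democrats :
#   tanh(C [h_wonder ; h_Democrats] + b) . v_arg1
score = np.tanh(C @ np.concatenate([h_wonder, h_democrats]) + b) @ v_arg1
print(f"score(wonder -arg1-> Democrats) = {score:.3f}")
```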
When Democrats wonder why there is so much resentment of Clinton, they don’t need …
When Democrats Democrats wonder why is
is resentment
so much much resentment
resentment of of Clinton
arg2 arg1
arg1
comp_so
arg1
arg1 arg1arg1
When Democrats wonder why there is so much resentment of Clinton, they don’t need …
When Democrats
wonder why
Democrats wonder why is
is resentment
so much much resentment
resentment of of Clinton
arg2 arg1
arg1
comp_so
arg1
arg1
arg1 arg1arg1
wonder there
arg1
wonder is
arg1
When Democrats wonder why there is so much resentment of Clinton, they don’t need …
Inference via AD3
(alternating directions dual decomposition; Martins et al., 2014)
When Democrats
wonder why
Democrats wonder why is
is resentment
so much much resentment
resentment of of Clinton
arg2 arg1
arg1
comp_so
arg1
arg1
arg1 arg1arg1
wonder there
arg1
wonder is
arg1
Neurboparser
(Hao Peng, Sam Thomson, N.A.S., ACL 2017)
biLSTM
(contextualized
word vectors)
words
parts: labeled
bilexical
dependencies
training objective:
structured hinge
loss
output: labeled
semantic
dependency
graph with
constraints
When Democrats wonder why there is so much resentment of Clinton, they don’t need …
…
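The structured hinge loss prefers the gold graph over every alternative by a margin that grows with the alternative's error. A hedged sketch over an explicit toy candidate list; real training searches the constrained graph space (via AD3-style inference) rather than enumerating candidates, and the cost below is a simple Hamming-style stand-in.

```python
def structured_hinge_loss(candidates, scores, gold):
    """Sketch of a structured hinge loss over an explicit candidate list.

    candidates: list of outputs, each a frozenset of labeled dependencies
    scores:     model score of each candidate
    gold:       the gold dependency graph
    """
    def cost(y):                          # Hamming-style cost against gold
        return len(y ^ gold)              # size of the symmetric difference
    gold_score = scores[candidates.index(gold)]
    worst = max(s + cost(y) for y, s in zip(candidates, scores))
    return max(0.0, worst - gold_score)

gold = frozenset({("wonder", "Democrats", "arg1"), ("wonder", "why", "arg2")})
candidates = [gold,
              frozenset({("wonder", "Democrats", "arg1")}),
              frozenset({("wonder", "Democrats", "arg2"),
                         ("wonder", "why", "arg2")})]
scores = [1.5, 1.2, 2.0]
print(structured_hinge_loss(candidates, scores, gold))
```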
When Democrats wonder why there is so much resentment of Clinton, they don’t need …
Three Formalisms, Three Separate Parsers
Democrats wonder
arg1
formalism 3 (PSD)
formalism 2 (PAS)
formalism 1 (DM)
Democrats wonder
arg1
Democrats wonder
act
When Democrats wonder why there is so much resentment of Clinton, they don’t need …
Shared Input Representations
(Daumé, 2007)
Democrats wonder
arg1
shared across all
formalism 3 (PSD)
formalism 2 (PAS)
formalism 1 (DM)
Democrats wonder
arg1
Democrats wonder
act
When Democrats wonder why there is so much resentment of Clinton, they don’t need …
Cross-Task Parts
Democrats wonder
arg1
Democrats wonder
arg1
Democrats wonder
act
shared across all
When Democrats wonder why there is so much resentment of Clinton, they don’t need …
Both: Shared Input Representations
& Cross-Task Parts
Democrats wonder
arg1
shared across all
formalism 3 (PSD)
formalism 2 (PAS)
formalism 1 (DM)
Democrats wonder
arg1
Democrats wonder
act
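A rough sketch of how the two kinds of sharing can combine for one candidate arc, with invented stand-in parameters: all three formalisms read the same contextualized pair representation, and each formalism's arc score adds a formalism-specific term to a term shared across formalisms (the cross-task part).

```python
import numpy as np

rng = np.random.default_rng(0)
dim, formalisms = 8, ["DM", "PAS", "PSD"]

# Shared contextualized vectors for the pair (wonder, Democrats) -- stand-ins
# for the biLSTM states that all three formalisms read.
pair = np.concatenate([rng.normal(size=dim), rng.normal(size=dim)])

# One scoring vector per formalism, plus one vector shared across all of them
# (the "cross-task part" for this candidate arc).
w_task = {f: rng.normal(scale=0.3, size=2 * dim) for f in formalisms}
w_shared = rng.normal(scale=0.3, size=2 * dim)

for f in formalisms:
    # Each formalism's arc score adds a task-specific term and a shared term.
    arc_score = pair @ w_task[f] + pair @ w_shared
    print(f"{f}: score(wonder -> Democrats) = {arc_score:.3f}")
```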
Multitask Learning: Many Possibilities
• Shared input representations, parts? Which parts?
• Joint decoding?
• Overlapping training data?
• Scaffold tasks?
“gold” output “gold” output“gold” output
formalism 1 (DM) formalism 2 (PAS) formalism 3 (PSD)
[Bar chart: F1 averaged over three semantic dependency parsing formalisms, SemEval 2015 test set, WSJ (in-domain) and Brown (out-of-domain). Systems compared: Du et al., 2015; Almeida & Martins, 2015 (no syntax); Almeida & Martins, 2015 (syntax); ours: Neurboparser, … with shared input representations and cross-task parts.]
[Bar chart: Neurboparser F1 on the three semantic dependency parsing formalisms (DM, PAS, PSD), SemEval 2015 test set, WSJ (in-domain) and Brown (out-of-domain).]
good enough?
biLSTM
(contextualized
word vectors)
words
training objective:
structured hinge
loss
output: labeled
semantic
dependency
graph with
constraints
When Democrats wonder why there is so much resentment of Clinton, they don’t need …
…
Bias? cross-formalism
sharing
parts: labeled
bilexical
dependencies
Text
Text ≠ Sentences
larger context
Generative Language Models
history next word
p(W | history)
Generative Language Models
history next word
p(W | history)
When Democrats wonder why there is so much resentment of
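A generic sketch of what p(W | history) looks like in a recurrent language model, with untrained stand-in parameters: squash the history into a state vector, then take a softmax over the vocabulary. The vocabulary and dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

vocab = ["<s>", "When", "Democrats", "wonder", "why", "there", "is",
         "so", "much", "resentment", "of", "Clinton"]
V, dim = len(vocab), 16
idx = {w: i for i, w in enumerate(vocab)}

# Untrained stand-in parameters for a simple recurrent language model.
E = rng.normal(scale=0.1, size=(V, dim))        # word embeddings
W_h = rng.normal(scale=0.1, size=(dim, dim))    # recurrence
W_x = rng.normal(scale=0.1, size=(dim, dim))    # input projection
W_out = rng.normal(scale=0.1, size=(dim, V))    # output projection

def next_word_distribution(history):
    """p(W | history): squash the history into a state, then softmax."""
    h = np.zeros(dim)
    for w in history:
        h = np.tanh(h @ W_h + E[idx[w]] @ W_x)
    logits = h @ W_out
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

p = next_word_distribution(["<s>", "When", "Democrats", "wonder"])
print(vocab[int(p.argmax())], float(p.max()))
```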
Entity Language Model
(Yangfeng Ji, Chenhao Tan, Sebastian Martschat,
Yejin Choi, N.A.S., EMNLP 2017)
Entity Language Model
(Yangfeng Ji, Chenhao Tan, Sebastian Martschat,
Yejin Choi, N.A.S., EMNLP 2017)
When Democrats wonder why there is so much resentment of
entity 1
Entity Language Model
(Yangfeng Ji, Chenhao Tan, Sebastian Martschat,
Yejin Choi, N.A.S., EMNLP 2017)
When Democrats wonder why there is so much resentment of Clinton,
1. new entity with new
vector
2. mention word will be
“Clinton”
entity 1
entity 2
Entity Language Model
(Yangfeng Ji, Chenhao Tan, Sebastian Martschat,
Yejin Choi, N.A.S., EMNLP 2017)
When Democrats wonder why there is so much resentment of Clinton,
entity 1
entity 2
Entity Language Model
(Yangfeng Ji, Chenhao Tan, Sebastian Martschat,
Yejin Choi, N.A.S., EMNLP 2017)
When Democrats wonder why there is so much resentment of Clinton, they
1. coreferent of entity 1
(previously known as
“Democrats”)
2. mention word will be
“they”
3. embedding of entity 1
will be updated
entity 1
entity 2
Entity Language Model
(Yangfeng Ji, Chenhao Tan, Sebastian Martschat,
Yejin Choi, N.A.S., EMNLP 2017)
When Democrats wonder why there is so much resentment of Clinton, they
entity 1
entity 2
Entity Language Model
(Yangfeng Ji, Chenhao Tan, Sebastian Martschat,
Yejin Choi, N.A.S., EMNLP 2017)
When Democrats wonder why there is so much resentment of Clinton, they don’t
1. not part of an entity
mention
2. “don’t”
entity 1
entity 2
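A heavily simplified sketch of the bookkeeping in an entity-aware language model; the decision interface and update formula below are illustrative stand-ins, not the EMNLP 2017 model. Each token either is outside any mention, starts a new entity (getting a fresh vector), or corefers with an existing entity, whose vector is then updated.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8

def new_entity_vector():
    return rng.normal(scale=0.1, size=dim)

# Running state of the entity language model: one vector per entity so far.
entities = []

def step(word, hidden, entity_choice=None):
    """One (simplified) generative step of an entity-aware LM.

    entity_choice: None   -> the word is not part of an entity mention
                   "new"  -> the word starts a mention of a brand-new entity
                   int k  -> the word mentions (corefers with) entity k
    In the real model these choices are predicted and the word probability is
    conditioned on the chosen entity's vector; here we just maintain and
    update the entity state to show the bookkeeping.
    """
    if entity_choice == "new":
        entities.append(new_entity_vector())          # e.g. "Democrats"
        entity_choice = len(entities) - 1
    if entity_choice is not None:
        # The mentioned entity's embedding is updated with the current state
        # (a squashed interpolation stands in for the paper's update rule).
        e = entities[entity_choice]
        entities[entity_choice] = np.tanh(0.5 * e + 0.5 * hidden)
    return hidden  # a real model would also advance the RNN state here

h = rng.normal(size=dim)
step("Democrats", h, "new")     # entity 1 introduced
step("wonder", h, None)         # not an entity mention
step("Clinton", h, "new")       # entity 2 introduced
step("they", h, 0)              # corefers with entity 1; its vector updates
print(len(entities), "entities tracked")
```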
[Charts: perplexity on the CoNLL 2012 test set (5-gram LM, RNN LM, entity language model); CoNLL 2012 coreference evaluation (CoNLL, MUC, B3, CEAF F1) for Martschat and Strube (2015) vs. reranked with the entity LM; accuracy on InScript (always new, shallow features, Modi et al. (2017), entity LM, human).]
history next word
p(W | history)
entities
Bias?
Bias in the Future?
• Linguistic scaffold tasks.
Bias in the Future?
• Linguistic scaffold tasks.
• Language is by and about people.
Bias in the Future?
• Linguistic scaffold tasks.
• Language is by and about people.
• NLP is needed when texts are costly to read.
Bias in the Future?
• Linguistic scaffold tasks.
• Language is by and about people.
• NLP is needed when texts are costly to read.
• Polyglot learning.
data
bias
Thank you!
