Lexically Constrained
Decoding for Sequence
Generation Using Grid Beam
Search
Chris Hokamp, Qun Liu
(ACL 2017)
B4 勝又 智 (Satoru Katsumata)
Abstract
● They present Grid Beam Search (GBS), an algorithm which extends beam
search to allow the inclusion of pre-specified lexical constraints.
● They demonstrate the feasibility and flexibility of Lexically Constrained
Decoding by conducting 2 experiments.
○ Neural Interactive-Predictive Translation
○ Domain Adaptation for Neural Machine Translation
● Results
○ GBS can provide large improvements in translation quality in interactive scenarios.
○ Even without any user input, GBS can be used to achieve significant gains in performance
in domain adaptation scenarios.
Introduction: What is this research?
● The output of many natural language processing models is a sequence of text.
→ In particular, they focus on machine translation use cases.†
○ such as Post-Editing (PE) and Interactive-Predictive MT
● Their goal is to find a way to force the output of a model to contain
pre-specified lexical constraints.
● In this paper, they propose a decoding algorithm which allows the specification
of subsequences that must be present in a model’s output.
(diagram: input → model → output sequence)
†: lexically constrained decoding is relevant to any
scenario where a model is asked to generate a
sequence given an input.
(image description, dialog generation, abstractive
summarization, question answering)
Background: Beam Search
(algorithm figure from: Kyoto University Participation to WAT 2016. Cromieres et al., WAT 2016)
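Since the beam search algorithm only appears as a figure, here is a minimal sketch of the standard time-synchronous beam search that GBS extends. The `model.topk(src, prefix, k)` interface is an assumption for illustration, not the authors' API; real decoders batch these calls.

```python
import heapq

def beam_search(model, src, k=10, max_len=50, eos=2):
    """Minimal time-synchronous beam search (a sketch, not the authors' code)."""
    beam = [(0.0, [])]                       # (cumulative log-prob, tokens)
    finished = []
    for _ in range(max_len):
        candidates = []
        for score, hyp in beam:
            if hyp and hyp[-1] == eos:       # collect finished hypotheses
                finished.append((score, hyp))
                continue
            # model.topk is assumed to return the k best (token, log_prob)
            # continuations of the current prefix
            for tok, logp in model.topk(src, hyp, k):
                candidates.append((score + logp, hyp + [tok]))
        if not candidates:
            break
        beam = heapq.nlargest(k, candidates, key=lambda c: c[0])
    finished.extend(h for h in beam if h[1] and h[1][-1] == eos)
    return max(finished or beam, key=lambda c: c[0])[1]
```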
Grid Beam Search: Abstract
● Their goal is to organize decoding in such a way that they can constrain the
search space to outputs which contain one or more pre-specified
sub-sequences.
○ i.e., place the lexical constraints correctly and
generate the parts of the output which are not covered by the constraints.
(figure: the decoding grid; axes are time step (t) and constraint coverage (c))
constraints is an array of sequences, e.g.
← constraints = [[Ihre, Rechte, müssen], [Abreise, geschützt]]
more detail on the next slides →
Grid Beam Search: Algorithm
numC: total number of tokens in all constraints
k: beam window size
(this pseudo-code is for any model)
1. open hypotheses can either generate from the model’s distribution
or start constraints
2. closed hypotheses can only generate the next token of a currently
unfinished constraint
1. open hypotheses (Grid[t-1][c]) may generate continuations from the
model’s distribution
2. open hypotheses (Grid[t-1][c-1]) may start new constraints
3. closed hypotheses (Grid[t-1][c-1]) may continue constraints
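As a companion to the pseudo-code (which is a figure in the paper), here is a compact Python sketch of GBS following the three rules above. The `Hyp` bookkeeping and the `model.topk` / `model.score` interface are assumptions for illustration, and EOS handling is omitted for brevity.

```python
class Hyp:
    """One grid hypothesis; `closed` means it is mid-way through a constraint."""
    def __init__(self, tokens, score, coverage, cons_idx=-1, tok_idx=0):
        self.tokens, self.score, self.coverage = tokens, score, coverage
        self.cons_idx, self.tok_idx = cons_idx, tok_idx

    @property
    def closed(self):
        return self.cons_idx >= 0

def grid_beam_search(model, src, constraints, k=10, max_len=50):
    num_c = sum(len(c) for c in constraints)          # numC
    grid = {(0, 0): [Hyp([], 0.0, [False] * len(constraints))]}

    for t in range(1, max_len + 1):
        # c ranges as in the paper: cannot cover more constraint tokens than
        # steps taken, and must leave enough steps to finish all constraints
        for c in range(max(0, num_c + t - max_len), min(t, num_c) + 1):
            cands = []
            # 1. open hypotheses in Grid[t-1][c] generate from the model
            for h in grid.get((t - 1, c), []):
                if not h.closed:
                    for tok, lp in model.topk(src, h.tokens, k):
                        cands.append(Hyp(h.tokens + [tok], h.score + lp,
                                         list(h.coverage)))
            for h in grid.get((t - 1, c - 1), []):
                if not h.closed:
                    # 2. open hypotheses may start any uncovered constraint
                    for i, cons in enumerate(constraints):
                        if h.coverage[i]:
                            continue
                        nh = Hyp(h.tokens + [cons[0]],
                                 h.score + model.score(src, h.tokens, cons[0]),
                                 list(h.coverage), i, 1)
                        if len(cons) == 1:            # one-token constraint
                            nh.coverage[i], nh.cons_idx = True, -1
                        cands.append(nh)
                else:
                    # 3. closed hypotheses must continue their constraint
                    i, j = h.cons_idx, h.tok_idx
                    tok = constraints[i][j]
                    nh = Hyp(h.tokens + [tok],
                             h.score + model.score(src, h.tokens, tok),
                             list(h.coverage), i, j + 1)
                    if j + 1 == len(constraints[i]):  # constraint finished
                        nh.coverage[i], nh.cons_idx, nh.tok_idx = True, -1, 0
                    cands.append(nh)
            grid[(t, c)] = sorted(cands, key=lambda h: -h.score)[:k]

    # the answer lives in the top row: all constraint tokens covered
    done = [h for t in range(1, max_len + 1) for h in grid.get((t, num_c), [])]
    return max(done, key=lambda h: h.score).tokens
```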
Grid Beam Search: Detail
(figure: the decoding grid; axes are time step (t) and constraint coverage (c))
a colored strip inside a beam box
represents an individual hypothesis
in the beam’s k-best stack.
Hypotheses with circles are closed,
all other hypotheses are open.
← conceptual drawing
Grid Beam Search: Multi-token Constraints
● By distinguishing between open and closed hypotheses,
GBS allows arbitrary multi-token phrases in the search.
● Each hypothesis maintains a coverage vector.
○ ensures that constraints cannot be repeated in a search path
(hypotheses which have already covered constraint_i can only generate or start new constraints)
● Discontinuous lexical constraints such as phrasal verbs in English or German
○ GBS handles these easily by adding filters to the search ↓
example: the discontinuous constraint “ask <someone> out”
constraint_0 = ask, constraint_1 = out
use 2 filters:
1. constraint_1 cannot be used before constraint_0
2. there must be at least one generated token between the constraints
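A minimal sketch of those two filters, assuming each hypothesis additionally records `ends`, a map from constraint index to the position where that constraint finished (hypothetical bookkeeping, not part of the paper's pseudo-code):

```python
def may_start_constraint(hyp, i):
    """Filters for the discontinuous constraint "ask <someone> out",
    with constraint_0 = "ask" and constraint_1 = "out".
    `hyp.ends[i]` is assumed to hold the index of the last token of
    constraint i in hyp.tokens."""
    if i != 1:
        return True
    if 0 not in hyp.ends:                      # filter 1: "out" only after "ask"
        return False
    return len(hyp.tokens) >= hyp.ends[0] + 2  # filter 2: >=1 token in between
```

In GBS, such predicates simply prune candidates in rule 2 (starting a constraint) before they enter a beam.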
Grid Beam Search: Addressing unknown words
● their decoder can handle arbitrary constraints.
→ there is a risk that constraints will contain unknown tokens.
○ Especially in domain adaptation scenarios, user-specified constraints are very likely to
contain unseen tokens.
→ Use subword units (Byte Pair Encoding).
(In the experiments, they use this technique to ensure that no input tokens are unknown, even if a constraint contains
words which never appeared in the training data.)
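A toy illustration of how BPE keeps constraint tokens in-vocabulary: any word decomposes into characters and learned merges, so no constraint token is out-of-vocabulary. The merge list below is invented for the example; real systems learn merges from the training data.

```python
def bpe_segment(word, merges):
    """Greedily apply an ordered list of learned BPE merges to one word."""
    symbols = list(word)
    for a, b in merges:                 # merges are applied in learned order
        i = 0
        while i < len(symbols) - 1:
            if symbols[i] == a and symbols[i + 1] == b:
                symbols[i:i + 2] = [a + b]
            else:
                i += 1
    return symbols

# an unseen constraint word still maps onto known subword units:
print(bpe_segment("geschützt", [("g", "e"), ("ge", "s"), ("t", "z")]))
# -> ['ges', 'c', 'h', 'ü', 'tz', 't']  (with these toy merges)
```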
Grid Beam Search: Efficiency
● the runtime complexity of a naive implementation of GBS is O(ktc).
(standard time-based beam search is O(tk))
↓ some consideration must be given to the efficiency of this algorithm
● the beams in each column c of this Figure are independent.
→ GBS can be parallelized to allow all beams at each timestep to be filled
simultaneously.
● they find that most time is spent
computing the states for the hypothesis candidates.
→ by keeping the beam size small,
they can make GBS significantly faster.
Experiments setup (including appendix)
● The models are SOTA NMT systems using their own implementation of NMT
with attention over the source sequence (Bahdanau et al., 2014).
○ bidirectional GRUs
○ AdaDelta
○ train all systems for 500,000 iterations, with validation every 5,000 steps,
best single model from validation is used.
○ use ℓ2 regularization (α = 1e-5)
○ Dropout is used on the output layers, rate is 0.5.
○ beam size is 10.
● corpus
○ English-German
■ 4.4M segments from the Europarl and CommonCrawl
○ English-French
■ 4.9M segments from the Europarl and CommonCrawl
○ English-Portuguese
■ 28.5M segments from the Europarl, JRC-Aquis and OpenSubtitles
● subword
○ they created a shared subword representation for each language pair
by extracting a vocabulary of 80,000 symbols from the concatenated source and target data.
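The same settings, collected into a config sketch for reference (field names are illustrative, not the authors' actual code; values are from this slide):

```python
nmt_config = {
    "encoder": "bidirectional_gru",
    "attention": "bahdanau",           # Bahdanau et al., 2014
    "optimizer": "adadelta",
    "train_iterations": 500_000,
    "validate_every": 5_000,           # keep best single validation model
    "l2_alpha": 1e-5,
    "output_dropout": 0.5,
    "beam_size": 10,
    "shared_subword_vocab": 80_000,    # per pair, src+tgt concatenated
}
```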
Experiments: Pick-Revise for Interactive PE
● Pick-Revise is an interaction cycle for MT Post-Editing.
● they modify this experiment:
○ the user only provides sequences of up to three words
which are missing from the hypothesis, taken from the reference.
○ they run four cycles.
○ strict setting:
the complete phrase must be missing from the hypothesis.
○ relaxed setting:
only the first word must be missing.
○ when a 3-token phrase cannot be found,
they back off to 2-token phrases, then to single tokens as constraints.
○ If a hypothesis already matches the reference, no constraints are added.
fig1: PRIMT: A Pick-Revise Framework for Interactive Machine Translation. Cheng et al., NAACL HLT 2016
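A sketch of how one cycle's constraint could be selected under these rules, simulating the user (helper names are illustrative; both inputs are token lists):

```python
def contains(seq, sub):
    """True if `sub` occurs as a contiguous subsequence of `seq`."""
    return any(seq[i:i + len(sub)] == sub
               for i in range(len(seq) - len(sub) + 1))

def pick_constraint(hyp, ref, strict=True):
    """Pick one constraint for the next cycle: the first reference n-gram
    (n = 3, backing off to 2, then 1) that is missing from the hypothesis."""
    for n in (3, 2, 1):
        for i in range(len(ref) - n + 1):
            phrase = ref[i:i + n]
            if strict:
                missing = not contains(hyp, phrase)  # whole phrase missing
            else:
                missing = phrase[0] not in hyp       # relaxed: first word only
            if missing:
                return phrase
    return None      # hypothesis already matches the reference: no constraint
```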
Result: Pick-Revise for Interactive PE
Experiments: Domain Adaptation via Terminology
● The requirement for use of domain-specific terminologies is common in real
world applications of MT.
→ provide the terms as constraints.
● target domain data
○ Autodesk Post-Editing corpus
↑ focused upon software localization, very different from the training data.
● Extract Terminology
○ divide the corpus into approximately 100,000 training sentences and 1,000 test segments.
automatically generate a terminology by computing PMI between source and target n-grams in
the training set.
○ extract all n-grams of length 2-5 as terminology
candidates.
○ filter the candidates to only include pairs
whose PMI is >= 0.9 and where both src and tgt occur at least 5 times.
→ this yields the terminology set.
(PMI normalized to [-1, +1])
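A sketch of this extraction pipeline. The slide only says the PMI is normalized to [-1, +1], so this uses the standard normalized PMI (npmi); the all-pairs n-gram pairing is kept naive for clarity and would need pruning at scale.

```python
import math
from collections import Counter
from itertools import product

def ngrams(tokens, min_n=2, max_n=5):
    """All n-grams of length 2-5 in one segment (as a set: presence counts)."""
    return {tuple(tokens[i:i + n])
            for n in range(min_n, max_n + 1)
            for i in range(len(tokens) - n + 1)}

def extract_terminology(parallel, min_count=5, npmi_threshold=0.9):
    """`parallel` is a list of (src_tokens, tgt_tokens) training segments."""
    src_cnt, tgt_cnt, joint_cnt = Counter(), Counter(), Counter()
    for src, tgt in parallel:
        s, t = ngrams(src), ngrams(tgt)
        src_cnt.update(s)
        tgt_cnt.update(t)
        joint_cnt.update(product(s, t))       # naive: all src x tgt pairs

    total = len(parallel)
    terminology = {}
    for (s, t), c in joint_cnt.items():
        if src_cnt[s] < min_count or tgt_cnt[t] < min_count:
            continue
        p_s, p_t, p_st = src_cnt[s] / total, tgt_cnt[t] / total, c / total
        if p_st >= 1.0:                       # degenerate: in every segment
            continue
        # normalized PMI: log(p(s,t) / (p(s) p(t))) / -log p(s,t), in [-1, +1]
        npmi = math.log(p_st / (p_s * p_t)) / -math.log(p_st)
        if npmi >= npmi_threshold:
            terminology[s] = t
    return terminology
```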
Results: Domain Adaptation via Terminology
● To confirm that the improvement is caused
by GBS’s placement of the terms,
they evaluate other insertion methods:
○ randomly inserting terms into the baseline output
○ prepending terms to the baseline output
● even an automatically created terminology
yields improvements.
○ manually created domain terminologies
are likely to lead to greater gains.
(conceptual drawing for the previous slide: for each test-set segment, when a
terminology phrase matches the src side, the corresponding tgt phrase is added
as a constraint for that segment)
Output Analysis
16
phrases added as constraints have global effects upon
translation quality.
Related Work
● Most related work has presented modifications of SMT systems for specific
use cases (focusing on prefixes and suffixes).
○ The largest body of work considers Interactive Machine Translation (IMT)
● some attention has also been given to SMT decoding with multiple lexical
constraints. (Cheng et al.,2016)
○ their approach relies upon the phrase segmentation provided by the SMT system.
○ the decoding algorithm can only make use of constraints that match phrase boundaries.
In contrast, the GBS approach decodes at the token level.
● “To the best of our knowledge, ours is the first work which considers
general lexically constrained decoding for any model which outputs
sequences, without relying upon alignments between input and output, and
without using a search organized by coverage of the input.”
Conclusion
● Lexically constrained decoding is a flexible way to incorporate arbitrary
subsequences into the output of any model that generates output sequences
token-by-token.
● in interactive translation, user inputs can be used as constraints,
generating a new output each time a user fixes an error.
○ such a workflow can provide a large improvement in translation quality.
● using a domain-specific terminology to generate target-side constraints
○ a general-domain model can be adapted to a new domain without any retraining
● future work
○ evaluate GBS with models outside of MT, such as automatic summarization, image captioning
or dialog generation
○ introduce new constraint-aware models (via secondary attention mechanisms over lexical
constraints)
References
● their implementation of GBS
○ https://github.com/chrishokamp/constrained_decoding
● Kyoto University Participation to WAT 2016. Cromieres et al., WAT2016
● PRIMT: A Pick-Revise Framework for Interactive Machine Translation. Cheng
et al., NAACL HLT 2016
