In the beginning there was rule-based machine translation, like Babelfish, which barely worked at all. Then came statistical machine translation, powering the likes of Google Translate, and all was good. Nowadays it's all about deep learning, and neural machine translation is the state of the art, with unmatched translation fluency. Let's dive into the internals of a neural machine translation system, explaining its principles and its advantages over past approaches.
2. Who we are
● Founded in 2001;
● Branches in Milan, Rome and London;
● Market leader in enterprise-ready solutions based on Open Source tech;
● Expertise:
○ Open Source
○ DevOps
○ Public and private cloud
○ Search
○ Big Data, and many more...
3. This presentation is Open Source (yay!)
https://creativecommons.org/licenses/by-nc-sa/3.0/
5. Statistical Machine Translation
Translating as recovering a ciphered message through probability laws:
1. Foreign language as a noisy channel
2. Language model and Translation model
3. Training (building the translation model)
4. Decoding (translating with the translation model)
6. Noisy channel model
Goal
Translate a sentence in a foreign language f into our language e:
The abstract model
1. Pretend e was transmitted over a noisy channel.
2. The channel garbled the sentence, and f was received.
3. Try to recover e by reasoning about:
a. how likely it is that e was the message, p(e) (source model)
b. how e gets garbled into f, p(f|e) (channel model)
The best translation is then e* = argmax_e p(e) · p(f|e).
7. Word choice and word reordering
P(f|e) cares about word choice, in any order.
● “It’s too late” → “Tardi troppo è” ✓
● “It’s too late” → “È troppo tardi” ✓
● “It’s too late” → “È troppa birra” ✗
P(e) cares about word order.
● “È troppo tardi” ✓
● “Tardi troppo è” ✗
9. Language model
P(e) comes from a language model, a machine that assigns scores to
sentences, estimating their likelihood.
1. Record every sentence ever said in English (1 billion?)
2. If the sentence "how's it going?" appears 76413 times in that database, then
we say:
p("how's it going?") = 76413 / 1,000,000,000
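The counting idea above can be sketched in a few lines. The corpus here is a made-up three-sentence stand-in for the slide's billion-sentence database:

```python
from collections import Counter

# Toy "record every sentence" language model: score = relative frequency.
corpus = [
    "how's it going?",
    "how's it going?",
    "it's too late",
]

counts = Counter(corpus)
total = sum(counts.values())

def p(sentence):
    """Probability of a sentence = its relative frequency in the corpus."""
    return counts[sentence] / total

print(p("how's it going?"))  # 2/3 ≈ 0.667
```

Real language models smooth these counts over n-grams, since most sentences never appear verbatim in any corpus.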
10. Translation model
Next we need to worry about P(f|e), the probability of a foreign string f given an
English string e.
This is called a translation model.
It boils down to computing alignments between source and target languages.
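Putting the two models together, decoding picks the candidate e that maximizes p(e) · p(f|e). A toy sketch with invented scores for the earlier "It's too late" examples:

```python
# Hypothetical scores: lm = p(e) from the language model,
# tm = p(f|e) from the translation model. All numbers are invented.
candidates = {
    "È troppo tardi": {"lm": 0.5,  "tm": 0.4},    # fluent and faithful
    "Tardi troppo è": {"lm": 0.01, "tm": 0.4},    # faithful, not fluent
    "È troppa birra": {"lm": 0.3,  "tm": 0.001},  # fluent, not faithful
}

# Noisy channel decision rule: argmax_e p(e) * p(f|e)
best = max(candidates, key=lambda e: candidates[e]["lm"] * candidates[e]["tm"])
print(best)  # È troppo tardi
```

Only the candidate that scores well under both models wins, which is exactly the division of labour described on slide 7.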
12. Training Data
A parallel corpus is a collection of texts, each of which is translated into one or
more languages other than the original.
EN                                    IT
Look at that!                         Guarda lì!
I've never seen anything like that!   Non ho mai visto nulla di simile!
That's incredible!                    È incredibile!
That's terrific.                      È eccezionale.
13. Computing alignments: Expectation Maximization
This algorithm iterates over the data,
progressively amplifying latent properties
of a system.
It converges to a local optimum
without any user supervision.
Example with a 2-sentence corpus: [diagram omitted]
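A minimal sketch of the EM iteration behind IBM Model 1 word alignment, on a made-up two-pair corpus (not the slide's example). The E-step distributes fractional alignment counts according to the current t(f|e) table; the M-step renormalizes:

```python
from collections import defaultdict

# Toy parallel corpus: (English sentence, Italian sentence) word lists.
pairs = [
    (["the", "house"], ["la", "casa"]),
    (["house"], ["casa"]),
]

# Translation table t(f|e), initialised uniformly.
t = defaultdict(lambda: 1.0)

for _ in range(10):
    count = defaultdict(float)  # expected counts c(f, e)
    total = defaultdict(float)  # expected counts c(e)
    # E-step: each foreign word spreads a fractional count over
    # the English words, proportionally to the current t(f|e).
    for es, fs in pairs:
        for f in fs:
            norm = sum(t[(f, e)] for e in es)
            for e in es:
                frac = t[(f, e)] / norm
                count[(f, e)] += frac
                total[e] += frac
    # M-step: re-estimate t(f|e) from the expected counts.
    for (f, e), c in count.items():
        t[(f, e)] = c / total[e]

# Because "house"/"casa" also occur alone, EM disambiguates:
# t("casa"|"house") and t("la"|"the") grow at every iteration.
```

No alignments are ever annotated by hand: the second sentence pair alone is enough evidence for the algorithm to pull the probabilities apart, which is the "no user supervision" point from the slide.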
14. Decoding
Now it’s time to decode our string encoded by the noisy channel.
Word alignments are leveraged to build a “space” for a search algorithm.
Translating is searching in a space of options.
16. Decoding in action
1. The algorithm builds the search space as a tree of options, sorted by p(e|f).
a. The search space is limited to a fixed size, named the "beam".
2. Options are expanded highest-probability first.
a. Reordering adds a penalty.
b. The language model penalizes each stage's output.
3. Translation stops when all source words are translated, or "covered".
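The steps above amount to a beam search. A minimal sketch with invented per-position options and log-probabilities (real decoders also handle reordering, which is omitted here):

```python
# Per source position: candidate target words with made-up log-probs.
options = {
    0: [("è", -0.1), ("sta", -1.0)],
    1: [("troppo", -0.2), ("molto", -1.2)],
    2: [("tardi", -0.1), ("ritardo", -1.5)],
}
beam = 2  # the fixed size that bounds the search space

hypotheses = [((), 0.0)]  # (words so far, cumulative log-prob)
for pos in range(len(options)):
    # Expand every partial translation with every option...
    expanded = [
        (words + (w,), score + lp)
        for words, score in hypotheses
        for w, lp in options[pos]
    ]
    # ...then prune to the `beam` highest-scoring hypotheses.
    hypotheses = sorted(expanded, key=lambda h: h[1], reverse=True)[:beam]

best_words, best_score = hypotheses[0]
print(" ".join(best_words))  # è troppo tardi
```

The decoder stops once every source position is covered; pruning to the beam keeps the tree from growing exponentially, at the cost of possibly discarding the true optimum.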
18. Neural machine translation
NMT is also based on probability, but with some differences:
● End-to-end training: no more separate translation and language models.
● A Markovian assumption instead of a naive Bayesian one: words move together.
If a sentence f of length n is a sequence of words w_1 … w_n, then p(f) is:
p(f) = p(w_1) · p(w_2 | w_1) · … · p(w_n | w_1, …, w_{n-1})
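Numerically, the chain rule is just a running product of conditional probabilities. All values below are invented for illustration:

```python
import math

# Made-up conditional probabilities for "è troppo tardi":
# p("è"), p("troppo" | "è"), p("tardi" | "è troppo")
cond = [0.2, 0.5, 0.6]

# p(f) = product of the per-word conditionals (chain rule)
p_sentence = math.prod(cond)
print(p_sentence)  # ≈ 0.06
```

In practice decoders sum log-probabilities instead of multiplying, to avoid numerical underflow on long sentences.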
19. Neural network review: feed-forward
Weighted links determine how strongly a neuron can influence its neighbours.
The deviation between outputs and expected values drives the rebalancing of the weights.
But a feed-forward network is not suitable for mapping the temporal dependencies
between words. We need an architecture that can explicitly model sequences.
22. Encoder - Decoder architecture
With a sentence f and its translation e, we model p(e|f)
(one single sequence-to-sequence network).
Languages are independent (vocabulary and domain), so
we can split it into 2 separate RNNs:
1. An encoder compresses f into c, a summary vector of the source.
2. A decoder emits e word by word; each new word depends on its history
and on c: p(e_i | e_1, …, e_{i-1}, c).
24. Summary vector as information bottleneck
A fixed-size representation degrades as sentence length increases.
This is because alignment learning operates on a many-to-many logic:
the gradient flows towards everybody for any alignment mistake.
Let's gate the gradient flow through a context vector, a weighted average of the
source hidden states (also known as "soft search" or "attention").
The weights are computed by a feed-forward network with softmax activation.
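One attention step can be sketched with NumPy. Here a plain dot product stands in for the small feed-forward scoring network, and all states are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoder hidden states: 5 source words, dimension 4.
H = rng.normal(size=(5, 4))
# Hypothetical current decoder state.
g = rng.normal(size=4)

# Alignment scores (dot product as a stand-in for the scoring network),
# turned into weights with a softmax.
scores = H @ g
weights = np.exp(scores) / np.exp(scores).sum()

# Context vector: weighted average of the source hidden states.
context = weights @ H
```

Each decoder step recomputes the weights, so the context vector can focus on a different part of the source sentence every time, which is exactly what the diagrams on the following slides show.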
25. Attention model
[Diagram, repeated with different weights across slides 25-29: the source sentence "THE WAITER TOOK THE PLATES" (encoder states h) aligned with the target sentence "IL CAMERIERE PRESE I PIATTI" (decoder states g). As each target word is generated, the softmax weights over the five source states shift, with the dominant weight (0.7) moving to a different source position at every step; the context vector is the weighted sum (+) of the source states.]
30. Neural domain adaptation
Sometimes we want our network to adopt a
particular style, but we don't have enough data.
Solution: adapt an already trained network.
1. First, train the full network with general data to
obtain a general model.
2. Then, train the last layers on the new data so that
they influence the style of the output.
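A minimal NumPy sketch of the idea: keep the general layers frozen and update only the last layer on the in-domain batch. The two-layer network and the data are toy placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-trained 2-layer network.
W1 = rng.normal(size=(4, 8))  # general layer: frozen during adaptation
W2 = rng.normal(size=(8, 3))  # last layer: fine-tuned on in-domain data

x = rng.normal(size=(10, 4))  # small in-domain batch
y = rng.normal(size=(10, 3))  # in-domain targets

def mse():
    return float(((np.tanh(x @ W1) @ W2 - y) ** 2).mean())

lr = 0.01
W1_before = W1.copy()
loss_start = mse()
for _ in range(20):
    h = np.tanh(x @ W1)                   # frozen general features
    grad_W2 = h.T @ (h @ W2 - y) / len(x)  # gradient of the MSE w.r.t. W2
    W2 -= lr * grad_W2                     # only the last layer moves
loss_end = mse()
```

Because W1 never moves, the general knowledge is preserved while the output layer adapts to the new style, which is why this works even with little data.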
31. Zero shot translation: Google Neural MT
We can use a single system for
multilingual MT: just feed all the different
parallel corpora into the same system.
Tag the input data with the desired target
language: the NMT system will translate into
that language!
As a side effect, we build an internal
"shared knowledge representation".
This enables translation between unseen
language pairs.
[Diagram: a single GNMT system is fed French, English, German and Italian parallel data. Tagged inputs "<2IT> I am here" and "<2DE> je suis ici" produce "Sono qui" and "Ich bin hier". Having been trained on EN → IT and FR → DE, the system is asked for the unseen pair EN → DE.]
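The data preparation is the whole trick: every source sentence gets a target-language token prepended. A sketch using the slide's example sentences:

```python
# Example sentence pairs: (source lang, target lang, source, target).
parallel = [
    ("EN", "IT", "I am here", "Sono qui"),
    ("FR", "DE", "je suis ici", "Ich bin hier"),
]

# Prepend a <2xx> token naming the desired target language;
# the model learns to treat it as a translation instruction.
training_data = [
    (f"<2{tgt}> {src_sent}", tgt_sent)
    for src, tgt, src_sent, tgt_sent in parallel
]

print(training_data[0][0])  # <2IT> I am here
```

At inference time the same token steers the output language, so feeding "<2DE> I am here" requests the EN → DE pair even though no such pair was ever seen in training.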
33. Unsupervised NMT
We can translate even without parallel data, using just two monolingual corpora.
Each corpus builds a latent semantic space. Similar languages build similar spaces.
Translation becomes a geometrical mapping between affine latent semantic spaces.
[Diagram: an auto-encoder maps the source sentence x through an encoder into the latent space z and back through a decoder to the reconstruction x̂; a second decoder maps the same latent space z to the target sentence y.]
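If the two latent spaces really are related by an affine (here, orthogonal) map, that map can be recovered without any parallel data. A NumPy sketch on synthetic embeddings, using the orthogonal Procrustes solution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "embeddings" for two languages whose latent spaces are
# related by an unknown rotation Q (the affine-mapping assumption).
X = rng.normal(size=(50, 4))                 # source-space vectors
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))  # hidden ground-truth rotation
Y = X @ Q                                     # target-space vectors

# Orthogonal Procrustes: the orthogonal W minimizing ||X W - Y||
# is U Vt, where U S Vt is the SVD of X^T Y.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

# W recovers the hidden rotation, mapping source space onto target space.
```

Real unsupervised MT systems learn this mapping adversarially or from shared subwords rather than from known correspondences, but the geometric intuition is the same: similar languages build similar spaces, so a global map between them exists.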