
Searching for the best translation combination



  1. Searching for the Best Translation Combination
     Matīss Rikters, University of Latvia
     The 7th International Conference Human Language Technologies - the Baltic Perspective
     Riga, Latvia, October 6, 2016
  2. Contents
     – Hybrid Machine Translation
     – Multi-System Hybrid MT
     – Simple combining of translations
       • Combining whole translations
       • Combining translations of sentence chunks
     – Combining translations of linguistically motivated chunks
     – Searching for the best translation combination
     – Other work
     – Future plans
  3. Hybrid Machine Translation
     – Statistical rule generation: rules for RBMT systems are generated from training corpora
     – Multi-pass: process data through RBMT first, and then through SMT
     – Multi-system hybrid MT: multiple MT systems run in parallel
  4. Multi-System Hybrid MT
     Related work:
     – SMT + RBMT (Ahsan and Kolachina, 2010)
     – Confusion networks (Barrault, 2010), extended with a neural network model (Freitag et al., 2015)
     – SMT + EBMT + TM + NE (Pal et al., 2014)
     – Recursive sentence decomposition (Mellebeek et al., 2006)
  5. Combining Translations
     Combining whole translations:
     – Translate the full input sentence with multiple MT systems
     – Choose the best translation as the output
     Combining translations of sentence chunks:
     – Split the sentence into smaller chunks
       • The chunks are the top-level subtrees of the syntax tree of the sentence
     – Translate each chunk with multiple MT systems
     – Choose the best translated chunks and combine them
  6. Choose the best candidate
     KenLM (Heafield, 2011) calculates probabilities based on the observed entry with the longest matching history $w_f^n$:

     $$p(w_n \mid w_1^{n-1}) = p(w_n \mid w_f^{n-1}) \prod_{i=1}^{f-1} b(w_i^{n-1})$$

     where the probability $p(w_n \mid w_f^{n-1})$ and the backoff penalties $b(w_i^{n-1})$ are given by an already-estimated language model. Perplexity is then calculated from this probability: given an unknown probability distribution $p$ and a proposed probability model $q$, the model is evaluated by how well it predicts a separate test sample $x_1, x_2, \ldots, x_N$ drawn from $p$:

     $$PP = 2^{-\frac{1}{N} \sum_{i=1}^{N} \log_2 q(x_i)}$$
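     In code, this selection step can be done directly with the KenLM Python bindings. A minimal sketch, assuming a pre-trained model (the file name is a placeholder) and that the candidate with the lowest perplexity wins:

     ```python
     import kenlm  # Python bindings shipped with KenLM (Heafield, 2011)

     # Placeholder path: a model estimated beforehand, e.g. with lmplz
     model = kenlm.Model('lv.legal.5gram.binary')

     def best_candidate(candidates):
         """Pick the translation the LM finds most fluent (lowest perplexity)."""
         return min(candidates, key=model.perplexity)
     ```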
  7. Whole translations
     Choosing the best candidate: a 5-gram language model trained with
     – KenLM
     – the JRC-Acquis corpus v. 3.0 (Steinberger et al., 2006): 1.4 million Latvian legal-domain sentences
     – Sentences are scored with the query program that comes with KenLM
     Test data:
     – 1581 random sentences from the JRC-Acquis corpus
     – Also tested on the ACCURAT balanced evaluation corpus of 512 general-domain sentences (Skadiņš et al., 2010), but the results were not as good
  8. Chunks
     Simple:
     – Berkeley Parser (Petrov et al., 2006)
     – Sentences are split into chunks from the top-level subtrees of the syntax tree
     Linguistically motivated (sketched in code below):
     – Traverse the syntax tree bottom-up, from right to left
     – Add a word to the current chunk if
       • the current chunk is not too long (sentence word count / 4)
       • the word is non-alphabetic or only one symbol long
       • the word begins a genitive phrase («of »)
     – Otherwise, initialize a new chunk with the word
     – If chunking results in too many chunks, repeat the process, allowing more (than sentence word count / 4) words in a chunk
     Changes in the MT API systems:
     – LetsMT API swapped for the Hugo.lv API
     – Added the Yandex API
     12-gram LM trained with:
     – the DGT-Translation Memory corpus (Steinberger, 2011): 3.1 million Latvian legal-domain sentences
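     A compact sketch of the linguistically motivated rules, operating on a flat token list rather than the Berkeley Parser tree for brevity; the precedence of the three conditions, the use of «of» as the genitive marker, and the max_chunks threshold are assumptions read off the bullets above, not the author's exact implementation:

     ```python
     def linguistic_chunks(words, limit):
         """Right-to-left greedy chunking; a sketch of the slide's rules."""
         chunks, current = [], []
         for word in reversed(words):              # traverse right to left
             keep = (len(current) < limit          # chunk is not too long
                     or not word.isalpha()         # non-alphabetic token
                     or len(word) == 1             # single-symbol token
                     or word.lower() == 'of')      # genitive phrase marker
             if keep and current:
                 current.insert(0, word)
             else:
                 if current:
                     chunks.insert(0, current)
                 current = [word]
         if current:
             chunks.insert(0, current)
         return [' '.join(chunk) for chunk in chunks]

     def chunk_sentence(words, max_chunks=6):
         """Re-run with a relaxed limit when chunking yields too many chunks."""
         limit = max(1, len(words) // 4)           # sentence word count / 4
         chunks = linguistic_chunks(words, limit)
         while len(chunks) > max_chunks:
             limit += 1                            # allow more words per chunk
             chunks = linguistic_chunks(words, limit)
         return chunks
     ```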
  9. Whole translations
     [Workflow diagram] Sentence tokenization → translation with online MT APIs (Google Translate, Bing Translator, LetsMT) → selection of the best translation → output
  10. Chunks
      [Workflow diagram] Sentence tokenization → syntactic analysis → sentence chunking → translation with online MT APIs (Google Translate, Bing Translator, LetsMT) → selection of the best chunks → sentence recomposition → output
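      The same diagram as a code skeleton. This is a sketch only: the MT system clients, the parser wrapper, the chunker, and the LM scorer are assumed interfaces, not real API calls:

      ```python
      def hybrid_translate(sentence, systems, parse, chunk, lm_score):
          """Chunk-level hybrid MT pipeline from the diagram above.

          systems  - mapping of system name to a translate(text) callable
                     (clients for Google Translate, Bing Translator, LetsMT)
          parse    - syntactic analysis, e.g. a Berkeley Parser wrapper
          chunk    - sentence chunking (see the earlier sketch)
          lm_score - scorer where a lower value means a more fluent string
          """
          tree = parse(sentence)                       # syntactic analysis
          best_chunks = []
          for part in chunk(tree):                     # sentence chunking
              candidates = [translate(part) for translate in systems.values()]
              best_chunks.append(min(candidates, key=lm_score))
          return ' '.join(best_chunks)                 # sentence recomposition
      ```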
  11. Sentence chunking
  12. Example sentence
      Simple chunks:
      • Recently
      • there
      • has been an increased interest in the automated discovery of equivalent expressions in different languages
      • .
      Linguistically motivated chunks:
      • Recently there has been an increased interest
      • in the automated discovery of equivalent expressions
      • in different languages .
  13. Example sentence
      Recently there has been an increased interest in the automated discovery of equivalent expressions in different languages .
  14. Example sentence
  15. Whole translations: May 2015 results (Rikters, 2015)
      The hybrid selection columns give the share of output taken from each system.
      System                          BLEU    Google   Bing     LetsMT   Equal
      Google Translate                16.92   100%     -        -        -
      Bing Translator                 17.16   -        100%     -        -
      LetsMT                          28.27   -        -        100%     -
      Hybrid Google + Bing            17.28   50.09%   45.03%   -        4.88%
      Hybrid Google + LetsMT          22.89   46.17%   -        48.39%   5.44%
      Hybrid LetsMT + Bing            22.83   -        45.35%   49.84%   4.81%
      Hybrid Google + Bing + LetsMT   21.08   28.93%   34.31%   33.98%   2.78%
  16. Simple chunks: September 2015 results (Rikters and Skadiņa, 2016 (1))
      BLEU is given for whole-translation and simple-chunk hybrids; the selection columns give the share taken from each system.
      System                    BLEU (whole)   BLEU (chunks)   Google   Bing   LetsMT
      Google Translate          18.09          -               100%     -      -
      Bing Translator           18.87          -               -        100%   -
      LetsMT                    30.28          -               -        -      100%
      Simple Chunks G + B       18.73          21.27           74%      26%    -
      Simple Chunks G + L       24.50          26.24           25%      -      75%
      Simple Chunks L + B       24.66          26.63           -        24%    76%
      Simple Chunks G + B + L   22.69          24.72           17%      18%    65%
  17. Linguistically motivated chunks: January 2016 results (Rikters and Skadiņa, 2016 (2))
      The first row gives each individual system's BLEU; the selection columns give the share taken from each system.
      System                            BLEU    Equal    Bing     Google   Hugo     Yandex
      Individual systems                -       -        17.43    17.73    17.14    16.04
      Whole translations G + B          17.70   7.25%    43.85%   48.90%   -        -
      Whole translations G + B + H      17.63   3.55%    33.71%   30.76%   31.98%   -
      Simple Chunks G + B               17.95   4.11%    19.46%   76.43%   -        -
      Simple Chunks G + B + H           17.30   3.88%    15.23%   19.48%   61.41%   -
      Linguistic Chunks G + B           18.29   22.75%   39.10%   38.15%   -        -
      Linguistic Chunks G + B + H + Y   19.21   7.36%    30.01%   19.47%   32.25%   10.91%
  18. Searching for the best
      The main differences from the chunk combination approach:
      • the manner of scoring chunks with the LM and selecting the best translation
      • multi-threaded computing that allows the process to run on all available CPU cores in parallel
      • still very slow
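      A sketch of the search itself, assuming the chunk candidates are already translated. With CPython threads, this only runs in parallel if the LM scorer releases the GIL, so treat the parallelism here as illustrative of the multi-core idea rather than the author's implementation:

      ```python
      import itertools
      from concurrent.futures import ThreadPoolExecutor

      def full_search(chunk_translations, lm_score, workers=4):
          """Score every recombination of chunk translations, keep the best.

          chunk_translations holds one list of candidate strings per chunk,
          so 4 systems and n chunks give 4**n combinations - hence 'very slow'.
          """
          combos = [' '.join(parts)
                    for parts in itertools.product(*chunk_translations)]
          with ThreadPoolExecutor(max_workers=workers) as pool:
              scores = list(pool.map(lm_score, combos))
          best = min(range(len(combos)), key=scores.__getitem__)
          return combos[best]
      ```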
  19. Searching for the best
      Chunks   Combinations   Count: Legal   Count: General   Percentage: Legal   Percentage: General
      1        4              210            16               13.28%              3.13%
      2        16             178            78               11.26%              15.23%
      3        64             262            131              16.57%              25.59%
      4        256            273            127              17.27%              24.80%
      5        1024           275            94               17.39%              18.36%
      6        4096           201            47               12.71%              9.18%
      7        16384          96             11               6.07%               2.15%
      8        65536          49             6                3.10%               1.17%
      9        262144         37             2                2.34%               0.39%
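      The Combinations column is simply the size of the search space: with four MT systems and $c$ chunks there are $4^c$ ways to recombine the outputs, so a nine-chunk sentence already requires scoring $4^9 = 262144$ candidate translations with the LM.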
  20. Searching for the best
      [Charts: legal domain and general domain results]
  21. Searching for the best: May 2016 results
      System              BLEU (Legal)   BLEU (General)
      Full-search         23.61          14.40
      Linguistic chunks   20.00          17.27
      Bing                16.99          17.43
      Google              16.19          17.72
      Hugo                20.27          17.13
      Yandex              19.75          16.03
  22. Neural Network LM
      [Chart: perplexity and hybrid BLEU (BLEU-HY) per training epoch, with a linear trend line for BLEU-HY]
  23. Neural Network LM
      [Chart: perplexity and BLEU per training epoch, with a linear trend line for BLEU]
  24. Future work
      More enhancements for the chunking step:
      – Add special processing of multi-word expressions (MWEs)
      Try out other types of LMs:
      – POS tag + lemma
      – Recurrent Neural Network Language Model (Mikolov et al., 2010)
      – Continuous Space Language Model (Schwenk et al., 2006)
      – Character-Aware Neural Language Model (Kim et al., 2015)
      Choose the best translation candidate with MT quality estimation:
      – QuEst++ (Specia et al., 2015)
      – SHEF-NN (Shah et al., 2015)
      Handle MWEs in neural machine translation systems
  25. Related publications
      • Matīss Rikters. "Multi-system machine translation using online APIs for English-Latvian." ACL-IJCNLP 2015.
      • Matīss Rikters and Inguna Skadiņa. "Syntax-based multi-system machine translation." LREC 2016.
      • Matīss Rikters and Inguna Skadiņa. "Combining machine translated sentence chunks from multiple MT systems." CICLing 2016.
      • Matīss Rikters. "K-translate - interactive multi-system machine translation." Baltic DB&IS 2016.
  26. Code on GitHub
      http://ej.uz/ChunkMT
      http://ej.uz/SyMHyT
      http://ej.uz/MSMT
      http://ej.uz/chunker
  27. References
      • Ahsan, A., and P. Kolachina. "Coupling Statistical Machine Translation with Rule-based Transfer and Generation." AMTA - The Ninth Conference of the Association for Machine Translation in the Americas, Denver, Colorado, 2010.
      • Barrault, Loïc. "MANY: Open source machine translation system combination." The Prague Bulletin of Mathematical Linguistics 93 (2010): 147-155.
      • Heafield, Kenneth. "KenLM: Faster and smaller language model queries." Proceedings of the Sixth Workshop on Statistical Machine Translation. Association for Computational Linguistics, 2011.
      • Kim, Yoon, et al. "Character-aware neural language models." arXiv preprint arXiv:1508.06615 (2015).
      • Mellebeek, Bart, et al. "Multi-engine machine translation by recursive sentence decomposition." 2006.
      • Mikolov, Tomas, et al. "Recurrent neural network based language model." INTERSPEECH. Vol. 2. 2010.
      • Pal, Santanu, et al. "USAAR-DCU Hybrid Machine Translation System for ICON 2014." The Eleventh International Conference on Natural Language Processing, 2014.
      • Petrov, Slav, et al. "Learning accurate, compact, and interpretable tree annotation." Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 2006.
      • Rikters, Matīss, and Inguna Skadiņa. "Syntax-based multi-system machine translation." LREC 2016.
      • Rikters, Matīss, and Inguna Skadiņa. "Combining machine translated sentence chunks from multiple MT systems." CICLing 2016.
      • Schwenk, Holger, Daniel Déchelotte, and Jean-Luc Gauvain. "Continuous space language models for statistical machine translation." Proceedings of the COLING/ACL Main Conference Poster Sessions. Association for Computational Linguistics, 2006.
      • Shah, Kashif, et al. "SHEF-NN: Translation quality estimation with neural networks." Proceedings of the Tenth Workshop on Statistical Machine Translation. 2015.
      • Skadiņš, Raivis, Kārlis Goba, and Valters Šics. "Improving SMT for Baltic languages with factored models." Proceedings of the Fourth International Conference Baltic HLT 2010, Frontiers in Artificial Intelligence and Applications (2010): 125-132.
      • Specia, Lucia, Gustavo Paetzold, and Carolina Scarton. "Multi-level translation quality prediction with QuEst++." ACL-IJCNLP 2015 System Demonstrations. 2015.
      • Steinberger, Ralf, et al. "DGT-TM: A freely available translation memory in 22 languages." arXiv preprint arXiv:1309.5226 (2013).
      • Steinberger, Ralf, et al. "The JRC-Acquis: A multilingual aligned parallel corpus with 20+ languages." arXiv preprint cs/0609058 (2006).
  28. Thank you!
