Panelists: Olga Beregovaya (Welocalize), Maxim Khalilov (Booking.com), Antonio Tejada (Capita), Tony O'Dowd (KantanMT), Carla Schelfhout (SDL)
In this session, with a clear focus on Machine Translation (MT) quality, we will discuss different ways to improve MT engines. Which engine do you use, and how do you measure improvement? What are the right metrics to evaluate MT quality for specific content types? How do you interpret and act on the evaluation results? It's fine when errors are labeled and analyzed, but how can that help improve your engine? Are there best practices available? And what about Neural MT — should we measure that differently? After the speakers share some use cases, these questions will be addressed in the break-out session.
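To ground the question of "the right metrics", here is a minimal sketch of one common family of automatic metrics: BLEU-style modified n-gram precision. This is a toy, self-contained implementation for illustration only — production evaluation would use a standard toolkit (e.g. sacreBLEU) with smoothing and standardised tokenisation.

```python
from collections import Counter
import math

def ngrams(tokens, n):
    """Count all n-grams of length n in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def toy_bleu(candidate, reference, max_n=4):
    """Toy sentence-level BLEU: geometric mean of clipped n-gram
    precisions times a brevity penalty. No smoothing, so any
    zero precision yields a score of 0."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        c_counts, r_counts = ngrams(cand, n), ngrams(ref, n)
        # Clip each candidate n-gram count by its count in the reference.
        overlap = sum(min(cnt, r_counts[g]) for g, cnt in c_counts.items())
        total = max(sum(c_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / max_n
    # Brevity penalty: punish candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(log_avg)
```

A perfect match scores 1.0, a near match scores between 0 and 1, and a translation sharing no words with the reference scores 0 — which is exactly why, as the session discussion suggests, surface metrics alone are a blunt instrument for engine improvement.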
2. TOPIC: How to pump up MT Quality
TAUS QE Summit – October 2016
3. Topics
Background
The need for Language Quality Review
Introducing Industry Standards
MQM – QTLaunchPad Project
DFKI, CNGL, University of Sheffield, Athena (with input from GALA, FIT)
Automating the Process of Quality Evaluation
KantanLQR
A cloud-based platform that engages professional translators in the development of KantanMT engines
4. Background
One of the biggest challenges in deploying Custom Machine Translation is measuring translation quality. How can you develop a formalised mechanism to determine translation quality? More importantly, how can you formalise the measurement of translation quality so that it delivers usable metrics and drives a deeper understanding of how your engine will perform in production?
5. Using Industry Standards
Multidimensional Quality Metrics (MQM)
European Commission-funded: DFKI, CNGL DCU, University of Sheffield, Athena (with input from GALA, FIT)
Focus: customised quality metrics for human and machine translation quality evaluation
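To make the idea of customised metrics concrete, here is a minimal sketch of MQM-style severity-weighted error scoring. The error categories, severity weights, and the penalty-per-word formula below are illustrative assumptions only — MQM deliberately leaves the typology and the weights configurable per project.

```python
# Example severity weights -- configurable in a real MQM scorecard.
SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

def mqm_score(errors, word_count, weights=SEVERITY_WEIGHTS):
    """Return a 0-100 quality score: 100 minus the severity-weighted
    error penalty per 100 words, floored at 0.

    `errors` is a list of (category, severity) annotations, e.g. the
    output of a human language-quality review pass.
    """
    penalty = sum(weights[severity] for _category, severity in errors)
    return max(0.0, 100.0 * (1 - penalty / word_count))

# Two minor errors and one major error in a 100-word sample:
# penalty = 1 + 1 + 5 = 7, so the score is 93.0.
errors = [("Fluency/Grammar", "minor"),
          ("Fluency/Spelling", "minor"),
          ("Accuracy/Mistranslation", "major")]
```

Because annotators label each error with a category as well as a severity, the same annotation data can be sliced per category — which is how labeled error analysis feeds back into targeted engine improvement.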