Opening talk: Quality Evaluation at the EU - Ingemar Strandvik (European Commission)
1. Quality Evaluation at the EC
Ingemar Strandvik, Quality Manager,
Directorate-General for Translation (DGT),
European Commission
2. Towards a more structured approach to
Quality, Quality Assurance and Quality Evaluation
3. Towards a more structured approach to Q and QA
Resource constraints: 'effectiveness and efficiency'; 'do more
with less'; 'do the right things and do them right', …
Increased use of outsourcing: need to uphold the 'quality
chain'; need for consistent treatment and Q evaluation
Extension and deepening of European cooperation: from
legislation to web texts; drafting-quality issues on the
political agenda since 1992 (Transparency, Better Regulation …)
4. Quality Management Framework: definition of Q
Definition of translation quality as "fitness for purpose",
"fulfilling needs and expectations of requesters, end users and
other relevant stakeholders".
Quality as compliance with appropriate specifications: always
high quality, understood not in absolute terms but as the right
mix of quality criteria for the text at hand (cf. "good-enough quality").
5. Translation Quality Guidelines: purposes and risks
A. Legal documents
B. Policy and administrative documents
C. Information for the public
D. Input for EU legislation, policy formulation and
administration
6. Outsourcing as a 'real resource' – How to integrate
outsourcing without breaking the 'quality chain'?
Framework contracts:
Q/price ratio 70/30;
Q evaluation 10% sample, 5 marks, feedback;
"standard" error categorization, two severity levels (high and
low relevance) based on effect on usability,
dynamic ranking, …
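The 70/30 quality/price ratio can be sketched as a weighted award score. A minimal illustration, assuming a 0–100 quality mark and a common lowest-bid-gets-full-price-marks tender formula; the function name and the exact arithmetic are assumptions for illustration, not DGT's actual award method:

```python
def award_score(quality_mark: float, price: float, best_price: float) -> float:
    """Weighted award score with quality at 70% and price at 30%.

    Assumes quality_mark is on a 0-100 scale and that the cheapest
    bid receives the full price score (an illustrative convention).
    """
    quality_score = 0.70 * quality_mark               # quality weighted 70%
    price_score = 0.30 * 100 * (best_price / price)   # price weighted 30%
    return quality_score + price_score

# Example: quality mark 85/100, bid 0.10 EUR/word, cheapest bid 0.08 EUR/word
print(round(award_score(85, 0.10, 0.08), 1))  # → 83.5
```

Under such a weighting, a contractor with a high quality mark can outrank a cheaper but weaker bid, which is the point of putting quality at 70%.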
7. Challenge: Consistent Q Evaluation
Common understanding of Q, error categories and severity
levels? 1500 evaluators: need for guidelines, training,
validation, monitoring.
We assess compliance with the quality requirements of the
tender specifications:
the text should be usable as it stands (the usability criterion):
no further intervention needed from DGT's side
8. Consistent Q evaluation
Definitions of revision and review: check suitability for
intended purpose (ISO 17100)
Revision (just as translation) requires domain competence,
text-type competence, etc. (ISO 17100)
If this competence is not present,
the understanding will not be the same, and
inter-rater reliability will suffer.
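Inter-rater reliability can also be measured. A minimal sketch using Cohen's kappa for two evaluators marking the same sample of texts; the data and the choice of kappa are illustrative assumptions, not part of DGT's procedure:

```python
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Cohen's kappa: agreement between two raters assigning
    categorical marks (e.g. a 5-mark scale), corrected for the
    agreement expected by chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of identical marks
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal mark distribution
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Two evaluators marking ten translations on a 5-mark scale
a = [5, 4, 4, 3, 5, 2, 4, 3, 5, 4]
b = [5, 4, 3, 3, 5, 2, 4, 4, 5, 4]
print(round(cohens_kappa(a, b), 2))  # → 0.71
```

Values well below 1 even with 80% raw agreement show why shared guidelines and training matter: kappa penalises agreement that chance alone would produce.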
9. QE tools, metrics and objectivity
Manual or tool-based evaluation?; holistic or analytic?;
qualitative or quantitative? Metrics?
LISA QA, DQF, TQA, MQM
Subjective assessments fed into a tool are not more objective
because they are calculated objectively…
Inter-rater reliability requires common understanding
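An analytic (error-count) metric in the spirit of LISA QA or MQM can be sketched as a severity-weighted error rate. The category names, severity weights and normalisation below are illustrative assumptions, not the values used in the framework contracts:

```python
# Illustrative analytic scoring: errors weighted by severity and
# normalised per 1000 words (lower is better). The weights are an
# assumption for this sketch, not DGT's or MQM's official values.
SEVERITY_WEIGHTS = {"low": 1, "high": 5}

def analytic_score(errors: list, word_count: int, per: int = 1000) -> float:
    """Weighted error points per `per` words for a list of
    (category, severity) annotations."""
    penalty = sum(SEVERITY_WEIGHTS[severity] for _, severity in errors)
    return penalty * per / word_count

sample = [("terminology", "low"), ("accuracy", "high"), ("style", "low")]
print(analytic_score(sample, word_count=1400))  # → 5.0
```

The calculation itself is mechanical, which is exactly the slide's caveat: the number is only as objective as the subjective severity judgements fed into it.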
10. Challenge: over-rating, overdoing the QC and wasting
taxpayers' money?
90% rated 'very good' or 'good', but 66% further quality-
controlled internally, despite being 'usable as they stand':
Overly lenient marking practice?
Natural result of the risk assessment (ABCD)?
Is a 10% sample enough for a reliable assessment of
whether the text is usable as it stands?
Specifications (translation briefs), Feedback; Communication,
Anonymity, Transparency; Granularity and traceability…
11. Cost-efficiency: Is there a breaking point,
… where we distort professional working methods;
… where the required quality can no longer be guaranteed;
(equally authentic versions of EU law, a "constitutional right",
the price of democracy, cf. elections);
… where the reliability of the tools is jeopardised because we
start feeding them with unrevised texts and where this may
affect MT and future productivity?
12. Conclusion: Benchmarking – use of standards
Benchmarking essential, but with what and whom?
KPIs essential, but also the purpose of what we do
Tools are essential, extremely useful, as tools…
ISO 17100, EN 15038, ASTM F2575 for work organisation and
professional working methods
MQM for Q Evaluation, towards a common understanding