This document proposes a peer-review process to improve the quality of linguistic validation for clinical outcome assessment (COA) instruments. It describes challenges with the current linguistic validation process and proposes conducting a second, independent review. A method is presented for auditing translated instruments that involves categorizing issues, evaluating their impact on measurement, and resolving the issues found. The goal is a systematic way to identify translation errors or cultural adaptation problems that could affect outcome measurement. Applied to existing translations, it can help prevent issues in ongoing clinical trials.
1. Peer-Review process to improve the Quality of
Linguistic Validation Process of COAs Instruments for
Clinical Trials
S. Zaragoza Domingo
Neuropsychological Research Organization s.l.
Presented at COMET IV Talks
Amsterdam, 10-11 November 2016
2. Disclosures
The author wants to acknowledge Cogstate Ltd. for the
information provided and support on the production of this
research work.
The author and collaborators report conflicts of interest, as they were Cogstate employees at the time of the analysis.
3. Content
1. Challenges of Linguistic Validation Process
2. Usual LV Review Process
3. Proposal of a Second Review
4. Steps for Auditing Translated COA Instruments
5. Applications
6. Concluding Remarks
4. 1. Challenges of Linguistic Validation (LV) Process
Type of COAs and Abbreviations:
Abbreviation | Name                         | Reporter             | Type
PRO          | Patient Reported Outcome     | Patient              | Scales, inventories, etc.
ClinRO       | Clinician Reported Outcome   | Clinician            | Assessment scales, interviews
ProxyRO      | Proxy Reported Outcome       | Relatives, caregiver | Scales, interviews
PerfRO       | Performance Reported Outcome | Clinician            | Cognitive testing (paper & pencil, computerized)
5. 1. Challenges for LV
• Time pressure
• Budget limitations
• Availability of local experts
• Availability of a concept list
• Agreement on the steps to be done (i.e. debriefings, pilot testing, etc.)
• Different equivalence criteria
• Classification of primary vs. secondary clinical endpoints (PROs, ClinROs, ProxyROs, PerfROs)
• Precision concerns on the measurement validity of COA local versions
6. 2. Usual LV Process – Concepts
Linguistic versus psychometric validation:
LV Steps                                                      | Linguistic Validation    | Psychometric Validation
Translation & cultural adaptation                             | X                        | X
Cognitive debriefing                                          | Limited to PROs, ProxyRO | X
Pilot testing                                                 | None                     | Depending on authors
Psychometric properties (internal consistency, test-retest reliability, external validity, etc.) | None | X
7. 2. Usual LV Process
Standard LV Process Diagram*, usually from English to another language.
*Reproduced with permission of the authors at MAPI Institute.
8. 2. Usual LV Process
Standard LV Process Diagram*, usually from English to another language, highlighting where the first reviewer is involved.
*Reproduced with permission of the authors at MAPI Institute.
9. 2. Usual Review Process – LV Deliverables
Final instrument: translated, formatted and customized (i.e. scale/test). The review work varies depending on the process decided beforehand.
Deliverables:
• Certification of Translation
• Reconciliation Report (also Back Translation Report, BT)
Documentation related to the LV process — other materials to archive for future reference:
• Concept lists
• List of generated stimuli (PerfRO)
• COA instrument manual and instructions
• Background documentation, such as related published papers
• Cognitive debriefing materials (scripts, results)
• Final LV report (may include part of the additional documentation)
10. 2. Usual Review Process
Are translations good enough for clinical research?
From: Franz Xaver Messerschmidt – Character Heads
11. 2. Usual LV Process – Issues
Most common reported issues:
• Translations are too literal; unnatural sentence structure
• Translation does not fully adhere to the concept definition
• Missing sentence segments or even items
• Inaccuracy in the use of local terminology
• Inadequacy of stimuli used in cognitive testing (verbal, visual, etc.)
• Insufficient language adaptation to the study population (age ranges or educational level)
• Confusing translation of instructions to subjects/raters
12. 3. Proposal of a Second Review – Justification
Limitations of the first review:
• Reviewers across countries can have different backgrounds, depending on training nuances in each speciality (psychiatrists, neurologists, psychologists, neuropsychologists, speech therapists, ...).
• Some countries are broad enough to have language variations within the same linguistic area.
• Language and terminology need to be standard enough to be valid for all users within the country.
13. 3. Proposal of a Second Review – Justification
In practice, an additional review is sometimes conducted once the translation is completed, but it is:
• Not independent: performed by the customer contracting the translation (pharma companies or local sponsor affiliates). Substantial modifications can be suggested following this review, while others are not accepted by the translation specialist.
• Not systematic: "last minute" reviews based mainly on linguistic equivalence rather than impact on outcome measurement.
Overall, a more systematic method is needed.
14. 3. Proposal of a Second Review – Implementation
To the usual deliverables (final instrument translated, formatted and customized, i.e. scale/test; Certification of Translation; Reconciliation Report, also Back Translation Report, BT), an audit or second review is added, performed by an independent local therapeutic-area specialist or COA tools expert. The review work varies depending on the process decided beforehand.
15. 3. Proposal of a Second Review – Metrics
The metrics were created during the audit of reconciliation reports for 10 COA instruments used in clinical trials, which were translated into 18 languages/dialects.
Observations were made by reviewers during:
• Review of back translations by staff (native US English speakers)
• Review of forward and back translations by local bilingual consultants
• Findings from site raters at study start-up
All observations were analyzed post hoc using the newly proposed metrics.
16. 4. Steps for Auditing Translated COAs
QUALITY ANALYSIS OF COA INSTRUMENT TRANSLATION
1st Step – Identification & Categorization (A–E): qualitative analysis; categorization of findings observed at different steps of the LV process into five types.
2nd Step – Impact Evaluation (1–5): analysis of each finding regarding its impact on endpoint measurement, allocating a score that ranks impact severity from 1 to 5.
3rd Step – Final Resolution: final decision on whether or not to edit the COA, and issue of the final version and certification.
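The three audit steps can be sketched as a simple data record. This is a minimal illustration only; the names `Finding` and `needs_resolution` are hypothetical and do not appear in the presentation.

```python
from dataclasses import dataclass

# Finding categories from the 1st step (identification & categorization).
CATEGORIES = {
    "A": "Inadequate Formatting",
    "B": "Grammar Error",
    "C": "Word/Sentence Modified",
    "D": "Omission or Wrong Word Addition",
    "E": "Inadequate Cultural Adaptation",
}

@dataclass
class Finding:
    instrument: str   # e.g. a COA scale or test name
    language: str     # target language/dialect
    category: str     # "A".."E" (1st step)
    impact: int       # 1..5 (2nd step)
    description: str = ""

    def __post_init__(self):
        # Validate against the audit's coding scheme.
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown category: {self.category}")
        if not 1 <= self.impact <= 5:
            raise ValueError(f"impact must be 1..5, got {self.impact}")

    def needs_resolution(self) -> bool:
        # Impact scores 3-5 are the levels requiring resolution (3rd step).
        return self.impact >= 3
```

Recording each observation in this structured form is what makes the later per-category and per-impact summaries straightforward to compute.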
17. 4. Steps for Auditing Translated COAs
QUALITATIVE ANALYSIS – 1st Step: Identification & Categorization (A–E)
A. Inadequate Formatting: error in the formatting of texts or sections.
B. Grammar Error: a word or sentence has a different meaning due to inadequate sentence composition. Inaccurate or too-literal translations are also included in this category.
C. Word/Sentence Modified: one word is changed, resulting in a different meaning of the sentence.
D. Omission or Wrong Word Addition: modifications that result in some information being lost in the local translation.
E. Inadequate Cultural Adaptation: a full or partial sentence has to be further adapted to the local cultural context.
18. 4. Steps for Auditing Translated COAs
EXAMPLES FOR THE QUALITATIVE ANALYSIS STEP – TYPE OF FINDING
A. Inadequate Formatting: in the WMS Logical Memory (PerfRO) French-France version, some text to be memorized is not in bold type, affecting results on memory performance.
B. Grammar Error: in the MCI Inventory (ClinRO) Spanish-Argentina version, "diaries" is translated as "diarios", a false friend, because "diarios" refers to newspapers and not to a memory-aid tool.
C. Word/Sentence Modified: in the Digit Symbol Substitution (PerfRO) Polish-Poland version, the sentence instructing how to score the test has a different meaning, affecting the "number of errors" count. Mistakes of this kind change how performance is scored relative to other countries.
D. Omission or Wrong Word Addition: in the FAQ (ClinRO) Romanian-Romania version, a word is missing from a question, erasing the component that produces a Yes/No answer.
E. Inadequate Cultural Adaptation: in the RUD inventory (PRO and ProxyRO) Romanian-Romania version, a word similar to "hospice" is used (implying the patient is terminal), whereas the US English source item does not carry this implication.
WMS: Wechsler Memory Scale; MCI: Mild Cognitive Impairment Inventory; FAQ: Functional Assessment Questionnaire; RUD: Resource Utilization in Dementia
19. 4. Steps for Auditing Translated COAs
EVALUATION OF IMPACT – 2nd Step: Impact Evaluation (1–5)
1. None or low impact: no impact is expected on measurement, but the item needs to be changed to concur with other items, tests or evaluations.
2. Minor or mild impact: the item is almost equivalent to the English source, but changes can be introduced to improve understanding or to be more specific.
3. Moderate impact: impact on measurement, because an important segment of the target population can potentially misinterpret the item.
4. Significant or high impact: the item is not clear, so half of the population may understand it one way and the other half another way.
5. Critical or extreme impact: the item contains an important mistake (so everyone will answer it wrongly), or it addresses another factor, or a factor not evaluated in the test.
Scores 3 to 5 are considered the most relevant impact levels, requiring resolution.
20. 4. Steps for Auditing Translated COA Instruments – Results
• A total of 200 reconciliation reports were analyzed
• 217 findings were commented on by consultants
• 44% had a potential impact on COA measurement
• 27% were rated as significant
All findings were resolved for the final local versions.
21. 4. Steps for Auditing Translated COA Instruments
3rd Step: Final Resolution
Final decision on the impact on COA measurement, and definition of actions.
Any editing involves:
• An LV process for the edit (i.e. BT and reconciliation)
• Informing the authors/publishers
• Obtaining approval for the modifications
• Issuing a new translation version
• Updating the previous Certification of Translation (adding step and date)
22. 4. Steps for Auditing Translated COA Instruments
Outcomes of COA translation audits:
When just one language is audited:
• Total number of observed issues by category (# A, # B, # C, # D, # E)
• Total number of issues with impact (impact score ≥ 3)
• Description of impacting issues and proposed solutions
• Appraisal with the instrument author(s) or the author of the translation
When more than one language is audited:
• A report per language, as above
• Number and percentage of issues over total segments in each translation
• A search for patterns in the findings:
  • Some languages can accumulate more issues than others
  • COA length can also be a risk factor for translation issues
  • Type of COA (PerfRO, ClinRO, PRO & ProxyRO)
• A potential-impact summary, i.e. number and percentage of issues per impact level
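The per-category counts and impact percentages listed above could be tallied with a short script like the following sketch. The function name `summarize` and the dictionary fields are illustrative assumptions, not part of the presented method.

```python
from collections import Counter

def summarize(findings):
    """Summarize audit findings: counts per category (A-E), counts per
    impact level (1-5), and the number and share of issues with an
    impact score >= 3 (the levels requiring resolution)."""
    by_category = Counter(f["category"] for f in findings)
    by_impact = Counter(f["impact"] for f in findings)
    impacting = sum(1 for f in findings if f["impact"] >= 3)
    total = len(findings)
    return {
        "total": total,
        "by_category": dict(by_category),
        "by_impact": dict(by_impact),
        "impacting": impacting,
        "impacting_pct": round(100 * impacting / total, 1) if total else 0.0,
    }

# Example: three findings in one audited language
findings = [
    {"category": "B", "impact": 2},
    {"category": "D", "impact": 4},
    {"category": "E", "impact": 3},
]
summary = summarize(findings)
# Here 2 of 3 findings have impact >= 3, i.e. about 66.7%
```

The same summary, computed per language, would also support the pattern analysis across languages and COA types described above.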
23. 5. Applications
For new translations*: an audit can be useful to rate measurement precision and to prevent issues in clinical outcome measurement.
For existing translations (from past projects or obtained from publishers) and for ongoing trials when issues are reported: to confirm or identify issues that can impact measurement with the COA instrument's local versions.
*Driven by any stakeholder, such as a pharma company, CRO, translation agency, publisher, research group, etc.
24. 6. Concluding Remarks
The proposed method offers a systematic way to review local versions of assessment instruments.
It includes:
• An independent audit/review
• A method to categorize issues and to assess their expected impact on the evaluation
It provides a common language for adaptation quality among the stakeholders involved:
• Specialized translation companies
• Investigators
• Raters
• Sponsors and local affiliates
• Health agencies
• Research centers
25. "It is man's destiny: his hand can never execute with the utmost rigor what he ideally creates with the spirit. (...) But man has many resources, and he can determine to what degree his creation differs from perfection."
Denis Guedj, "La Mesura del Món" ("Le Mètre du Monde")
From: Franz Xaver Messerschmidt – Character Heads
26. References
Committee for Medicinal Products for Human Use (CHMP). Reflection Paper on the Regulatory Guidance for the Use of Health Related Quality of Life (HRQL) Measures in the Evaluation of Medicinal Products. 2005. Doc. Ref. EMEA/CHMP/EWP/139391/2004.
US Food and Drug Administration. Guidance for Industry: Patient-Reported Outcome Measures: Use in Medical Product Development to Support Labeling Claims. December 2009. Available from: http://www.fda.gov/BiologicsBloodVaccines/GuidanceComplianceRegulatoryInformation/default.htm
Wild D, et al. "Principles of Good Practice for the Translation and Cultural Adaptation Process for Patient-Reported Outcomes (PRO) Measures: Report of the ISPOR Task Force for Translation and Cultural Adaptation." Value Health 8.2 (2005): 94-104.
Acquadro C, Conway K, Giroudet C, Mear I. Linguistic Validation Manual for Health Outcome Assessment. Mapi Institute, 2012.
27. Please cite this work as:
Zaragoza S., Krishna V., Royuela O. Peer-Review process to improve the quality of Linguistic Validation process of COAs instruments for clinical trials. COMET IV Talks. Meeting Conference Proceedings. Journal of Evidence Based Medicine [ahead of publication].

S. Zaragoza Domingo
Neuropsychological Research Organization s.l.
Barcelona
szaragoza@psyncro.net

Vanitha Krishna
MedAvante, Inc.
Hamilton, NJ, USA

Oscar Royuela
Neuropsychological Research Organization s.l.
Barcelona

Thank you!