Comparison of RECIST 1.0 and 1.1 - Impact on Data Management

Kevin Shea
Senior Solutions Architect
C3i, Inc.

A review of the two RECIST versions, noting similarities and differences and highlighting the improvements in v. 1.1. This information is used to discuss how some of the challenges RECIST presents to data management can be addressed.

Disclaimer

•  The views and opinions expressed in the following PowerPoint slides are those of the individual presenter and should not be attributed to Drug Information Association, Inc. (“DIA”), its directors, officers, employees, volunteers, members, chapters, councils, Special Interest Area Communities or affiliates, or any organization with which the presenter is employed or affiliated.
•  These PowerPoint slides are the intellectual property of the individual presenter and are protected under the copyright laws of the United States of America and other countries. Used by permission. All rights reserved. Drug Information Association, Drug Information Association Inc., DIA and DIA logo are registered trademarks. All other trademarks are the property of their respective owners.

Objectives

•  Describe RECIST
•  Independent imaging review
•  Manage external imaging data

Agenda

•  Background
•  RECIST
   –  Overview
   –  Parameters
   –  RECIST V 1.0 vs. 1.1
•  Independent Review
•  Data Management Considerations
•  Conclusions

Background

•  Oncology clinical trials utilize imaging assessment as a surrogate endpoint
•  Imaging involves variations in modality, technique, reader assessment, and training
•  Standardization – variability, repeatability
•  RECIST – a well-adopted standard
•  Data Management processes can be used to monitor assessment data to track quality and safety

RECIST Overview

•  Response Evaluation Criteria In Solid Tumors
•  Establishes referenceable, repeatable standards
•  Based on WHO criteria (1981)
•  Established 2000 (v. 1.0); updated 2009 (v. 1.1)
•  Phase II focus, Phase III applicability
•  Endpoints – ORR, PFS
•  Well adopted in ICLs
•  Challenges at AROs and local imaging sites

RECIST Parameters

•  Serial review – baseline to completion
•  Quantify tumor burden
•  Qualitative assessment of remaining lesions
•  Lesion classification
•  Consistent assessment categories
•  Associate changes with efficacy

Evaluation Process

•  Baseline – key to establish as the comparator for subsequent timepoints
   –  Target – Sum of Longest Diameters (SLD)
   –  Non-Target – document all other disease
•  Post-Baseline
   –  Target
      •  Sum diameters
      •  Compare to baseline/previous timepoints; establish nadir
   –  Non-Target – evaluate for substantial change
   –  New – review for presence
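
To make the target-lesion arithmetic on this slide concrete, here is a minimal Python sketch, assuming a simple list-of-visits data layout (field names are illustrative, not from the presentation), that computes the sum of diameters at each timepoint and carries the running nadir forward:

```python
# Minimal sketch of target-lesion bookkeeping under RECIST:
# sum the longest diameters (SLD) at each timepoint and track the nadir.
# Data layout and field names are hypothetical.

def sld(lesion_diameters_mm):
    """Sum of longest diameters for the target lesions at one timepoint."""
    return sum(lesion_diameters_mm)

def track_nadir(timepoints):
    """timepoints: list of (visit_label, [diameters_mm]) in chronological order.
    Returns per-visit SLD plus the running nadir (smallest SLD so far,
    including baseline), which progression is later measured against."""
    results, nadir = [], float("inf")
    for visit, diameters in timepoints:
        total = sld(diameters)
        nadir = min(nadir, total)
        results.append({"visit": visit, "sld_mm": total, "nadir_mm": nadir})
    return results

# Example: baseline, then two follow-up timepoints
visits = [("BSL", [25, 18, 12]), ("TP1", [20, 15, 10]), ("TP2", [24, 17, 12])]
for row in track_nadir(visits):
    print(row)
```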

RECIST Lesion Classifications

•  Target – representative of disease; can be reproducibly measured and tracked over time
•  Non-target – all other lesions or sites of disease, tracked qualitatively
•  New – post-baseline presence of new disease

RECIST Response Criteria

•  CR – Complete Response
   –  Disappearance of all target lesions
•  PR – Partial Response
   –  ≥ 30% reduction in SLD from baseline
•  SD – Stable Disease
   –  Neither response nor progression
•  PD – Progressive Disease
   –  ≥ 20% increase in SLD from nadir
   –  Presence of a new lesion
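
These categories translate directly into a derivation rule. A hedged sketch for target lesions only (thresholds as stated above; PR judged against baseline and PD against nadir; function and argument names are illustrative):

```python
def target_response(sld_mm, baseline_sld_mm, nadir_sld_mm, new_lesion=False):
    """Classify the target-lesion timepoint response per the criteria above.
    RECIST 1.0 rules; v1.1 adds an absolute >=5 mm condition for PD
    (see the version-differences sketch later)."""
    if new_lesion:
        return "PD"                      # a new lesion is progression outright
    if sld_mm == 0:
        return "CR"                      # all target lesions have disappeared
    if nadir_sld_mm > 0 and (sld_mm - nadir_sld_mm) / nadir_sld_mm >= 0.20:
        return "PD"                      # >=20% increase over the nadir
    if (baseline_sld_mm - sld_mm) / baseline_sld_mm >= 0.30:
        return "PR"                      # >=30% reduction from baseline
    return "SD"

print(target_response(45, baseline_sld_mm=55, nadir_sld_mm=45))  # SD (18% decrease)
print(target_response(38, baseline_sld_mm=55, nadir_sld_mm=38))  # PR (31% decrease)
```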

RECIST End Points

•  Response
   –  Timepoint response
   –  Best overall response
   –  Confirmation – 4-6 weeks (may be required)
•  Progression
   –  Target – SLD increase ≥ 20% over nadir
   –  Non-target – unequivocal progression
   –  Date of progression
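
Best overall response is then the best timepoint response observed before progression. A minimal sketch, simplified in that it ignores the 4-6 week confirmation step noted above:

```python
# Best overall response as the best timepoint response observed up to PD.
# "NE" (not evaluable) is an assumed neutral starting value, not from the slides.
RANK = {"CR": 0, "PR": 1, "SD": 2, "PD": 3, "NE": 4}  # best to worst

def best_overall_response(timepoint_responses):
    best = "NE"
    for resp in timepoint_responses:
        if RANK[resp] < RANK[best]:
            best = resp
        if resp == "PD":          # responses after progression don't count
            break
    return best

print(best_overall_response(["SD", "PR", "PR", "PD"]))  # PR
```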

RECIST V 1.0 and 1.1 – Consistencies

•  Uni-dimensional measurement
•  Tumor burden based on sum of diameters
•  Lesion classification scheme
•  Response categories

RECIST V 1.0 and 1.1 – Differences

V 1.0 (2000)
•  Max 10 target lesions / max 5 per organ
•  Measurable: ≥ 10 mm LD (spiral CT); ≥ 20 mm LD (other)
•  Lymph nodes not specified

V 1.1 (2009)
•  Max 5 target lesions / max 2 per organ
•  Measurable: ≥ 10 mm LD or 2x slice thickness (extranodal); ≥ 15 mm SAD (nodal)
•  Lymph nodes: > 10 mm pathological; ≥ 15 mm measurable
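
These measurability thresholds can be encoded as a version-aware data check. A sketch, assuming illustrative parameter names and reading the slide's "≥ 10 mm LD or 2x slice" as the larger of the two thresholds (the usual RECIST 1.1 reading):

```python
def is_measurable(longest_diameter_mm, version, *,
                  nodal=False, short_axis_mm=None,
                  spiral_ct=True, slice_thickness_mm=5.0):
    """Apply the per-version measurability thresholds from the slide above.
    Simplified sketch; real review charters carry more modality-specific rules."""
    if version == "1.0":
        # v1.0 does not address lymph nodes; threshold depends on modality
        return longest_diameter_mm >= (10 if spiral_ct else 20)
    if version == "1.1":
        if nodal:
            # nodal lesions are measured on the short axis; >=15 mm to qualify
            return short_axis_mm is not None and short_axis_mm >= 15
        # extranodal: >=10 mm and at least twice the slice thickness
        return longest_diameter_mm >= max(10, 2 * slice_thickness_mm)
    raise ValueError(f"unknown RECIST version: {version}")

print(is_measurable(12, "1.0"))                                # True (>=10 mm, spiral CT)
print(is_measurable(12, "1.1", nodal=True, short_axis_mm=12))  # False (<15 mm SAD)
```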

RECIST V 1.0 and 1.1 – Differences (continued)

V 1.0 (2000)
•  CR – disappearance of all lesions
•  Target PD – SLD ≥ 20% increase over nadir
•  Non-target PD – unequivocal progression
•  New lesions – not specifically defined

V 1.1 (2009)
•  CR – disappearance of all extranodal lesions; nodal lesions < 10 mm
•  Target PD – SoD ≥ 20% and ≥ 5 mm increase over nadir
•  Non-target PD – unequivocal progression with substantial worsening
•  New lesions – unequivocal; not based on imaging technique
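
The tightened target-PD rule in v. 1.1 (a relative and an absolute increase over nadir) is worth a worked example, since borderline cases progress under one version but not the other. A sketch:

```python
def target_pd(sld_mm, nadir_sld_mm, version):
    """Target-lesion progression per the rules above.
    v1.0: >=20% increase over nadir.
    v1.1: >=20% increase over nadir AND an absolute increase >=5 mm."""
    if nadir_sld_mm <= 0:
        return False  # nothing measurable at nadir; handled elsewhere
    relative = (sld_mm - nadir_sld_mm) / nadir_sld_mm >= 0.20
    if version == "1.0":
        return relative
    return relative and (sld_mm - nadir_sld_mm) >= 5  # v1.1

# A 20 mm nadir growing to 24.5 mm (+22.5%, +4.5 mm) progresses under
# v1.0 but not under v1.1, where the 5 mm absolute floor is not met.
print(target_pd(24.5, 20, "1.0"), target_pd(24.5, 20, "1.1"))  # True False
```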

Central Review of Images

•  Focus on consistency and repeatability
•  Limited reader pool
•  Training and review of cases
•  BICR – typical process: dual reader with adjudication
   –  Two primary readers
   –  Adjudication for discordance on endpoints
•  Various quality processes incorporated

BICR Process

[Process diagram – no text content]

Site and Central Review

Imaging Site
•  Clinical focus
•  Do not generally utilize RECIST
•  Not blinded
•  Access to all clinical data
•  Limited protocol training

Central Review
•  Focus on imaging
•  RECIST with a limited pool of readers
•  Blinded
•  Limited access to clinical data
•  Image Review Charter

Data Management Considerations

•  RECIST version challenges
•  Site vs. Central Review data
•  Central Review

RECIST Version Challenges

•  Impact of migrating to v. 1.1 or maintaining both v. 1.0 and v. 1.1 studies
•  Target lesions
   –  Total number
   –  Number per organ
•  Lymph nodes
•  Sum of Diameters
•  Non-target progression
•  New lesions

RECIST Version Impact

•  CRF design
•  Derivation procedures
•  Edit checks
•  Data quality reviews
•  Emphasis on training and quality control
•  Focus on non-target progression and new lesions
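
As one example of how version differences drive edit checks, this sketch flags records exceeding the target-lesion limits from the differences slide (record layout and message wording are hypothetical):

```python
from collections import Counter

# Per-version target-lesion limits from the differences slide:
# (max target lesions total, max per organ)
LIMITS = {"1.0": (10, 5), "1.1": (5, 2)}

def check_target_lesion_counts(lesions, version):
    """lesions: list of dicts like {"lesion_id": ..., "organ": ...}.
    Returns a list of edit-check messages (empty if the record passes)."""
    max_total, max_per_organ = LIMITS[version]
    issues = []
    if len(lesions) > max_total:
        issues.append(f"{len(lesions)} target lesions recorded; "
                      f"RECIST {version} allows at most {max_total}")
    for organ, n in Counter(l["organ"] for l in lesions).items():
        if n > max_per_organ:
            issues.append(f"{n} target lesions in {organ}; "
                          f"RECIST {version} allows at most {max_per_organ} per organ")
    return issues

record = [{"lesion_id": i, "organ": "liver"} for i in range(3)]
print(check_target_lesion_counts(record, "1.1"))  # flags 3 liver lesions (> 2)
```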

Site vs. Central Review

•  Comparison of endpoint results
•  Concordance noted in previous studies
•  Not based on consistent techniques
•  Intra-study comparisons should be established early

Site vs. Central Review (2)

•  Develop processes to analyze:
   –  Previous study data
   –  Consistency of sites with central review
•  Distinguish trends
•  Establish a “normal discordance” rate
   –  Identify outlier sites
•  Outlier sites can be reviewed further
   –  Re-training
   –  Imaging technique
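
The site-vs.-central analysis described above can be as simple as a per-site discordance rate compared against the pooled rate. A sketch, where the data shape and the two-standard-deviation outlier rule are illustrative choices, not from the presentation:

```python
from collections import defaultdict
from statistics import mean, pstdev

def site_discordance(assessments):
    """assessments: list of (site_id, site_response, central_response).
    Returns {site_id: discordance_rate} plus sites flagged as outliers."""
    per_site = defaultdict(lambda: [0, 0])   # site -> [discordant, total]
    for site, site_resp, central_resp in assessments:
        per_site[site][1] += 1
        if site_resp != central_resp:
            per_site[site][0] += 1
    rates = {s: d / t for s, (d, t) in per_site.items()}
    mu, sigma = mean(rates.values()), pstdev(rates.values())
    # illustrative rule: flag sites more than 2 SD above the mean rate
    outliers = [s for s, r in rates.items() if sigma and r > mu + 2 * sigma]
    return rates, outliers

data = [("S01", "PR", "PR"), ("S01", "SD", "SD"),
        ("S02", "PR", "SD"), ("S02", "PD", "SD")]
rates, outliers = site_discordance(data)
print(rates, outliers)
```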

Central Review Data

•  Win-Loss Adjudication Rates
•  Intra-Reader Variability
•  Inter-Reader Variability
•  Monitor BICR discordance and adjudication
•  Analyze variability
   –  Tumor type
   –  Intervention
•  Evaluate quality between RECIST 1.0 and 1.1 studies
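
Win-loss adjudication rates measure how often the adjudicator sides with each primary reader on discordant cases. A minimal sketch under an assumed record shape (field names are hypothetical):

```python
from collections import Counter

def win_loss_rates(adjudications):
    """adjudications: list of dicts with 'reader1', 'reader2', and
    'adjudicator' responses for cases sent to adjudication.
    A reader "wins" a case when the adjudicator agrees with that reader."""
    wins, total = Counter(), len(adjudications)
    for case in adjudications:
        if case["adjudicator"] == case["reader1"]:
            wins["reader1"] += 1
        if case["adjudicator"] == case["reader2"]:
            wins["reader2"] += 1
    return {r: wins[r] / total for r in ("reader1", "reader2")} if total else {}

cases = [{"reader1": "PR", "reader2": "SD", "adjudicator": "PR"},
         {"reader1": "PD", "reader2": "SD", "adjudicator": "SD"}]
print(win_loss_rates(cases))  # {'reader1': 0.5, 'reader2': 0.5}
```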

Central Review Data (2)

•  Establish normal levels of variability and discordance for v. 1.0 and v. 1.1
•  Analyze for variables
•  Assess for suitability in future studies
•  Establish parameters for site and central review data in future studies

Conclusions

•  RECIST 1.1 – an attempt to improve and simplify
•  Comparisons between 1.0 and 1.1 data should be closely monitored
•  Follow-on studies may remain at v. 1.1
•  Fewer target lesions dictate attention to discordance and variability
•  Non-target progression and new lesions should be reviewed for adherence to the standard
•  Incorporation of PET for confirmation should be considered
•  Protocol-specific requirements may drive DM processes and QA controls

Acknowledgements

I’d like to thank the following people for their help in preparing this presentation:

•  Robert Ford
•  Eric Perlman
•  Tomomi Dyer
