Imaging trials introduce novel sources of data.
Blinded Independent Central Review (BICR) is a primary source of that data.
- Utilization of standard assessment criteria
- Multiple data sets for a given patient/visit
- Key aspects of clinical data management (CDM) for RECIST trials:
>>RECIST version
>>Site vs. central review data
>>Central review performance
1. Data Challenges in Imaging Trials – Image Review Data
03 April 2013
Kevin Shea
Senior Director Clinical Solutions
C3i, Inc.
kshea@c3i-inc.com
610-772-5726
6. Central Review of Images
Imaging Core Lab
»Focus on objectivity and precision
»No role in clinical care of patient
»Limited reader pool
»Training and review of cases – focus on consistency
»BICR – ideal process: dual readers with adjudication
»Various quality processes incorporated to enhance consistency and ensure a cross-site view of the data
8. Comparison – Site and Central Review
Imaging Site
» Clinical focus
» Do not generally utilize RECIST
» Variety of readers
» Not blinded
» Access to all clinical data
» Limited protocol training
Central Review
» Focus on imaging
» RECIST with a limited pool of readers
» Blinded
» Limited access to clinical data
» Image Review Charter
9. RECIST Overview
»Response Evaluation Criteria In Solid Tumors
»Based on WHO criteria (1981)
»Established 2000 (v.1.0), Updated 2009 (v.1.1)
»Phase II focus, Phase III applicability
»Endpoints – ORR, PFS, TTP
»Well adopted in Imaging Core Labs
»Challenges at AROs and local imaging sites
10. RECIST Parameters
»Serial review – baseline to completion
»Quantify representative tumor burden
»Qualitative assessment of remaining lesions
»Lesion classification
»Consistent assessment categories
»Associate changes in tumor burden with efficacy
Response (may require confirmation)
− Time point response
− Best overall response
Progression – Date of progression
11. RECIST Lesion Classifications
»Target Lesions
representative of disease
can be measured reproducibly and tracked over time
»Non-target Lesions
all other lesions or sites of disease
tracked qualitatively
»New Lesions – post-baseline presence of new disease
12. RECIST Target Lesion Selection Criteria
»Uni-dimensional measurement
»Number
Maximum of five target lesions
No more than two per organ
»Length
≥ 10 mm longest diameter (LD), or 2× slice thickness, for extranodal lesions
≥ 15 mm short-axis diameter for nodal lesions
»Lymph Nodes
Must be ≥ 10 mm short axis to be considered pathological
Must be ≥ 15 mm short axis to be measurable (these selection checks are sketched below)
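The selection rules above are mechanical enough to express as an edit check. Below is a minimal Python sketch, assuming a simple Lesion record; the field names, the 5 mm default slice thickness, and the validate_target_selection helper are illustrative, not any particular EDC system's schema.

```python
from dataclasses import dataclass

MAX_TARGETS = 5      # RECIST 1.1: at most five target lesions
MAX_PER_ORGAN = 2    # and no more than two per organ

@dataclass
class Lesion:
    organ: str
    diameter_mm: float               # longest diameter (extranodal) or short axis (nodal)
    nodal: bool
    slice_thickness_mm: float = 5.0  # assumed acquisition parameter

def is_measurable(lesion: Lesion) -> bool:
    """Apply the RECIST 1.1 size thresholds for target eligibility."""
    if lesion.nodal:
        return lesion.diameter_mm >= 15.0   # short axis >= 15 mm
    # extranodal: >= 10 mm LD and at least twice the slice thickness
    return lesion.diameter_mm >= max(10.0, 2 * lesion.slice_thickness_mm)

def validate_target_selection(targets: list[Lesion]) -> list[str]:
    """Return edit-check messages for an invalid target lesion set."""
    issues = []
    if len(targets) > MAX_TARGETS:
        issues.append(f"{len(targets)} target lesions selected (max {MAX_TARGETS})")
    per_organ: dict[str, int] = {}
    for lesion in targets:
        per_organ[lesion.organ] = per_organ.get(lesion.organ, 0) + 1
    issues += [f"{n} target lesions in {organ} (max {MAX_PER_ORGAN})"
               for organ, n in per_organ.items() if n > MAX_PER_ORGAN]
    issues += [f"lesion in {les.organ} below measurability threshold"
               for les in targets if not is_measurable(les)]
    return issues
```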
13. Evaluation Process
»Baseline – establish initial tumor burden as the comparator for subsequent time points
Target Lesions – Sum of Longest Diameters (SLD)
Non-Target Lesions – document all other disease
»Post-Baseline
Target
− Sum diameters
− Compare to baseline/previous time points; establish nadir (see the sketch below)
Non-Target – evaluate for substantial change
New – review for presence
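The target lesion arithmetic on this slide reduces to a running computation. A sketch, assuming per-visit lists of target diameters; track_burden and the returned keys are hypothetical names for illustration.

```python
def sum_of_diameters(diameters_mm: list[float]) -> float:
    """SLD at one time point: sum the target lesion diameters."""
    return sum(diameters_mm)

def track_burden(visit_slds: list[float]) -> list[dict]:
    """Derive change from baseline and from nadir for each post-baseline visit.

    visit_slds[0] is the baseline SLD; the nadir is the smallest sum
    observed on study, including baseline.
    """
    baseline = visit_slds[0]
    nadir = baseline
    rows = []
    for sld in visit_slds[1:]:
        nadir = min(nadir, sld)
        rows.append({
            "sld": sld,
            "pct_change_from_baseline": 100 * (sld - baseline) / baseline,
            "increase_from_nadir_mm": sld - nadir,
            "pct_increase_from_nadir": 100 * (sld - nadir) / nadir if nadir else 0.0,
        })
    return rows
```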
14. RECIST Response Criteria
»CR – Complete Response
Disappearance of all extranodal lesions; nodal lesions reduced to < 10 mm short axis
»PR – Partial Response
≥ 30% reduction in tumor burden (SLD) from baseline
»SD – Stable Disease
Neither response (CR/PR) nor progression (PD)
»PD – Progressive Disease
Target PD – SoD increase ≥ 20% and ≥ 5 mm (absolute) from nadir
Non-target PD – unequivocal progression
Presence of a new lesion (classification sketched below)
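Those thresholds translate directly into a derivation. A simplified sketch, assuming target CR, non-target progression, and new lesions arrive as flags; it omits non-target CR, non-evaluable visits, and response confirmation, so it is not a complete derivation procedure.

```python
def target_response(sld: float, baseline: float, nadir: float,
                    all_targets_gone: bool) -> str:
    """Classify target lesion response at one time point (RECIST 1.1)."""
    if all_targets_gone:
        return "CR"  # extranodal targets gone; nodal targets < 10 mm short axis
    if sld >= nadir * 1.2 and (sld - nadir) >= 5.0:
        return "PD"  # >= 20% increase from nadir AND >= 5 mm absolute increase
    if sld <= baseline * 0.7:
        return "PR"  # >= 30% decrease in SLD from baseline
    return "SD"

def time_point_response(target: str, non_target_pd: bool, new_lesion: bool) -> str:
    """Overall time point response: any progression component dominates."""
    if target == "PD" or non_target_pd or new_lesion:
        return "PD"
    return target
```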
17. RECIST Version Challenges
»Migrating to v. 1.1
»Maintaining v.1.0 and v. 1.1 studies
»Target lesions
Total number
Number per organ
»Lymph nodes
»Sum of Diameters
»Non-target progression
»New Lesions
18. RECIST Version Impact
»CRF Design
»Derivation procedures
»Edit checks (a version-aware example follows)
»Data quality reviews
»Emphasis on training and quality control
»Focus on non-target progression and new lesions
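One way to keep v. 1.0 and v. 1.1 studies straight is to drive derivations and edit checks from a version table. A sketch under that assumption; RECIST_RULES and target_pd are illustrative names. (RECIST 1.0 allowed up to ten target lesions, five per organ, and defined target PD as a ≥ 20% increase from nadir with no absolute minimum.)

```python
# Version-keyed rule table: 1.1 cut the target lesion counts and added
# the 5 mm absolute-increase requirement for target progression.
RECIST_RULES = {
    "1.0": {"max_targets": 10, "max_per_organ": 5, "pd_abs_increase_mm": 0.0},
    "1.1": {"max_targets": 5,  "max_per_organ": 2, "pd_abs_increase_mm": 5.0},
}

def target_pd(sld: float, nadir: float, version: str) -> bool:
    """Version-aware target progression check against the nadir."""
    rules = RECIST_RULES[version]
    return sld >= nadir * 1.2 and (sld - nadir) >= rules["pd_abs_increase_mm"]
```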
19. Site vs. Central Review
»Comparison of endpoint results
»Concordance noted in previous studies
»Correlation not dependent on comparison technique
»Intra-study comparisons should be established early
»Track throughout study w/ focus on key events
Soft-locks/data safety monitoring
Prior to locking a site
Prior to final DB lock
20. Site vs. Central Review
»Develop processes to analyze:
Previous study data
Consistency of sites with central
− Distinguish trends
− Establish a “normal discordance” rate
Identify outlier sites (per-site rates are sketched below)
»Outlier sites can be reviewed further
Re-training
Imaging technique
CMO reviews
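A minimal sketch of the per-site analysis, assuming paired site/central responses per patient/visit; the record layout and the 10% “normal discordance” default are illustrative placeholders for study-specific values.

```python
from collections import defaultdict

def site_discordance(records: list[dict], normal_rate: float = 0.10) -> dict:
    """Per-site discordance vs. central review, flagging outlier sites.

    records: [{"site": ..., "site_resp": ..., "central_resp": ...}, ...]
    """
    counts = defaultdict(lambda: [0, 0])  # site -> [discordant, total]
    for rec in records:
        counts[rec["site"]][1] += 1
        if rec["site_resp"] != rec["central_resp"]:
            counts[rec["site"]][0] += 1
    return {site: {"rate": d / n, "outlier": (d / n) > normal_rate}
            for site, (d, n) in counts.items()}
```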
21. Evaluation of Central Review Data
»Monitor BICR discordance and adjudication (metrics sketched below)
Win/loss adjudication rates
Intra-reader variability
Inter-reader variability
»Analyze variability
Tumor type
Intervention
»Evaluate metrics between RECIST 1.0 and 1.1 studies
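The adjudication metrics can be computed from the dual-read case list. A sketch, assuming each case carries both primary reads and the adjudicated result when the readers disagreed; the field names are illustrative.

```python
def adjudication_metrics(cases: list[dict]) -> dict:
    """Adjudication rate plus win/loss rates for the two primary readers.

    cases: [{"r1": resp, "r2": resp, "adjudicated": resp or None}, ...]
    """
    total = len(cases)
    disagreements = [c for c in cases if c["r1"] != c["r2"]]
    wins = {"r1": 0, "r2": 0}
    for case in disagreements:
        if case["adjudicated"] == case["r1"]:
            wins["r1"] += 1
        elif case["adjudicated"] == case["r2"]:
            wins["r2"] += 1
    n_adj = len(disagreements)
    return {
        "adjudication_rate": n_adj / total if total else 0.0,
        "reader1_win_rate": wins["r1"] / n_adj if n_adj else 0.0,
        "reader2_win_rate": wins["r2"] / n_adj if n_adj else 0.0,
    }
```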
22. Management of Central Review Data
»Establish normal levels of variability and discordance for v. 1.0 and v. 1.1
»Analyze for contributing variables
»Assess for suitability in future studies
»Establish parameters for site and central review data based on the RECIST version
23. Conclusions
»Imaging trials introduce new sources of data
»Central review of imaging data generates parallel data sets for a given patient/visit that must be analyzed against local imaging site review
»Management and analysis of data in RECIST trials should consider:
Version of RECIST utilized
Site vs. central review data
Central review performance
Happy to discuss:
Kevin Shea, C3i, Inc.
kshea@c3i-inc.com or +1 610-772-5726