The document summarizes a study comparing different quality assurance methods for stereotactic lung radiotherapy plans. Point dose calculations, forward calculations using a collapsed cone algorithm, and backprojected 3D dose reconstruction were compared to treatment planning system doses. Results showed that point dose calculations had the largest differences of up to 6%, while forward and backprojected doses agreed with treatment plans to within 3% for most plans. Backprojected doses also agreed well with forward calculations, indicating 3D dose reconstruction could effectively identify dose discrepancies not found through conventional methods.
Dynamic Signature Verification (DSV) is unique among biometric authentication
technologies in that there is no clearly defined method of creating a forgery. This research examined
how easy a forger perceives a signature to be to forge, and whether any characteristics were
common within the groupings of difficulty. The dynamic variables of the signature were then
examined with forensic tools to establish which statistical variables were susceptible to forgery.
Overall, neither the genuine nor the impostor group singled out a specific dynamic trait in judging a
signature “easy” or “difficult”. Furthermore, individuals have difficulty assigning a speed to their
signature: the perception of speed differs for each individual (genuine and impostor alike), and
impostors and genuine users ranked the signatures differently when asked about the perceived
level of difficulty.
Professor Harrison Bai, Artificial Intelligence Applications in Radiology (Levi Shapiro)
Artificial Intelligence Applications in Radiology, a presentation by Dr Harrison Bai, Assistant Professor of Diagnostic Imaging, Warren Alpert Medical School, Brown University. His research focuses on AI, machine learning, and computer vision applied to medical image analysis. Dr Bai is an associate editor for the journal Radiology: Artificial Intelligence and is currently a principal investigator on an RSNA Research Scholar grant and an NIH grant. The AI Radiology Lab works in several areas: COVID-19; treatment response assessment on imaging (brain, TACE, lung, colorectal); rapid diagnosis of large-vessel ischemic stroke, patient selection, and outcome prediction; tumor characterization on imaging; infrastructure development; federated learning; image registration (CT-guided tumor ablation); and natural language processing of radiology reports. The AI pipeline includes the DIANA system and diagnosis, severity, and progression models across various automated features and the value proposition. One technique for dealing with missing sequences and imaging artifacts is sequence dropout. On human-in-the-loop AI: in the short to mid term, the use of AI needs to be combined with human intervention and supervision, including an active learning strategy for annotation, treatment response evaluation on imaging, and automatic quality estimation to flag failed cases for humans to review and/or edit. Further topics: federated learning; semi-supervised and unsupervised learning; the AWS NVIDIA Clara Train SDK using TensorFlow 1.14. Annotations vary across imaging sites; federated learning shares model weights without sharing data. Domain shift is the distribution difference between source data and target data, leading to performance degradation.
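The "share weights without sharing data" idea can be illustrated with a minimal federated-averaging loop. Everything below (the three toy sites, the logistic model, the round counts) is an illustrative sketch, not the talk's actual setup, which used the NVIDIA Clara Train SDK.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, steps=20):
    """A few steps of logistic-regression gradient descent on one site's private data."""
    w = weights.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(0)
sites = []
for _ in range(3):  # three imaging sites, each with private data
    X = rng.normal(size=(100, 5))
    y = (X @ np.array([1.0, -1.0, 0.5, 0.0, 0.0]) > 0).astype(float)
    sites.append((X, y))

w_global = np.zeros(5)
for _ in range(10):                               # communication rounds
    local = [local_update(w_global, X, y) for X, y in sites]
    w_global = np.mean(local, axis=0)             # server averages weights only;
                                                  # patient data never leaves a site
```

Only the weight vectors cross site boundaries, which is the property the talk highlights; real deployments add secure aggregation and handle the domain shift between sites.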
Big Data Analytics for Healthcare, Chandan K. Reddy (aulasnilda)
Big Data Analytics for Healthcare
Chandan K. Reddy, Department of Computer Science, Wayne State University
Jimeng Sun, Healthcare Analytics Department, IBM TJ Watson Research Center
Jimeng Sun, Large-scale Healthcare Analytics
Healthcare Analytics using Electronic Health Records (EHR)
Old way: Data are expensive and small
– Input data come from clinical trials, which are small
and costly
– Modeling effort is small since the data are limited
• A single model can still take months
EHR era: Data are cheap and large
– Broader patient population
– Noisy data
– Heterogeneous data
– Diverse scale
– Complex use cases
Heterogeneous Medical Data
Diagnosis
Medication
Lab
Clinical notes
Images
Genetic data
Challenges in Healthcare Analytics
Collaboration across domains
Analytic platform
Intuitive results
Scalable computation
PARALLEL MODEL BUILDING
Motivation – Predictive modeling using EHR is growing
Need for scalable predictive modeling platforms/systems due to increased
computational requirements from:
– Processing EHR data (due to volume, variability, and heterogeneity)
– Building accurate models
– Building clinically meaningful models
– Validating models for accuracy and generalizability
(Chart annotation: explosion in interest)
What does it take to develop a predictive model using EHR?
Marina: IBM Analytics Consultant
Within 3 months, we need to
1. understand business case
2. obtain the data
3. prepare the data
4. develop predictive models
5. deliver the final model
David Gotz, Harry Stavropoulos, Jimeng Sun, Fei Wang. ICDA: A Platform for Intelligent Care Delivery Analytics, AMIA 2012
A Generalized Predictive Modeling Pipeline
Cohort Construction: Find an appropriate set of patients with the specified
target condition and a corresponding set of control patients without the
condition.
Feature Construction: Compute a feature vector representation for each
patient based on the patient’s EHR data.
Cross Validation: Partition the data into complementary subsets for use in
model training and validation testing.
Feature Selection: Rank the input features and select a subset of relevant
features for use in the model.
Classification: The training and evaluation of a model for a specific classifier.
Output: Clean up intermediate files and put results into their final locations.
Model specification
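The six pipeline stages above can be sketched end to end. The synthetic "patients", the correlation-based feature ranking, and the nearest-centroid classifier below are stand-ins chosen for brevity; they are not components of any actual platform described here.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1. Cohort construction: cases with the target condition plus matched controls.
n_cases, n_controls = 100, 100
y = np.r_[np.ones(n_cases), np.zeros(n_controls)]

# 2. Feature construction: one vector per patient (think counts of diagnosis /
#    medication / lab codes); 2 informative features plus 8 noise features.
X = rng.normal(size=(200, 10))
X[:, :2] += y[:, None]

# 3. Cross-validation: partition patients into 5 complementary folds.
folds = np.array_split(rng.permutation(200), 5)

accs = []
for k in range(5):
    test = folds[k]
    train = np.setdiff1d(np.arange(200), test)

    # 4. Feature selection: rank features by |correlation| with the label.
    corr = np.abs([np.corrcoef(X[train, j], y[train])[0, 1] for j in range(10)])
    keep = np.argsort(corr)[::-1][:2]

    # 5. Classification: nearest class centroid on the selected features.
    c1 = X[train][y[train] == 1][:, keep].mean(axis=0)
    c0 = X[train][y[train] == 0][:, keep].mean(axis=0)
    d1 = np.linalg.norm(X[test][:, keep] - c1, axis=1)
    d0 = np.linalg.norm(X[test][:, keep] - c0, axis=1)
    pred = (d1 < d0).astype(float)
    accs.append((pred == y[test]).mean())

# 6. Output: report the cross-validated accuracy.
print(np.mean(accs))
```

Each stage maps to one numbered step of the pipeline; swapping in a real cohort query, richer features, or a stronger classifier changes only the corresponding block.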
Cohort Construction
All patients (diagram: cases and controls drawn for cohorts D1–D3)
Disease  Target                  Samples
D1       Hypertension control    5000
D2       Heart failure onset     33K
D3       Hypertension diagnosis  300K
Multivariate sample similarity measure for feature selection with a resemblan... (IJECE, IAES)
Feature selection improves the classification performance of machine learning models. It also identifies the important features and eliminates those with little significance. Furthermore, feature selection reduces the dimensionality of training and testing data points. This study proposes a feature selection method that uses a multivariate sample similarity measure. The method selects features with significant contributions using a machine-learning model. The multivariate sample similarity measure is evaluated using the University of California, Irvine heart disease dataset and compared with existing feature selection methods. The multivariate sample similarity measure is evaluated with metrics such as minimum subset selected, accuracy, F1-score, and area under the curve (AUC). The results show that the proposed method is able to diagnose chest pain, thallium scan, and major vessels scanned using X-rays with a high capability to distinguish between healthy and heart disease patients with a 99.6% accuracy.
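The evaluation recipe this abstract describes (rank features, select a subset, score the result with accuracy/F1/AUC) can be sketched minimally. The multivariate sample similarity measure itself is the paper's contribution and is not reproduced here; a plain correlation ranking stands in for it, and the data are synthetic rather than the UCI heart disease set.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
y = rng.integers(0, 2, n)
# 13 synthetic "attributes"; the first 3 carry signal, the rest are noise.
X = rng.normal(size=(n, 13))
X[:, :3] += y[:, None] * 1.5

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    m = len(scores)
    order = np.argsort(scores)
    ranks = np.empty(m)
    ranks[order] = np.arange(1, m + 1)
    pos = labels == 1
    n1, n0 = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

# Rank features by absolute correlation with the label and keep the top k
# (the paper ranks with its similarity measure instead).
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
top = np.argsort(corr)[::-1][:3]

# Score the selected subset with a deliberately simple sum-of-features score.
combined = X[:, top].sum(axis=1)
print(top, auc(combined, y))
```

With a good ranking, the selected minimum subset recovers the informative attributes and the AUC stays high despite discarding most features, which is the trade-off the abstract's metrics measure.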
Extending A Trial’s Design Case Studies Of Dealing With Study Design IssuesnQuery
About the webinar
As trials increase in complexity and scope, trial designs must reflect this.
From non-proportional hazards in survival analysis to cluster randomization, we examine how to deal with the study design issues of complex trials.
In this free webinar, you will learn about:
Dealing with study design issues
Practical worked examples of
Non-proportional Hazards
Cluster Randomization
Three Armed Trials
Non-proportional Hazards
Non-proportional hazards and complex survival curves have attracted increasing interest because they are commonly seen in immunotherapy development. This has led to interest in assessing the robustness of standard methods and in alternative methods that adapt better to such deviations.
In this webinar, we look at methods proposed for complex survival curves and the weighted log-rank test as a candidate model to deal with a delayed survival effect.
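A weighted log-rank statistic of the Fleming–Harrington G(ρ, γ) family, the kind of candidate discussed for a delayed effect, can be sketched directly. This is an illustrative numpy implementation with made-up event times, not nQuery's code.

```python
import numpy as np

def weighted_logrank(times, events, group, rho=0.0, gamma=1.0):
    """Fleming–Harrington G(rho, gamma) weighted log-rank Z statistic.
    G(0, 0) is the standard log-rank; G(0, 1) upweights late differences,
    which suits a delayed treatment effect."""
    times, events, group = map(np.asarray, (times, events, group))
    order = np.argsort(times)
    times, events, group = times[order], events[order], group[order]
    s = 1.0          # left-continuous pooled Kaplan-Meier estimate S(t-)
    num = var = 0.0
    for t in np.unique(times[events == 1]):
        at_risk = times >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        dead = (times == t) & (events == 1)
        d = dead.sum()
        d1 = (dead & (group == 1)).sum()
        w = s ** rho * (1 - s) ** gamma
        num += w * (d1 - d * n1 / n)                 # observed minus expected
        if n > 1:
            var += w * w * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
        s *= 1 - d / n                               # update KM after using S(t-)
    return num / np.sqrt(var)

# Two groups whose curves separate only late, mimicking a delayed effect:
t_control = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
t_treated = [1, 2, 3, 4, 5, 12, 14, 16, 18, 20]
z = weighted_logrank(t_control + t_treated, [1] * 20, [0] * 10 + [1] * 10)
```

Because the early event times are identical across arms, the G(0, 1) weights concentrate the statistic on the late separation, where the standard unweighted test loses power.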
Cluster Randomization
Cluster-randomized designs are often adopted when there is a high risk of contamination if cluster members were randomized individually. Stepped-wedge designs are useful in cases where it is difficult to apply a particular treatment to half of the clusters at the same time.
In this webinar, we introduce cluster randomization and stepped-wedge designs to provide an insight into the requirements of more complex randomization schedules.
Three Armed Trials
Non-inferiority testing is a common hypothesis test in the development of generic medicines and medical devices. The most common design compares the proposed non-inferior treatment to the standard treatment alone, but this leaves it uncertain whether the treatment effect is the same as in previous studies. This “assay sensitivity” problem can be resolved by using a three-arm trial that includes a placebo alongside the new and reference treatments for direct comparison.
In this webinar we show a complete testing approach to this gold-standard design and how to find the appropriate allocation and sample size for such a study.
Duration - 60 minutes
Speaker: Ronan Fitzpatrick, Head of Statistics, Statsols
Ohio River Valley Spring 2011
1. Quality Assurance Utilizing 3D Dose Reconstruction for Stereotactic Lung Radiotherapy
James Durgin, Michael Weldon, Nilendu Gupta
Ohio River Valley AAPM
Spring Educational Symposium
March 5, 2011
2. Overview of Lung SBRT Program
OSU Experience
Began in 2008
53 lesions, 43 patients
Mean/Mode Rx: 9 Gy x 5
1 biopsy-proven recurrence
1 imaging-based recurrence
Low toxicity profile
6 MV Siemens Oncor accelerator
Research Project
41 plans
Both recurrences in data set
Non-IMRT
No wedges
All heterogeneous calculations using AAA in Eclipse
The Ohio State University Comprehensive Cancer Center – Arthur G. James Cancer Hospital and Richard J. Solove Research Institute
3. Compass Overview
(Diagram: Plan Data, Backprojected Measurements, Forward Calculated Dose)
5. Clinical Challenges
Point calculations are less than ideal
Inhomogeneities
Scatter
Small field sizes
Detector arrays are calculated for homogeneous materials
Effect of multiple entry points unknown
7. Point Calculation Analysis
Accessed point calculations in patient chart
RadCalc software
Utilized equivalent path length, field size scaling
Calculated non-weighted field average
Avg = -0.29%
8. Secondary Forward Calculation
IBA’s Compass software
Collapsed cone algorithm
Incorporates heterogeneity calculations
Subject to commissioning differences
(Figures: TPS dose, forward calculated dose, DVH comparison, dose difference map)
9. Compass Beam Model Commissioning
Same input data/physicist commissioning as Eclipse
Good agreement down to 3x3 cm in solid water
(Figures: TPS dose, forward calculated dose, DVH comparison, dose difference map)
10. Backprojected 3D Dose
IBA’s Compass software, Matrixx hardware used
Mean PTV dose and DVH statistics analyzed
(Figures: TPS dose, backprojected dose, DVH comparison, dose difference map)
12. Comparison Methods
Retrospective analysis using point measurements
Average point calculation vs. Eclipse prescription
Comparison of calculation differences
Forward calculation vs. Eclipse for mean PTV
Measured backprojected dose compared to TPS
Backprojection vs. Eclipse for mean PTV
Within Compass differences
Backprojected vs. forward calculation for mean PTV
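All four comparisons listed above reduce to the same percent-difference computation on mean PTV dose. A sketch with made-up doses (the study's real per-plan values are not reproduced here):

```python
import numpy as np

# Hypothetical mean PTV doses (Gy) for a handful of plans.
tps      = np.array([45.2, 44.8, 45.5, 45.0])  # Eclipse (AAA)
forward  = np.array([45.6, 44.5, 45.9, 44.6])  # Compass collapsed cone
backproj = np.array([45.9, 44.3, 46.1, 44.4])  # Compass backprojected

def pct_diff(test, ref):
    """Percent difference of mean PTV dose relative to the reference."""
    return 100.0 * (test - ref) / ref

for name, arr in [("forward vs TPS", pct_diff(forward, tps)),
                  ("backprojected vs TPS", pct_diff(backproj, tps)),
                  ("backprojected vs forward", pct_diff(backproj, forward))]:
    print(name, np.round(arr, 2))
```

The backprojected-vs-forward comparison isolates measurement effects from commissioning differences, since both doses come from the same Compass beam model.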
13. Percent Change from Eclipse Mean PTV
(Histogram: number of plans per 1% percent-difference bin, -5% to +6%; series: point calculation to TPS)
14. Percent Change from Eclipse Mean PTV
(Histogram: number of plans per 1% percent-difference bin, -5% to +6%; series: point calculation to TPS, forward calculated in Compass to TPS)
15. Percent Change from Eclipse Mean PTV
(Histogram: number of plans per 1% percent-difference bin, -5% to +6%; series: point calculation to TPS, forward calculated in Compass to TPS, backprojected to TPS)
16. Percent Change from Eclipse Mean PTV
(Histogram: number of plans per 1% percent-difference bin, -5% to +6%; series: point calculation to TPS, forward calculated in Compass to TPS, backprojected to TPS, backprojected to forward calculated)
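The bin layout used on these histogram slides (1%-wide bins from -5% to +6%) is easy to tabulate with numpy.histogram; the percent-difference values below are made up for illustration, not the study's data.

```python
import numpy as np

edges = np.arange(-5, 7)  # bin edges -5, -4, ..., +6 (1%-wide, half-open bins)
pct_diffs = np.array([-1.2, -0.4, 0.3, 0.8, 1.1, 2.6, -0.9, 0.1, 5.2])
counts, _ = np.histogram(pct_diffs, bins=edges)
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:+d} to {hi:+d}: {c}")
```

Tabulating all four comparison series into the same bins is what makes the widening (or narrowing) of each method's error distribution visible at a glance.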
17. Mean PTVs Aren’t the Whole Story Though
18. OAR/Coverage Statistics for Backprojected Dose Compared to Eclipse
OARs receiving >20% of Rx, dose maximum analyzed
Max dose for 2 spinal cord structures increased >5%
Max dose for 1 esophagus structure increased >5%
Max dose for 1 heart structure increased >5%
Max dose for 1 brachial plexus structure increased >5%
Max dose for 0 skin structures increased >5%
Coverage of 95% isodose line
2 PTVs experienced a drop in 95% coverage of more than 5%
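The flagging rule used on this slide (maximum dose increased by more than 5% of the TPS value) is simple to script. The structure names and dose values below are hypothetical, not the study's data.

```python
# Hypothetical per-structure maximum doses (Gy): (TPS, backprojected).
structures = {
    "spinal cord": (18.0, 19.2),
    "esophagus":   (22.0, 22.4),
    "heart":       (25.0, 26.5),
    "skin":        (30.0, 30.2),
}

def flag_increases(doses, threshold=5.0):
    """Return structures whose max dose rose by more than `threshold` percent."""
    flagged = {}
    for name, (tps_max, recon_max) in doses.items():
        change = 100.0 * (recon_max - tps_max) / tps_max
        if change > threshold:
            flagged[name] = round(change, 1)
    return flagged

print(flag_increases(structures))
```

Applying the same rule to every OAR receiving more than 20% of the prescription reproduces the per-structure tally reported on the slide.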
20. Summary
QA is a process of constant improvement
Ultimately TPS determines dose
What to trust determines the success of QA
Measurements/reconstructed dose have value, but resources must be used wisely
3D reconstructed dose provides variability analysis for plans that pass traditional QA procedures
22. References
Per-beam, Planar IMRT QA Passing Rates Do Not Predict Clinically Relevant Patient Dose Errors, 2011
Comparison of DVH data from multiple radiotherapy treatment planning systems, 2010
US Patent Application: Radiation Therapy Dose Perturbation System and Method, 2009
23. Bonus 1: 3DVH
Sun Nuclear software using dose error kernels
Compared Compass to TPS differences >3% for mean PTV
Percent Change Between QA Methods
(Histogram: number of plans per 1% percent-difference bin, -5% to +6%; series: 3DVH measured to TPS, Compass measured to forward calculated, Compass measured to TPS)