This document describes a method for detecting pulmonary nodules in CT scans using genetic programming. It first segments the lung regions from CT images and extracts nodule candidates, then extracts features from each candidate. Genetic programming classifies candidates as nodules or non-nodules by optimizing combinations of features. The method was tested on a publicly available lung image database, achieving a true positive rate of over 90% with a low false positive rate.
Computer Aided Detection of Pulmonary Nodules in CT Scans - Wookjin Choi
The document discusses computer-aided detection of pulmonary nodules in CT scans. It introduces lung cancer as a major health problem and describes how detecting nodules early can improve survival rates. It then provides an overview of pulmonary nodule detection CAD systems, describing their general structure and evaluating various approaches in the literature. Key contributions include genetic programming and shape-based classifiers, and a hierarchical block analysis method, which achieved high performance on a publicly available lung image database.
Computer-aided Detection of Pulmonary Nodules using Genetic Programming - Wookjin Choi
This document describes a study that used genetic programming to develop an accurate classifier for detecting pulmonary nodules on CT scans. The proposed method involved segmenting the lungs, detecting nodule candidates, extracting features, and using genetic programming to evolve a combination of features and functions to classify nodules versus non-nodules. When tested on 153 nodules across 32 CT scans, the genetic programming classifier achieved a sensitivity of 92% and specificity of 86%.
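The evolved classifier described above is essentially an expression tree that combines candidate features through arithmetic operators. A minimal sketch of how such a tree might be evaluated follows; the example tree, the feature names, and the protected-division convention are illustrative assumptions, not the paper's actual evolved program:

```python
# Hypothetical evolved program: a nested (operator, args...) tree combining
# candidate features. The real evolved expression is not given in the abstract.
PROGRAM = ("sub",
           ("mul", ("feat", "sphericity"), ("feat", "mean_intensity")),
           ("feat", "elongation"))

def evaluate(node, features):
    """Recursively evaluate a GP expression tree on one candidate's features."""
    op = node[0]
    if op == "feat":
        return features[node[1]]
    a = evaluate(node[1], features)
    b = evaluate(node[2], features)
    if op == "add": return a + b
    if op == "sub": return a - b
    if op == "mul": return a * b
    if op == "div": return a / b if abs(b) > 1e-9 else 1.0  # protected division
    raise ValueError("unknown operator: " + op)

def classify(features, threshold=0.0):
    """Label a candidate as nodule (True) when the program output exceeds a threshold."""
    return evaluate(PROGRAM, features) > threshold

candidate = {"sphericity": 0.9, "mean_intensity": 0.7, "elongation": 0.2}
print(classify(candidate))  # prints True: 0.9*0.7 - 0.2 = 0.43 > 0
```

During evolution, trees like `PROGRAM` would be mutated and recombined, with fitness scored by sensitivity and specificity on the training candidates.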
Image Processing in Lung Cancer Screening and Treatment - Wookjin Choi
The document discusses image processing techniques for lung cancer screening and treatment. It covers topics like lung segmentation, nodule detection, computer-aided diagnosis, image-guided radiotherapy, and quantitative assessment of tumor response. Lung segmentation is used to isolate the lungs from other organs in CT images. Nodule detection algorithms then aim to find potential cancerous nodules. Computer-aided diagnosis systems analyze extracted features of nodules to determine if they are malignant or benign. Image-guided radiotherapy utilizes 4D CT and gating to account for tumor motion during treatment. Quantitative metrics like standardized uptake value are used to assess tumor response in PET imaging.
Automatic Detection of Pulmonary Nodules in Lung CT Images - Wookjin Choi
The document discusses lung cancer detection using CT scans and pulmonary nodule detection systems. It describes how CT scans are used to detect lung nodules early and increase survival rates. It then discusses the challenges of evaluating large CT data sets and the use of pulmonary nodule detection CAD systems to assist radiologists. The document goes on to describe a proposed CAD system that includes lung segmentation, nodule candidate detection using multi-thresholding and feature extraction, and a genetic programming based classifier to analyze features and detect nodules. Experimental results on a publicly available lung image database show the system achieved over 80% accuracy on test data for nodule detection.
Robust breathing signal extraction from cone beam CT projections based on ada... - Wookjin Choi
This document summarizes a research paper that proposes a novel method for extracting breathing signals from cone beam CT projections without using external markers. The method uses an adaptive filtering technique to enhance weak oscillating structures in the Amsterdam Shroud image generated from the projections. A two-step optimization approach is then used to reveal the large-scale regularity of the breathing signals. Evaluation on 5 patient data sets found the new algorithm outperformed existing methods by extracting less noisy signals with errors of only -0.07±1.58 breaths per minute compared to reference signals. While results are promising, the study had a small data set and image quality remains limited.
Optimal Fuzzy Rule Based Pulmonary Nodule Detection - Wookjin Choi
The document describes a lung cancer detection system that uses CT scans. It discusses (1) segmenting the lungs from CT images using adaptive thresholding and connected component analysis, (2) detecting nodule candidate regions using multi-thresholding and rule-based pruning, and (3) optimizing the rule-based pruning using a genetic algorithm trained fuzzy inference system to reduce false positives while maintaining high sensitivity. Experimental results on a publicly available lung image database show the optimized fuzzy system achieved better performance than a conventional rule-based approach.
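The GA-optimized fuzzy pruning step above can be sketched as a small fuzzy inference over candidate features. The triangular membership parameters, the feature names, and the acceptance cutoff below are illustrative placeholders for values the genetic algorithm would tune, not the study's actual parameters:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical membership parameters; in the paper these would be tuned by the
# genetic algorithm to cut false positives while keeping sensitivity high.
def nodule_score(volume_cc, sphericity):
    mu_size = tri(volume_cc, 0.01, 0.5, 5.0)    # "nodule-sized" candidate
    mu_round = tri(sphericity, 0.4, 1.0, 1.6)   # "roughly spherical" candidate
    return min(mu_size, mu_round)                # fuzzy AND of the two rules

def keep_candidate(volume_cc, sphericity, cutoff=0.3):
    """Prune a candidate unless its fuzzy nodule score reaches the cutoff."""
    return nodule_score(volume_cc, sphericity) >= cutoff
```

A conventional rule-based pruner uses hard thresholds on each feature; the fuzzy version degrades gracefully near the boundaries, which is what makes GA tuning of the membership shapes worthwhile.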
Image quality assessment of contrast-enhanced 4D-CT for pancreatic adenocarci... - Wookjin Choi
Quantitative and qualitative assessment of the image quality in contrast-enhanced (CE) 3D-CT, 4D-CT, and CE 4D-CT to determine the feasibility of replacing the current clinical standard simulation with a single CE 4D-CT for pancreatic adenocarcinoma (PDA) in radiotherapy simulation.
Individually Optimized Contrast-Enhanced 4D-CT for Radiotherapy Simulation in... - Wookjin Choi
To develop an individually optimized contrast-enhanced (CE) 4D-CT for radiotherapy simulation in pancreatic ductal adenocarcinomas (PDA).
http://scitation.aip.org/content/aapm/journal/medphys/43/6/10.1118/1.4958261
Identification of Robust Normal Lung CT Texture Features - Wookjin Choi
Normal lung CT texture features have been used for the prediction of radiation-induced lung disease (radiation pneumonitis and radiation fibrosis). For these features to be clinically useful, they need to be relatively invariant (robust) to tumor size and not correlated with normal lung volume.
http://scitation.aip.org/content/aapm/journal/medphys/43/6/10.1118/1.4955803
Image segmentation remains an active and relevant research area in computer vision, and hundreds of image segmentation techniques have been proposed. Each technique has its own applicability and accuracy. This paper reviews some of the best existing lung nodule detection and segmentation techniques, and concludes by highlighting one method that offers high accuracy and can be used to detect very small lung nodules reliably.
Radiomics and Deep Learning for Lung Cancer Screening - Wookjin Choi
The document summarizes research on using radiomics and deep learning approaches for lung cancer screening. It describes:
1) Using radiomic features like shape, texture, and intensity from lung nodules on CT scans and an SVM-LASSO model to classify nodules with 87.9% sensitivity and 78.2% specificity, outperforming the Lung-RADS system.
2) A deep learning model developed for a Kaggle competition that achieved 67.4% accuracy on nodule classification but ranked only 99th, limited by overfitting on insufficient training data.
3) Future work could integrate quantification of nodule characteristics like spiculation with plasma biomarkers to improve diagnostic accuracy.
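Figures like the 87.9% sensitivity and 78.2% specificity above come straight from a confusion matrix over the test nodules. A minimal sketch:

```python
def sensitivity_specificity(labels, predictions):
    """Compute (sensitivity, specificity) from binary labels and predictions.
    Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    tp = sum(1 for y, p in zip(labels, predictions) if y and p)
    tn = sum(1 for y, p in zip(labels, predictions) if not y and not p)
    fp = sum(1 for y, p in zip(labels, predictions) if not y and p)
    fn = sum(1 for y, p in zip(labels, predictions) if y and not p)
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: 3 malignant (1) and 2 benign (0) nodules
sens, spec = sensitivity_specificity([1, 1, 1, 0, 0], [1, 1, 0, 0, 1])
print(sens, spec)  # prints 0.666... 0.5
```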
Individually Optimized Contrast-Enhanced 4D-CT for Radiotherapy Simulation in... - Wookjin Choi
Purpose/Objectives: To develop an individually optimized contrast-enhanced (CE) 4D-CT for radiotherapy simulation in pancreatic adenocarcinoma (PDA).
Materials/Methods: Ten PDA patients were enrolled and underwent three CT scans: a 4D-CT immediately following a CE 3D-CT, and an individually optimized CE 4D-CT using a test injection to estimate the peak contrast enhancement time and to optimize the delay time. Three physicians contoured the tumor and pancreatic tissues. We compared image quality scores, tumor volume, motion, image noise, tumor-to-pancreas contrast, and contrast-to-noise ratio (CNR) in the three CTs. We also evaluated inter-observer variations in contouring the tumor using simultaneous truth and performance level estimation (STAPLE).
Results: The average image quality scores for CE 3D-CT and CE 4D-CT were comparable (4.0 and 3.8, p=0.47), and both were significantly better than that for 4D-CT (2.6, p<0.001). The tumor-to-pancreas contrast in CE 3D-CT and CE 4D-CT were comparable (15.5 and 16.7 HU, p=0.71), and the latter was significantly higher than that in 4D-CT (9.2 HU, p=0.03). Image noise in CE 3D-CT (12.5 HU) was significantly lower than that in CE 4D-CT (22.1 HU, p<0.001) and 4D-CT (19.4 HU, p=0.005). The CNR in CE 3D-CT and CE 4D-CT were comparable (1.4 and 0.8, p=0.23), and the former was significantly better than that in 4D-CT (0.6, p=0.04). The average tumor volume was smaller in CE 3D-CT (29.8 cm³) and CE 4D-CT (22.8 cm³) than in 4D-CT (42.0 cm³), though the differences were not statistically significant. The tumor motion was comparable in 4D-CT and CE 4D-CT (7.2 and 6.2 mm, p=0.23). The inter-observer variations were comparable in CE 3D-CT and CE 4D-CT (Jaccard index 66.0% and 61.9%), and the former was significantly smaller than that of 4D-CT (55.6%, p=0.047).
Conclusions: The CE 4D-CT demonstrated largely comparable characteristics to the CE 3D-CT. It has high potential for simultaneously delineating the tumor and quantifying the tumor motion with a single scan.
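The tumor-to-pancreas contrast and CNR metrics reported above can be sketched as follows. The abstract does not state exactly how image noise was measured; using the HU standard deviation in a uniform background ROI is an assumption here, and the ROI values are toy data:

```python
from statistics import mean, stdev

def tumor_to_pancreas_contrast(tumor_hu, pancreas_hu):
    """Absolute difference of mean HU between tumor and pancreas ROIs."""
    return abs(mean(tumor_hu) - mean(pancreas_hu))

def cnr(tumor_hu, pancreas_hu, background_hu):
    """Contrast-to-noise ratio: ROI contrast divided by image noise,
    here estimated as the HU standard deviation in a uniform background ROI
    (one common convention; the study's exact definition is not given)."""
    return tumor_to_pancreas_contrast(tumor_hu, pancreas_hu) / stdev(background_hu)

# Toy HU samples from three ROIs
print(tumor_to_pancreas_contrast([50, 52, 48], [30, 32, 34]))  # prints 18
```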
Robust Normal Lung CT Texture Features for the Prediction of Radiation-Induce... - Wookjin Choi
Abstract
Purpose/Objective(s)
Normal lung CT texture features have been used for the prediction of radiation-induced lung disease (RILD) - pneumonitis and fibrosis. For these features to be clinically useful, they need to be relatively invariant (robust) to tumor size and not correlated with normal lung volume.
Materials/Methods
The free-breathing CTs of 14 lung SBRT patients were studied. Different sizes of GTVs were simulated with spheres (diameters 10 to 60 mm) placed in the lung contralateral to the tumor. Twenty-seven texture features (9 from intensity histogram, 8 from the gray-level co-occurrence matrix [GLCM], and 10 from the gray-level run-length matrix [GLRM]) were extracted from [lung – GTV]. The Bland-Altman method was applied to measure the normalized range of agreement (nRoA) of each texture feature when GTV size varied. A feature was considered as robust when its nRoA was less than that of [lung – GTV] volume (8.8%) and regarded as not correlated when their absolute correlation coefficient was lower than 0.70.
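The Bland-Altman robustness test above can be sketched as follows. The nRoA here is taken as the width of the 95% limits of agreement (2 × 1.96 × SD of the paired differences) normalized by the mean feature value, expressed as a percentage; this is one plausible formulation, and the paper's exact definition may differ:

```python
from statistics import mean, stdev

def normalized_range_of_agreement(x1, x2):
    """Bland-Altman nRoA (%) for paired feature values, e.g. one texture
    feature extracted under two different simulated GTV sizes.
    nRoA = width of the 95% limits of agreement, normalized by the mean value."""
    diffs = [a - b for a, b in zip(x1, x2)]
    overall_mean = mean(x1 + x2)
    return 100.0 * (2 * 1.96 * stdev(diffs)) / abs(overall_mean)

def is_robust(x1, x2, threshold_pct=8.8):
    """A feature counts as robust when its nRoA stays below the threshold.
    8.8% is the nRoA of the [lung - GTV] volume reported in the abstract."""
    return normalized_range_of_agreement(x1, x2) < threshold_pct
```

Features whose values barely move as the simulated GTV grows yield a small SD of differences and hence a small nRoA, which is exactly the invariance the study is after.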
Results
Eighteen texture features were identified as robust. All intensity histogram features were robust except sum and kurtosis. All GLCM features were robust except energy and Haralick's Correlation. Five GLRM features (two run emphasis and three high gray-level emphasis) were robust while the other five (two nonuniformity and three low gray-level emphasis) were nonrobust. Particularly, all three low gray-level emphasis features had extremely large nRoAs (∼30%), indicating huge variations when GTV size changed. None of the robust features was correlated with the normal lung [lung – GTV] volume, suggesting that they can provide additional information. Three nonrobust features (sum and two nonuniformity features) were highly correlated with the normal lung volume. No feature showed statistically significant differences (P < 0.05) with respect to GTV location (upper vs. lower lobe).
Conclusion
We identified 18 robust lung CT texture features which were invariant to varying tumor volumes. Particularly the three GLRM high gray-level emphasis features can characterize the radiologic manifestations of pulmonary abnormalities. Hence these features can be further examined for the prediction of the RILD.
Robust Normal Lung CT Texture Features for the Prediction of Radiation-Induce... - Wookjin Choi
Abstract
Purpose: Normal lung CT texture features have been used for the prediction of radiation-induced lung disease (RILD) - pneumonitis and fibrosis. For these features to be clinically useful, they should be robust (relatively invariant or unbiased) to tumor size variations and not correlated (non-redundant) with the normal lung volume of interest, i.e., volume of the peri-tumoral region.
Methods: CT images of 14 lung cancer patients were studied. Different sizes of gross tumor volumes (GTVs) were simulated with spheres (diameters 10 to 60 mm) and placed in the lung contralateral to the tumor. 27 texture features [nine from intensity histogram, eight from the gray-level co-occurrence matrix (GLCM) and ten from the gray-level run-length matrix (GLRM)] were extracted from the peri-tumoral region (uniform 30 mm expansion around the GTV in the lung). The Bland-Altman analysis was applied to measure the normalized range of agreement (nRoA) for each feature when GTV size varied. A feature was considered as robust when its nRoA was less than the threshold (100%), which was chosen at the nRoA of the volume of the peri-tumoral region with modification based on the cumulative graph of features vs. nRoA. A feature was regarded as not correlated with the volume of the peri-tumoral region when their correlation was lower than 0.70.
Results: 16 of the 27 texture features were identified as robust. All intensity histogram features were robust except sum and kurtosis. All GLCM features were robust except cluster shade, cluster prominence, and Haralick's Correlation. Five GLRM features (two run emphasis and three high gray-level emphasis) were robust while the other five (two nonuniformity and three low gray-level emphasis) were nonrobust. None of the robust features was correlated with the volume of the peri-tumoral region. No feature showed statistically significant differences (P<0.05) with respect to GTV location (upper vs. lower lobe).
Conclusion: We identified 16 robust normal lung CT texture features that can be further examined for the prediction of RILD. Particularly, GLRM high gray-level emphasis features were robust and characterized the radiologic manifestations of pulmonary abnormalities.
Quantitative Image Feature Analysis of Multiphase Liver CT for Hepatocellular... - Wookjin Choi
To identify the effective quantitative image features (radiomics features) for prediction of response, survival, recurrence and metastasis of hepatocellular carcinoma (HCC) in radiotherapy.
Lung Nodule Segmentation in CT Images using Rotation Invariant Local Binary P... - IDES Editor
Since lung cancer is the leading cause of cancer death, Computed Tomography (CT) of the thorax is widely applied in diagnoses for identifying lung cancer. In this paper, a rotation-invariant Local Binary Pattern (LBP) technique is used to segment various lung nodules from lung CT cancer data sets. It is tested on various lung data sets from the teaching files of the Casimage database and the National Cancer Institute (NCI) National Biomedical Imaging Archive (NBIA). The results show segmented nodules with clear boundaries, which is helpful in the diagnosis of lung cancer. Further, the results are compared with the watershed segmentation method, showing that the LBP-based method yields better segmentation accuracy.
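The rotation-invariant LBP code underlying the segmentation technique above can be sketched for a single pixel. This is an 8-neighbor sketch; the paper's actual sampling radius and neighborhood size are not specified here:

```python
def lbp_ri(center, neighbors):
    """Rotation-invariant LBP code for one pixel: threshold the 8 circular
    neighbors at the center intensity, then take the minimum value over all
    rotations of the resulting 8-bit pattern, so rotated textures map to
    the same code."""
    bits = [1 if n >= center else 0 for n in neighbors]
    codes = []
    for r in range(8):
        rotated = bits[r:] + bits[:r]
        codes.append(sum(b << i for i, b in enumerate(rotated)))
    return min(codes)

# The same local structure rotated by one neighbor position gives the same code
print(lbp_ri(5, [9, 0, 0, 0, 0, 0, 0, 0]))  # prints 1
print(lbp_ri(5, [0, 9, 0, 0, 0, 0, 0, 0]))  # prints 1
```

Segmentation then proceeds by computing such codes over the image and grouping pixels with nodule-like texture statistics.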
PERFORMANCE EVALUATION OF TUMOR DETECTION TECHNIQUES - ijcsa
Automatic tumor segmentation plays a vital role in diagnosis and surgical planning. This paper deals with techniques for detecting hepatic tumors in Computed Tomography (CT) images. The main aim of this work is to analyze the performance of tumor detection techniques such as knowledge-based constraints, the graph cut method, and the gradient vector flow active contour. The three techniques are evaluated using sensitivity, specificity, and accuracy. The evaluated results show that the knowledge-based constraints method outperforms both the graph cut method and the gradient vector flow active contour.
Aggressive Lung Adenocarcinoma Subtype Prediction Using FDG-PET/CT Radiomics - Wookjin Choi
Purpose: To predict the histopathologic subtypes with poor surgery prognosis in early stage lung adenocarcinomas using CT and PET radiomics.
Methods: We retrospectively enrolled 53 patients with stage I lung adenocarcinoma who underwent both diagnostic CT and 18F-fluorodeoxyglucose (FDG) PET/CT before complete surgical resection of the tumors. Tumor segmentation was manually contoured by a physician on both the diagnostic CT and the attenuation CT of PET/CT. A total of 170 radiomics features were extracted on both PET and CT images to design predictive models for two histopathologic endpoints: (1) tumors with solid or micropapillary predominant subtype (aggressiveness), and (2) tumors with micropapillary component more than 5% (MIP5). We used least absolute shrinkage and selection operator (LASSO) as a model building method coupled with a class separability feature selection (CSFS) method. For an unbiased model estimate, a 10-fold cross validation approach was used. The area under the curve (AUC) and prediction accuracy were employed to evaluate the performance of the model. P-values were computed using Wilcoxon rank-sum test.
Results: Of the 53 patients, 9 and 15 had tumors with aggressiveness and MIP5, respectively. For both endpoints, LASSO models with two PET radiomics features achieved the best performance. For aggressiveness, the LASSO model with PET Cluster Shade and PET 2D Variance resulted in 77.6±2.3% accuracy and 0.71±0.02 AUC (P = 0.011). For MIP5, the LASSO model with PET Eccentricity and PET Cluster Shade resulted in 69.6±3.1% accuracy and 0.68±0.04 AUC (P=0.014). The PET Cluster Shade was commonly selected in both models. Cluster shade is a texture feature that measures the skewness of the co-occurrence matrix. Higher PET cluster shade predicted that the tumor was more aggressive and more likely MIP5.
Conclusion: We showed that PET/CT radiomics features can predict tumor aggressiveness.
Funding Support, Disclosures, and Conflict of Interest: This work was supported in part by the National Cancer Institute Grants R01CA172638.
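The AUC values reported above are equivalent to the Wilcoxon-Mann-Whitney statistic: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch:

```python
def auc(scores_pos, scores_neg):
    """AUC as the Wilcoxon-Mann-Whitney statistic: fraction of
    (positive, negative) pairs where the positive scores higher,
    with ties counting 0.5."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Perfectly separated toy scores give AUC = 1.0
print(auc([0.9, 0.8], [0.1, 0.2]))  # prints 1.0
```

In the study, the model score would be the LASSO output for each patient, averaged over the 10 cross-validation folds.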
Radiomics Analysis of Pulmonary Nodules in Low Dose CT for Early Detection of... - Wookjin Choi
The International Journal of Engineering and Science (The IJES) - theijes
This document summarizes a research paper on developing a computer-aided diagnosis system for early detection of liver cancer from CT chest images. The proposed system involves extracting features from segmented liver regions of CT images using techniques like noise removal, segmentation, and morphological operations. Features are then extracted and can be classified using Hidden Markov Models to identify liver cancer cells at an early stage and improve diagnosis. The authors suggest future work to refine cancer cell classification and reduce time complexity for improved diagnosis confidence.
Predicting lung cancer is a challenging problem because of the structure of cancer cells, most of which overlap one another. Image processing techniques are widely used for predicting lung cancer, as well as for early detection and treatment to prevent it. Since various features are extracted from the images, pattern recognition based approaches are useful for prediction. Here, a comprehensive review and summary of lung cancer prediction by previous researchers using image processing techniques is presented.
Artificial Intelligence in Radiation Oncology - Wookjin Choi
The document discusses artificial intelligence applications in radiation oncology. It begins with acknowledgements and then outlines topics including radiomics decision support tools, automatic delineation and variability analysis, and applications like lung cancer screening, tumor response prediction, and aggressive lung adenocarcinoma subtype prediction. Radiomics frameworks and deep learning models are presented. Results show potential for AI to provide quantitative imaging biomarkers and improve outcomes in areas like screening, treatment planning, and response assessment.
IRJET- Image Processing based Lung Tumor Detection System for CT Images - IRJET Journal
This document presents a method for detecting lung tumors in CT scan images using image processing techniques. The proposed method involves preprocessing images using median filtering for noise removal and contrast adjustment for enhancement. The lungs are then segmented from the images using mathematical morphology. Geometric and textural features are extracted from the segmented region of interest and used as input for an SVM classifier to detect lung cancer. The methodology was tested on a dataset from The Cancer Imaging Archive and was able to successfully detect lung tumors in CT images.
Image-guided liver cancer modeling for computer-aided diagnosis and treatment - Antoine Vacavant
HCC (hepatocellular carcinoma) is the most common primary liver cancer and the third leading cause of cancer death worldwide. Diagnosis is generally conducted through various medical imaging modalities (ultrasound, CT, MRI) and, depending on the characterization of the HCC (number and size of nodules, early or late staging, etc.), different therapeutic strategies can be delivered to the patient: radiofrequency ablation, liver resection surgery, chemo-embolization, etc. In this talk, I first present a novel ontological approach that represents HCC detection, staging, and treatment in a single information system framework, enabling a complete digital patient follow-up. Since numerically representing the liver's geometry is an important concern in such a system, I then present our most recent algorithms for reconstructing the liver volume and inner vessels in 3-D from CT and MRI data. We also consider different applications employing the outcomes provided by these tools. (1) The standardized Couinaud liver representation can be calculated from the shape of the vasculature and permits locating HCC nodules in a reproducible way. (2) By isolating the liver volume, we automatically detect HCC tissues within DCE-MRI (dynamic contrast-enhanced MRI) sequences using two approaches: SVM-based classification and an adapted U-Net deep learning model. (3) We have also studied the numerical simulation of hepatic perfusion using finite-element models of the liver and its vessels. The talk concludes with our future prospects for improving these methodologies and combining them into novel computer-aided HCC diagnosis and treatment systems.
Individually Optimized Contrast-Enhanced 4D-CT for Radiotherapy Simulation in...Wookjin Choi
To develop an individually optimized contrast-enhanced (CE) 4D-CT for radiotherapy simulation in pancreatic ductal adenocarcinomas (PDA).
http://scitation.aip.org/content/aapm/journal/medphys/43/6/10.1118/1.4958261
Identification of Robust Normal Lung CT Texture FeaturesWookjin Choi
Normal lung CT texture features have been used for the prediction of radiation-induced lung disease (radiation pneumonitis and radiation fibrosis). For these features to be clinically useful, they need to be relatively invariant (robust) to tumor size and not correlated with normal lung volume.
http://scitation.aip.org/content/aapm/journal/medphys/43/6/10.1118/1.4955803
Image segmentation is still an active reason of research, a relevant research area
in computer vision and hundreds of image segmentation techniques have been proposed by
the researchers. All proposed techniques have their own usability and accuracy. In this paper
we are going present a review of some best lung nodule existing detection and segmentation
techniques. Finally, we conclude by focusing one of the best methods that may have high
level accuracy and can be used in detection of lung very small nodules accurately.
Radiomics and Deep Learning for Lung Cancer ScreeningWookjin Choi
The document summarizes research on using radiomics and deep learning approaches for lung cancer screening. It describes:
1) Using radiomic features like shape, texture, and intensity from lung nodules on CT scans and an SVM-LASSO model to classify nodules with 87.9% sensitivity and 78.2% specificity, outperforming the Lung-RADS system.
2) A deep learning model developed for a Kaggle competition that achieved 67.4% accuracy on nodule classification but only ranked 99th due to overfitting issues without enough data.
3) Future work could integrate quantification of nodule characteristics like spiculation with plasma biomarkers to improve diagnostic accuracy.
Individually Optimized Contrast-Enhanced 4D-CT for Radiotherapy Simulation in...Wookjin Choi
Purpose/Objectives: To develop an individually optimized contrast-enhanced (CE) 4D-CT for radiotherapy simulation in pancreatic adenocarcinoma (PDA).
Materials/Methods: Ten PDA patients were enrolled and underwent three CT scans: a 4D-CT immediately following a CE 3D-CT, and an individually optimized CE 4D-CT using a test injection to estimate the peak contrast enhancement time and to optimize the delay time. Three physicians contoured the tumor and pancreatic tissues. We compared image quality scores, tumor volume, motion, image noise, tumor-to-pancreas contrast, and contrast-to-noise ratio (CNR) in the three CTs. We also evaluated inter-observer variations in contouring the tumor using simultaneous truth and performance level estimation (STAPLE).
Results: The average image quality scores for CE 3D-CT and CE 4D-CT were comparable (4.0 and 3.8, p=0.47), and both were significantly better than that for 4D-CT (2.6, p<0.001). The tumor-to-pancreas contrast in CE 3D-CT and CE 4D-CT was comparable (15.5 and 16.7 HU, p=0.71), and the latter was significantly higher than that in 4D-CT (9.2 HU, p=0.03). Image noise in CE 3D-CT (12.5 HU) was significantly lower than that in CE 4D-CT (22.1 HU, p<0.001) and 4D-CT (19.4 HU, p=0.005). The CNR in CE 3D-CT and CE 4D-CT was comparable (1.4 and 0.8, p=0.23), and the former was significantly better than that in 4D-CT (0.6, p=0.04). The average tumor volume was smaller in CE 3D-CT (29.8 cm³) and CE 4D-CT (22.8 cm³) than in 4D-CT (42.0 cm³), though the differences were not statistically significant. The tumor motion was comparable in 4D-CT and CE 4D-CT (7.2 and 6.2 mm, p=0.23). The inter-observer variations were comparable in CE 3D-CT and CE 4D-CT (Jaccard index 66.0% and 61.9%), and the former was significantly smaller than that of 4D-CT (55.6%, p=0.047).
Conclusions: The CE 4D-CT demonstrated largely comparable characteristics to the CE 3D-CT. It has high potential for simultaneously delineating the tumor and quantifying the tumor motion with a single scan.
Robust Normal Lung CT Texture Features for the Prediction of Radiation-Induce...Wookjin Choi
Abstract
Purpose/Objective(s)
Normal lung CT texture features have been used for the prediction of radiation-induced lung disease (RILD) - pneumonitis and fibrosis. For these features to be clinically useful, they need to be relatively invariant (robust) to tumor size and not correlated with normal lung volume.
Materials/Methods
The free-breathing CTs of 14 lung SBRT patients were studied. Different sizes of GTVs were simulated with spheres (diameters 10 to 60 mm) placed in the lung contralateral to the tumor. Twenty-seven texture features (9 from intensity histogram, 8 from the gray-level co-occurrence matrix [GLCM], and 10 from the gray-level run-length matrix [GLRM]) were extracted from [lung – GTV]. The Bland-Altman method was applied to measure the normalized range of agreement (nRoA) of each texture feature when GTV size varied. A feature was considered as robust when its nRoA was less than that of [lung – GTV] volume (8.8%) and regarded as not correlated when their absolute correlation coefficient was lower than 0.70.
Results
Eighteen texture features were identified as robust. All intensity histogram features were robust except sum and kurtosis. All GLCM features were robust except energy and Haralick's Correlation. Five GLRM features (two run emphasis and three high gray-level emphasis) were robust while the other five (two nonuniformity and three low gray-level emphasis) were nonrobust. Particularly, all three low gray-level emphasis features had extremely large nRoAs (∼30%), indicating huge variations when GTV size changed. None of the robust features was correlated with the normal lung [lung – GTV] volume, suggesting that they can provide additional information. Three nonrobust features (sum and two nonuniformity features) were highly correlated with the normal lung volume. No feature showed statistically significant differences (P < 0.05) with respect to GTV location (upper vs. lower lobe).
Conclusion
We identified 18 robust lung CT texture features which were invariant to varying tumor volumes. Particularly the three GLRM high gray-level emphasis features can characterize the radiologic manifestations of pulmonary abnormalities. Hence these features can be further examined for the prediction of the RILD.
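The Bland-Altman robustness test above can be sketched in a few lines. The nRoA formula (width of the 95% limits of agreement, normalized by the mean feature value) is assumed from the standard Bland-Altman definition, and the feature values are toy numbers, not the study's data:

```python
import numpy as np

def nroa(a, b):
    """Normalized range of agreement (%) between paired feature measurements."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    loa_width = 2 * 1.96 * diff.std(ddof=1)   # width of the 95% limits of agreement
    return 100.0 * loa_width / np.mean((a + b) / 2)

# a texture feature measured with a 10 mm vs a 60 mm simulated GTV (toy numbers)
small_gtv = np.array([10.1, 12.3, 9.8, 11.5, 10.9])
large_gtv = np.array([10.0, 12.5, 9.9, 11.2, 11.0])
print(round(nroa(small_gtv, large_gtv), 1))   # below the 8.8% cutoff, so "robust"
```

A feature whose nRoA stays under the volume-based cutoff (8.8% in the abstract) barely changes when the simulated GTV grows, which is exactly the invariance being tested.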
Robust Normal Lung CT Texture Features for the Prediction of Radiation-Induce...Wookjin Choi
Abstract
Purpose: Normal lung CT texture features have been used for the prediction of radiation-induced lung disease (RILD) - pneumonitis and fibrosis. For these features to be clinically useful, they should be robust (relatively invariant or unbiased) to tumor size variations and not correlated (non-redundant) with the normal lung volume of interest, i.e., volume of the peri-tumoral region.
Methods: CT images of 14 lung cancer patients were studied. Different sizes of gross tumor volumes (GTVs) were simulated with spheres (diameters 10 to 60 mm) and placed in the lung contralateral to the tumor. 27 texture features [nine from intensity histogram, eight from the gray-level co-occurrence matrix (GLCM) and ten from the gray-level run-length matrix (GLRM)] were extracted from the peri-tumoral region (uniform 30 mm expansion around the GTV in the lung). The Bland-Altman analysis was applied to measure the normalized range of agreement (nRoA) for each feature when GTV size varied. A feature was considered as robust when its nRoA was less than the threshold (100%), which was chosen at the nRoA of the volume of the peri-tumoral region with modification based on the cumulative graph of features vs. nRoA. A feature was regarded as not correlated with the volume of the peri-tumoral region when their correlation was lower than 0.70.
Results: 16 of the 27 texture features were identified as robust. All intensity histogram features were robust except sum and kurtosis. All GLCM features were robust except cluster shade, cluster prominence, and Haralick's Correlation. Five GLRM features (two run emphasis and three high gray-level emphasis) were robust while the other five (two nonuniformity and three low gray-level emphasis) were nonrobust. None of the robust features was correlated with the volume of the peri-tumoral region. No feature showed statistically significant differences (P<0.05) with respect to GTV location (upper vs. lower lobe).
Conclusion: We identified 16 robust normal lung CT texture features that can be further examined for the prediction of RILD. Particularly, GLRM high gray-level emphasis features were robust and characterized the radiologic manifestations of pulmonary abnormalities.
Quantitative Image Feature Analysis of Multiphase Liver CT for Hepatocellular...Wookjin Choi
To identify the effective quantitative image features (radiomics features) for prediction of response, survival, recurrence and metastasis of hepatocellular carcinoma (HCC) in radiotherapy.
Lung Nodule Segmentation in CT Images using Rotation Invariant Local Binary P...IDES Editor
As lung cancer is the leading cause of cancer death, computed tomography (CT) scans of the thorax are widely applied in diagnosis to identify lung cancer. In this paper, a rotation-invariant Local Binary Pattern (LBP) technique is used to segment various lung nodules from lung CT cancer data sets. It is tested on various lung data sets from the teaching files of the Casimage database and the National Cancer Institute (NCI) National Biomedical Imaging Archive (NBIA). The results show segmented nodules with clear boundaries, which is helpful in the diagnosis of lung cancer. Further, the results are compared with the watershed segmentation method, showing that the LBP-based method yields better segmentation accuracy.
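A numpy-only illustration of the rotation-invariant LBP code underlying such methods (scikit-image's `local_binary_pattern` with `method="ror"` computes the same idea). This is a sketch of the texture code, not the paper's segmentation pipeline:

```python
import numpy as np

def ri_lbp8(img):
    """Rotation-invariant 8-neighbour LBP code for each interior pixel."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    c = img[1:-1, 1:-1]
    # the 8 neighbours sampled in circular (clockwise) order
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    bits = np.stack([img[1+dy:h-1+dy, 1+dx:w-1+dx] >= c for dy, dx in offs])
    # minimise the 8-bit code over all circular bit rotations
    best = np.full(c.shape, 255, dtype=np.int64)
    for r in range(8):
        rolled = np.roll(bits, r, axis=0)
        code = sum(rolled[i].astype(np.int64) << i for i in range(8))
        best = np.minimum(best, code)
    return best

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32))
h1 = np.bincount(ri_lbp8(img).ravel(), minlength=256)
h2 = np.bincount(ri_lbp8(np.rot90(img)).ravel(), minlength=256)
print(bool(np.array_equal(h1, h2)))  # code histogram unchanged under 90-degree rotation
```

Because each pattern is mapped to the minimum over its bit rotations, rotating the image by 90 degrees leaves the distribution of codes unchanged, which is the property that makes the descriptor useful across arbitrarily oriented nodules.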
PERFORMANCE EVALUATION OF TUMOR DETECTION TECHNIQUES ijcsa
Automatic segmentation of tumors plays a vital role in diagnosis and surgical planning. This paper deals with techniques for detecting hepatic tumors in computed tomography (CT) images. The main aim of this work is to analyze the performance of three tumor detection techniques: knowledge-based constraints, the graph cut method, and the gradient vector flow active contour. The three techniques are evaluated using sensitivity, specificity, and accuracy. The evaluation shows that the knowledge-based constraints method performs better than the graph cut method and the gradient vector flow active contour.
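The three evaluation metrics used in such comparisons reduce to counts from a confusion matrix; a minimal sketch on toy ground-truth and predicted masks:

```python
import numpy as np

def seg_metrics(truth, pred):
    """Sensitivity, specificity, and accuracy from boolean masks."""
    truth, pred = np.asarray(truth, bool), np.asarray(pred, bool)
    tp = np.sum(truth & pred)       # tumour voxels correctly detected
    tn = np.sum(~truth & ~pred)     # background correctly rejected
    fp = np.sum(~truth & pred)      # false detections
    fn = np.sum(truth & ~pred)      # missed tumour voxels
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / truth.size
    return sensitivity, specificity, accuracy

truth = np.array([1, 1, 1, 1, 0, 0, 0, 0], bool)
pred  = np.array([1, 1, 1, 0, 0, 0, 1, 0], bool)
print(seg_metrics(truth, pred))  # (0.75, 0.75, 0.75)
```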
Aggressive Lung Adenocarcinoma Subtype Prediction Using FDG-PET/CT RadiomicsWookjin Choi
Purpose: To predict the histopathologic subtypes with poor surgery prognosis in early stage lung adenocarcinomas using CT and PET radiomics.
Methods: We retrospectively enrolled 53 patients with stage I lung adenocarcinoma who underwent both diagnostic CT and 18F-fluorodeoxyglucose (FDG) PET/CT before complete surgical resection of the tumors. Tumors were manually contoured by a physician on both the diagnostic CT and the attenuation CT of PET/CT. A total of 170 radiomics features were extracted on both PET and CT images to design predictive models for two histopathologic endpoints: (1) tumors with solid or micropapillary predominant subtype (aggressiveness), and (2) tumors with micropapillary component more than 5% (MIP5). We used least absolute shrinkage and selection operator (LASSO) as a model building method coupled with a class separability feature selection (CSFS) method. For an unbiased model estimate, a 10-fold cross validation approach was used. The area under the curve (AUC) and prediction accuracy were employed to evaluate the performance of the model. P-values were computed using the Wilcoxon rank-sum test.
Results: Of the 53 patients, 9 and 15 had tumors with aggressiveness and MIP5, respectively. For both endpoints, LASSO models with two PET radiomics features achieved the best performance. For aggressiveness, the LASSO model with PET Cluster Shade and PET 2D Variance resulted in 77.6±2.3% accuracy and 0.71±0.02 AUC (P = 0.011). For MIP5, the LASSO model with PET Eccentricity and PET Cluster Shade resulted in 69.6±3.1% accuracy and 0.68±0.04 AUC (P=0.014). The PET Cluster Shade was commonly selected in both models. Cluster shade is a texture feature that measures the skewness of the co-occurrence matrix. Higher PET cluster shade predicted that the tumor was more aggressive and more likely MIP5.
Conclusion: We showed that PET/CT radiomics features can predict tumor aggressiveness.
Funding Support, Disclosures, and Conflict of Interest: This work was supported in part by the National Cancer Institute Grants R01CA172638.
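A hedged sketch of the model-building recipe above: an L1-penalized (LASSO-style) logistic model scored by cross-validated AUC. Random stand-ins replace the 170 PET/CT features (the named features such as PET Cluster Shade are not reproduced here), and the regularization strength is illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 170))           # 100 tumours x 170 synthetic radiomics features
y = (X[:, 0] - X[:, 1] > 0).astype(int)   # toy endpoint driven by two features

# the L1 penalty zeroes most coefficients, mimicking LASSO feature selection
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
auc = cross_val_score(clf, X, y, cv=10, scoring="roc_auc")
print(round(auc.mean(), 2))
```

With far more features than samples, the L1 penalty is what keeps the model from overfitting: only a handful of coefficients survive, and the 10-fold AUC estimates how well those generalize.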
Radiomics Analysis of Pulmonary Nodules in Low Dose CT for Early Detection of...Wookjin Choi
The International Journal of Engineering and Science (The IJES)theijes
This document summarizes a research paper on developing a computer-aided diagnosis system for early detection of liver cancer from CT chest images. The proposed system involves extracting features from segmented liver regions of CT images using techniques like noise removal, segmentation, and morphological operations. Features are then extracted and can be classified using Hidden Markov Models to identify liver cancer cells at an early stage and improve diagnosis. The authors suggest future work to refine cancer cell classification and reduce time complexity for improved diagnosis confidence.
Prediction of lung cancer is a most challenging problem because of the structure of cancer cells, most of which overlap one another. Image processing techniques are widely used for the prediction of lung cancer, as well as for early detection and treatment. Because various features are extracted from the images to predict lung cancer, pattern recognition based approaches are useful. Here, a comprehensive review and summary of previous research on the prediction of lung cancer using image processing techniques is presented.
Artificial Intelligence in Radiation OncologyWookjin Choi
The document discusses artificial intelligence applications in radiation oncology. It begins with acknowledgements and then outlines topics including radiomics decision support tools, automatic delineation and variability analysis, and applications like lung cancer screening, tumor response prediction, and aggressive lung adenocarcinoma subtype prediction. Radiomics frameworks and deep learning models are presented. Results show potential for AI to provide quantitative imaging biomarkers and improve outcomes in areas like screening, treatment planning, and response assessment.
IRJET- Image Processing based Lung Tumor Detection System for CT ImagesIRJET Journal
This document presents a method for detecting lung tumors in CT scan images using image processing techniques. The proposed method involves preprocessing images using median filtering for noise removal and contrast adjustment for enhancement. The lungs are then segmented from the images using mathematical morphology. Geometric and textural features are extracted from the segmented region of interest and used as input for an SVM classifier to detect lung cancer. The methodology was tested on a dataset from The Cancer Imaging Archive and was able to successfully detect lung tumors in CT images.
Image-guided liver cancer modeling for computer-aided diagnosis and treatmentAntoine Vacavant
HCC (hepatocellular carcinoma) is the most common primary liver cancer, and the third leading cause of death worldwide. Diagnosis is generally conducted through various medical imaging modalities (ultra-sound, CT, MRI) and, depending on the characterization of HCC (number and size of nodules, early or later staging, etc.), different therapeutic strategies can be delivered to the patient: radiofrequency ablation, liver resection surgery, chemo-embolization, etc. During this talk, I first present a novel ontological approach to represent both HCC detection, staging and treatment into a single information system framework, enabling a complete digital patient follow-up. Since representing numerically liver's geometry is an important concern in such system, I then expose our most recent algorithms devoted to reconstruct the liver volume and inner vessels in 3-D from CT and MRI data. We also see different applications employing the outcomes provided by these tools. (1) The standardized Couinaud liver representation can be calculated thanks to the shape of the vasculature, and permits to locate HCC nodules in a reproducible way. (2) By isolating liver volume, we have proposed to automatically detect HCC tissues within DCE-MRI (dynamic contrast-enhanced MRI) sequences by two approaches: SVM-based classification and adapted U-Net deep learning. (3) We have also studied the numerical simulation of hepatic perfusion by considering finite-element models of the liver and its vessels. This talk finishes by exposing our future prospects in improving our methodologies and combining them for proposing novel computer-aided HCC diagnosis and treatment systems.
Radiomics: Novel Paradigm of Deep Learning for Clinical Decision Support towa...Wookjin Choi
‘Radiomics’ is a novel process in imaging informatics for identifying the ‘radiome’ when long-term clinical outcomes such as mortality are not immediately available. It relies on first acquiring paired gene expression data and medical images at diagnosis from a study cohort, and then leveraging public gene expression data containing clinical outcomes from a closely matched population toward personalized medicine (Stanford and Harvard University).
Dual energy CT in radiotherapy: Current applications and future outlookWookjin Choi
This document summarizes a review article on the current and future applications of dual energy CT (DECT) in radiation therapy. It describes how DECT can be used to estimate electron density, decompose tissue into effective atomic numbers, and quantify contrast material for improved dose calculations, tissue characterization, and treatment planning. Several clinical applications are discussed, including more accurate dose calculations for brachytherapy and proton therapy, metal artifact reduction, and normal tissue assessment. The document concludes that DECT has the potential to improve accuracy at various stages of the radiotherapy workflow and will likely be used more in the future to provide additional diagnostic information over single energy CT.
Tıp alanında kanserli hücrelerin tespiti @ hasan abdiHassan-k Abdi
This document summarizes a presentation on lung cancer image processing and detection. It discusses several medical imaging technologies and their role in cancer care. It then describes the specific approach used for lung cancer detection, which involves image enhancement techniques like Gabor filtering and fast Fourier transforms. Next, it covers image segmentation using thresholding to divide images into regions. Finally, it discusses features extraction from images to detect lung cancer presence using binarization and masking approaches.
We propose a novel imaging biomarker of lung cancer relapse from 3-D texture analysis of CT images. Three-dimensional morphological nodular tissue properties are described in terms of 3-D Riesz-wavelets. The responses of the latter are aggregated within nodular regions by means of feature covariances, which leverage rich intra- and inter-variations of the feature space dimensions. The obtained Riesz-covariance descriptors lie on a manifold governed by Riemannian geometry requiring specific geodesic metrics to locally approximate scalar products. The latter are used to construct a kernel for support vector machines (SVM). The effectiveness of the presented models is evaluated on a dataset of 92 patients with non-small cell lung carcinoma (NSCLC) and cancer recurrence information. Disease recurrence within a timeframe of 12 months could be predicted with an accuracy above 80%, highlighting the importance of covariance-based texture aggregation. At the end of the talk, computer tools will be presented to easily extract 3D radiomics quantitative features from PET-CT images.
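The covariance-descriptor idea can be sketched as follows: each region's feature responses are summarized by a covariance matrix (a symmetric positive-definite matrix living on a Riemannian manifold), and a Gaussian kernel in the log-Euclidean metric makes these descriptors usable in an SVM. Random feature maps stand in for the Riesz-wavelet responses; this is an illustrative approximation, not the authors' model:

```python
import numpy as np

def cov_descriptor(feats, eps=1e-6):
    """Covariance of d feature channels over a region, regularised to be SPD."""
    C = np.cov(feats, rowvar=False)
    return C + eps * np.eye(C.shape[0])

def spd_log(M):
    """Matrix logarithm of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.log(w)) @ V.T

def log_euclidean_kernel(A, B, sigma=1.0):
    """Gaussian kernel in the log-Euclidean metric (usable as an SVM kernel)."""
    d2 = np.sum((spd_log(A) - spd_log(B)) ** 2)   # squared Frobenius distance of logs
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)
region_a = rng.normal(size=(500, 6))        # 500 voxels x 6 texture channels (synthetic)
region_b = rng.normal(size=(500, 6)) * 2.0  # a region with different texture statistics
Ca, Cb = cov_descriptor(region_a), cov_descriptor(region_b)
print(round(log_euclidean_kernel(Ca, Ca), 3))            # identical descriptors give 1.0
print(round(log_euclidean_kernel(Ca, Cb, sigma=2.0), 3))
```

Taking the matrix logarithm flattens the SPD manifold locally, so ordinary Euclidean distances (and hence a standard Gaussian kernel) become a reasonable stand-in for the geodesic metric the abstract mentions.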
Peter Hamilton on Next generation Imaging and Computer Vision in Pathology: p...Cirdan
Automated image analysis has had a long history but continues to grow with massive improvements in algorithms, speed, performance, and with emerging opportunities for high throughput tissue biomarker analysis and automated decision support for primary diagnostics. Of particular importance is the development of computer vision and image analysis for H&E stained samples. This talk will outline recent advances in automated tissue analysis for biomarker discovery and diagnostics and how adoption of digital pathology will drive the demand for quantitative imaging and decision support.
As an example, PathXL have developed TissueMark for the automated identification and analysis of tumour in lung, colon, breast and prostate cancer digital H&E slides. The conventional pathological estimation of % tumour nuclei in H&E samples shows gross variation between pathologists, undermining the quality of next generation sequencing, molecular testing and patient therapy and potential of false negative diagnoses. TissueMark uses a combination of pattern recognition, glandular analysis and nuclear segmentation to identify premalignant and invasive cancer patterns in H&E stained tissues and use this to assess tumour cell numbers and annotate samples for nucleic acid extraction and molecular profiling. Benchmark data was generated to validate TissueMark technology and showed concordance of automated data with manual counts, accelerating tumour markup and improving sample quality assessment. This represents an example of how automated imaging of tissue samples can be of immense value in quantitative tumour analysis for molecular diagnostics, thereby improving reliability in discovery and diagnostics.
This together with other examples in pathology research and practice will demonstrate that next generation tissue imaging technology in digital pathology could radically change how pathology is practiced.
This document provides an overview of the book "Image-guided Radiotherapy of Lung Cancer". It discusses how image-guided radiotherapy (IGRT) using techniques like PET/CT, 4D-CT, gated radiotherapy, IMRT and proton radiotherapy have introduced a new era for radiotherapy treatment of lung cancer. The book focuses on these novel IGRT approaches and provides recommendations on dose/fractionation, target volume delineation, treatment techniques and normal tissue tolerances. It aims to establish disease stage-specific guidelines to help radiation oncologists incorporate these advanced techniques into clinical practice for improved patient outcomes.
This document describes a Lung Tumor Diagnosis aiding system using Content-Based Image Retrieval techniques. The system aims to rapidly detect lung tumors, even small ones, by segmenting the lung region from CT images, detecting potential tumor candidates, extracting tumor features, and matching cases to retrieve similar diagnoses from a database. The motivation is that lung cancer has a high mortality rate but early detection increases 5-year survival rates. The system overview outlines segmenting the lung, detecting tumors, extracting features, and matching for retrieval. Future work includes completing tumor feature extraction and the matching/retrieval phase.
Presented by Adrien Depeursinge, PhD, at MICCAI 2015 Tutorial on Biomedical Texture Analysis (BTA), Munich, Oct 5 2015.
Texture-based imaging biomarkers complement focal, invasive biopsy based biomarkers by providing information on tissue structure over broad regions, non-invasively, and repeatedly across multiple time points. Texture has been used to predict patient survival, tissue function, disease subtypes and genomics (imagenomics and radiogenomics). Nevertheless, several challenges remain, such as: the lack of an appropriate framework for multi-scale, multi-spectral analysis in 2D and 3D; localization uncertainty of texture operators; validation; and, translation to routine clinical applications.
Automatic Detection of Non-Proliferative Diabetic Retinopathy Using Fundus Im...iosrjce
Diabetic retinopathy (DR) is the prime cause of blindness in the working-age population of the world. A detection method is proposed to detect dark or red lesions such as microaneurysms and hemorrhages in fundus images. The first tool developed during this work collects lesion data and was used by the ophthalmologist to mark images for the database, while the automatic diagnosis displays its result in a more user-friendly interface, as shown in chapter three of this report. The primary aim of this project is to develop a system able to identify patients with BDR and PDR from either a colour or a grey-level image of the patient's retina. The algorithm was tested on fundus images. The receiver operating characteristic (ROC) was determined for red-spot lesions and bleeding; crossover points were only detected, leaving further classification as future work needed to complete this global project. The sensitivity and specificity calculated for the algorithm are 96.3% and 95.1%, respectively.
Machine Learning in Pathology Diagnostics with Simagis Livekhvatkov
The Simagis Live Digital Pathology platform employs the latest generation of visual recognition technology with deep learning, bringing game-changing applications to pathology cancer diagnostics.
CANCER CELL DETECTION USING DIGITAL IMAGE PROCESSINGkajikho9
The document describes a lung cancer detection system using digital image processing. It discusses preprocessing techniques like Gabor filtering and FFT that are applied to enhance images. Segmentation methods like thresholding and marker-controlled watershed are used to segment lung regions. Features are extracted using binarization and masking approaches to detect cancer presence. The system analyzes images and indicates whether cases are normal or abnormal by detecting white masses inside lung regions, helping diagnose lung cancer at early stages.
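The global-thresholding step in pipelines like this is often Otsu's method, which picks the threshold maximizing between-class variance; a plain-numpy sketch on a synthetic bimodal intensity distribution (the exact thresholding variant used by the system is not specified, so this is an assumed stand-in):

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Threshold maximizing between-class variance (Otsu's method)."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                          # background class weight
    w1 = 1.0 - w0                              # foreground class weight
    cum_mean = np.cumsum(p * centers)
    mu0 = cum_mean / np.where(w0 > 0, w0, 1)   # background mean intensity
    mu1 = (cum_mean[-1] - cum_mean) / np.where(w1 > 0, w1, 1)
    between = w0 * w1 * (mu0 - mu1) ** 2       # between-class variance
    return centers[np.argmax(between)]

rng = np.random.default_rng(0)
# dark lung parenchyma around 40, bright masses around 200 (arbitrary units)
img = np.concatenate([rng.normal(40, 10, 2000), rng.normal(200, 20, 500)])
t = otsu_threshold(img)
print(40 < t < 200)   # the threshold falls between the two intensity modes
```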
Cervical Spine Range of Motion Measurement Utilizing Image Analysis - VISAPP2022sugiuralab
This study developed a system to automatically measure cervical spine range of motion (CRoM) angles from cervical spine X-ray images using deep learning. The system used Mask R-CNN for image segmentation and measured angles between vertebrae similarly to manual methods. An evaluation found the average error was 3.5 degrees with a standard deviation of 2.8 degrees, comparable to measurements by residents. However, accuracy was poorer for the C1/C2 vertebrae. Future work will explore improving segmentation and developing computer-aided diagnosis of cervical issues.
Novel Functional Delta-Radiomics for Predicting Overall Survival in Lung Canc...Wookjin Choi
AAPM2023_SU-300-IePD-F6-4
Purpose: Traditional methods of evaluating cardiotoxicity rely on cardiac radiation doses and do not incorporate functional imaging. Cardiac functional imaging can improve the ability to provide early prediction for clinical outcomes for lung cancer patients undergoing radiotherapy. FDG-based PET/CT imaging is routinely obtained for staging and disease assessment after treatment. Although FDG PET/CT scans are typically used to evaluate the tumor, studies have shown that the PET cardiac signal is predictive of clinical outcomes. Our study aimed to develop novel functional cardiac delta radiomics using pre and post-treatment FDG PET/CT scans to predict for overall survival (OS).
Methods: We conducted a study of 109 lung cancer patients who underwent standard FDG-PET/CT scans pre- and post-radiotherapy. Data from ACRIN 6668 (N=70) and an investigator-initiated lung cancer trial (N=39) for functional avoidance radiotherapy were used. The heart was delineated, and 200 cardiac CT and PET functional radiomics features were selected. Delta radiomics was calculated as the change between pre- and post-PET/CT. The data were divided into 80%/20% training/test set, and feature reduction was performed using Wilcoxon test, hierarchical clustering, and recursive feature elimination. A Gradient Boosting Classifier machine learning model evaluated the ability of the delta PET/CT cardiac radiomics to predict for OS using 10-fold cross-validation for training and area-under-the-curve (AUC) for model assessment.
Results: Median survival was 431 days (range 144 to 1640 days). 4 clinically relevant delta features were identified: pre-CT_Maximum, post-CT_Minimum, delta-CT_GLRM_Run_Variance, delta-PET_GLRM_Run_Entropy. The model showed an AUC of 0.91 on the training set and an AUC of 0.87 on the test set.
Conclusion: This is the first study to evaluate functional cardiac delta radiomic features from standard PET/CT scans with data showing good predictive AUC for OS. If validated, this work provides automated methods to provide functional cardiac information for clinical outcome prediction in lung cancer patients.
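The training recipe above (feature reduction ending in recursive feature elimination, then a Gradient Boosting Classifier scored by AUC) can be sketched as follows on synthetic delta-radiomics features; sizes, fold counts, and hyperparameters are scaled down for illustration and are not the study's settings:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(7)
X = rng.normal(size=(120, 20))            # 120 patients x 20 delta-radiomics features (synthetic)
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # toy survival label driven by two features

model = Pipeline([
    # recursive feature elimination keeps the 4 most important features
    ("rfe", RFE(GradientBoostingClassifier(n_estimators=30, random_state=0),
                n_features_to_select=4, step=4)),
    ("clf", GradientBoostingClassifier(n_estimators=30, random_state=0)),
])
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(round(auc.mean(), 2))
```

RFE repeatedly drops the least important features according to the boosted trees' feature importances, mirroring the abstract's reduction to a handful of clinically relevant delta features before the final classifier is scored.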
This document describes an approach to automatically detect tuberculosis from chest radiographs using MATLAB. It involves segmenting the lung region, extracting features from the lung field, and classifying the image as normal or abnormal using a trained classifier. The methodology includes preprocessing steps like filtering and thresholding. Regions of interest are identified and bounding boxes/centroids are calculated. The goal is to develop an automated screening system that can assist radiologists in tuberculosis detection.
Gaussian kernel based anatomically-aided diffuse optical tomography reconstruction. The document introduces a kernel method for diffuse optical tomography (DOT) image reconstruction that uses anatomical guidance without requiring image segmentation. A Gaussian kernel is used to relate absorption coefficients between neighboring nodes based on their features. Simulation results show the kernel method achieves comparable or better image quality than soft-prior methods while being more robust to incorrect priors. Experimental validation using a tissue phantom also shows the kernel method can provide anatomical guidance without segmentation. Future work will investigate applying this method to clinical breast imaging data.
Multiple Analysis of Brain Tumor Detection based on FCMIRJET Journal
This document summarizes a research paper that proposes a method for detecting brain tumors in MRI images using fuzzy c-means clustering. It begins with an introduction to brain tumors and MRI imaging. It then describes the proposed method which includes pre-processing the MRI images, segmenting the images using fuzzy c-means clustering to identify tumor regions, extracting features using fuzzy rules, and analyzing the results to determine tumor size and location. The method is compared to previous work and shown to improve accuracy, precision, and recall in brain tumor detection. In conclusion, preprocessing helps identification, fuzzy c-means segmentation identifies tumor pixels, and the overall method can detect and analyze brain tumors in MRI images.
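The fuzzy c-means step can be sketched in a few lines of numpy. One-dimensional pixel intensities stand in for the MRI data, and the centres are initialised at the intensity extremes to make the toy run deterministic; this is an illustration of the algorithm, not the paper's implementation:

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, iters=50):
    """Plain FCM on scalar intensities: returns cluster centres and memberships."""
    x = np.asarray(x, float).reshape(-1, 1)
    centres = np.linspace(x.min(), x.max(), c).reshape(-1, 1)
    for _ in range(iters):
        d = np.abs(x - centres.T) + 1e-12            # n x c distances to centres
        inv = d ** (-2.0 / (m - 1))
        U = inv / inv.sum(axis=1, keepdims=True)     # fuzzy membership matrix
        um = U ** m
        centres = (um.T @ x) / um.sum(axis=0)[:, None]
    return centres.ravel(), U

rng = np.random.default_rng(0)
# dark normal tissue around 50, bright "tumour" pixels around 200 (toy values)
pixels = np.concatenate([rng.normal(50, 5, 300), rng.normal(200, 10, 60)])
centres, U = fuzzy_c_means(pixels)
print(np.round(np.sort(centres)))
```

Unlike hard k-means, each pixel gets a graded membership in every cluster, which is why FCM copes better with the partial-volume voxels at a tumour boundary.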
Heidelberg Retinal Tomography II (HRT II) is a diagnostic imaging technique that uses confocal laser scanning to generate 3D topographic images of the optic disc and retinal nerve fiber layer. It provides quantitative measurements of parameters like cup-to-disc ratio, rim area, and cup shape that are useful for diagnosing and monitoring glaucoma. The HRT II obtains multiple optical sections to build a 3D image with a resolution of 10 micrometers per pixel. It has good test-retest reproducibility and can detect glaucomatous nerve damage earlier than conventional techniques. However, small optic discs continue to present challenges for accurate classification of glaucoma status.
This document discusses factors that impact the image quality in CT scans. It describes key scanning parameters like milliampere, scan time, slice thickness, and reconstruction algorithm that determine image quality. Higher mA and shorter scan times improve image quality but increase radiation dose. Thinner slice thickness and smaller pixel size enhance spatial resolution. The modulation transfer function is used to evaluate a system's ability to resolve fine detail spatially. Selection of these parameters requires balancing optimal image quality with minimizing radiation dose to the patient.
The document discusses a method for classifying brain tumor images using artificial neural networks. It involves three main steps: 1) preprocessing MRI images using morphological operations to remove noise, 2) extracting texture and statistical features using GLCM and GLRLM techniques, and 3) classifying images using a probabilistic neural network (PNN) and measuring accuracy. Features are extracted from 50 brain tumor images and 65 images are tested, achieving a classification accuracy of up to 98%.
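The GLCM texture features mentioned above come from counting co-occurring gray-level pairs at a fixed offset. A small illustrative sketch of a GLCM and two classic statistics derived from it (a simplification; the paper's GLRLM and PNN stages are not shown):

```python
from collections import Counter

def glcm(img, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset, normalized to probabilities."""
    pairs = Counter()
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                pairs[(img[y][x], img[ny][nx])] += 1
    total = sum(pairs.values()) or 1
    return {k: v / total for k, v in pairs.items()}

def glcm_contrast(p):
    """Contrast = sum_{i,j} (i-j)^2 p(i,j); zero for a perfectly uniform texture."""
    return sum(((i - j) ** 2) * v for (i, j), v in p.items())

def glcm_energy(p):
    """Energy (angular second moment) = sum_{i,j} p(i,j)^2."""
    return sum(v * v for v in p.values())
```

Feature vectors built from such statistics over several offsets are what the classifier consumes.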
This document proposes a computer-aided lung cancer classification system using curvelet features and an ensemble classifier. It first pre-processes CT images using adaptive histogram equalization to improve contrast. Then it segments the images using kernelized fuzzy c-means clustering. Curvelet features are extracted from the segmented regions and an ensemble classifier is applied to classify regions as benign or malignant. The proposed approach achieves reliable and accurate classification results compared to existing methods, with better performance metrics like accuracy, sensitivity and specificity.
Wearable Accelerometer Optimal Positions for Human Motion Recognition (LifeTec... - sugiuralab
Wearable Accelerometer Optimal Positions for Human Motion Recognition. The 2020 IEEE 2nd Global Conference on Life Sciences and Technologies (LifeTech 2020), March 10-11, 2020
This document summarizes the work done by an intern during their summer internship in the Medical Physics Department of Radiology. The intern conducted research to predict cancer outcomes based on breast lesion features. Key work included feature extraction from mammograms, analyzing features to differentiate malignant and benign lesions using ROC analysis and LDA, and exploring features to predict invasive vs. non-invasive cancer. Top predictive features were FWHM ROI, diameter, and margin sharpness. The intern gained skills in medical image analysis, statistical analysis, and evaluating results to identify trends.
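The ROC analysis mentioned above can be summarized by the area under the ROC curve, which equals the probability that a malignant case scores above a benign one (the Mann-Whitney formulation). A small illustrative sketch, not taken from the internship report:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC as P(score_pos > score_neg), counting ties as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 1.0 indicates a feature that perfectly separates malignant from benign lesions; 0.5 indicates no discriminative value.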
Automatic System for Detection and Classification of Brain Tumors - Fatma Sayed Ibrahim
Automatic system for brain tumor detection based on DICOM MRI images.
Surveying methodologies from preprocessing to classification.
Implementing a comparative study.
Proposed technique with the highest accuracy and least elapsed time.
Intensity Modulated Radiotherapy (IMRT) - Dr. S. Sachin - SACHINS700327
This document discusses intensity modulated radiotherapy (IMRT). It begins by outlining some limitations of conventional radiotherapy and 3D conformal radiotherapy, such as inaccurate target delineation and large safety margins. IMRT allows for more conformal dose distributions, tumor dose escalation, and reduced dose to normal tissues. The document then describes various aspects of IMRT planning and delivery, including inverse planning, beam angle and energy selection, dose calculation algorithms, quality assurance procedures, and treatment delivery.
This research optimized parameters for breast cancer imaging with diffuse optical tomography using the NIRFAST software. The forward mesh parameters of node size, depth, and width were optimized to 0.75mm, 60mm, and 60mm, respectively, for a source-detector separation of 15mm. A regularization parameter of λ=1 provided the best reconstruction quality. Having more source-detector pairs improved absorption coefficient reconstruction accuracy. Future work will address frequency-domain measurements and crosstalk between absorption and scattering coefficients.
Using HOG Descriptors on Superpixels for Human Detection of UAV Imagery - Wai Nwe Tun
This document presents a system for human detection in UAV imagery using HOG descriptors calculated for superpixels extracted via SLIC clustering. The system extracts superpixels, calculates HOG descriptors for each, and classifies the descriptors using an AdaBoost classifier. Evaluation on two datasets shows the superpixel-based approach achieves 3% higher accuracy than the original HOG method while requiring more computational time for training.
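At the heart of a HOG descriptor is a magnitude-weighted histogram of gradient orientations within a cell. A minimal illustrative sketch of that core computation (central differences, unsigned orientations; not the full SLIC + AdaBoost pipeline from the paper):

```python
import math

def hog_cell(patch, bins=9):
    """Unsigned gradient-orientation histogram for one cell of a HOG descriptor."""
    h, w = len(patch), len(patch[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]
            gy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned orientation
            hist[int(ang / (180.0 / bins)) % bins] += mag
    return hist
```

In the superpixel variant, one such histogram is computed per superpixel region instead of per rectangular cell, and the resulting descriptors are fed to the AdaBoost classifier.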
CT based Image Guided Radiotherapy - Physics & QA - Sambasivaselli R
This document discusses quality assurance for CT-based image guided radiotherapy. It describes existing technologies like kV CBCT, MV CBCT and XVI imaging. It provides details on the XVI system including its x-ray generator, imaging panel, image acquisition and reconstruction process. The document outlines various quality assurance tests for geometric accuracy, image quality and registration including uniformity, spatial resolution and accuracy tests using phantoms.
This document discusses a hybrid machine learning approach for classifying liver cancer in ultrasound images using MATLAB. It begins with an introduction to image processing and its purposes. Next, it discusses the existing methodology which uses GLCM and NHSVM for feature extraction and classification. The proposed methodology improves upon this by using a hybrid of GLCM, LBP, SVM and KNN for feature extraction and classification. It then describes the various modules involved - input image, preprocessing, segmentation, feature extraction, classification and performance estimation. Finally, it provides a tentative schedule and references. The goal is to improve classification performance by utilizing a hybrid machine learning approach.
This document discusses virtual simulation in radiation therapy. It begins by explaining the goal of localization and optimization of dose distribution during radiation therapy simulation. It then describes the process and limitations of conventional simulation using diagnostic X-rays. The rest of the document focuses on CT-simulation, including its components, features, advantages over conventional simulation like improved visualization in 3D and ability to simulate non-coplanar beams. It also discusses how treatment simulation and planning can be done using digitally reconstructed radiographs on the virtual simulation workstation.
Multi-detector CT cerebral angiography - Ehab Elftouh
This document discusses techniques for computed tomography (CT) angiography. It covers advances in CT technology that have improved angiography, including faster scan speeds and thinner slices. Optimal CT angiography depends on scan technique factors like protocol and contrast injection, as well as image post-processing techniques. Newer multi-detector CT machines allow covering volumes more quickly and with higher resolution. Methods like multi-planar reformation and volume rendering help visualize vascular structures from CT image data.
This document discusses intensity-modulated radiotherapy (IMRT) and the inverse planning process. It begins by outlining the clinical rationale for IMRT, including dose escalation to improve tumor control and reducing normal tissue complications. It then describes the complex IMRT planning process, which involves patient immobilization, image acquisition, structure segmentation, and treatment optimization. The key aspects of inverse planning are discussed, including the use of an objective function to determine the best achievable dose distribution and corresponding beam weights. Various optimization algorithms are also reviewed. The document concludes that inverse planning provides a customized plan but not necessarily an optimal one, and that treatment optimization is an important part of clinical IMRT implementation.
Similar to Computer aided detection of pulmonary nodules using genetic programming (20)
Deep Learning-based Histological Segmentation Differentiates Cavitation Patte... - Wookjin Choi
Unsupervised segmentation (unlabeled regions of interest, ROIs) and autoencoder (AE)-based classification were used to classify differences in cavitation patterns in knees and digits using the stained images (n=20-30 images/group).
Each image was divided into 256 x 256 pixel patches, and a convolutional neural network (CNN)-based unsupervised segmentation was used to identify ROIs. These patches were subsequently fed into a CNN-based AE whose latent space layer was connected to a classifier for input patch classification.
The AE was trained using the ROIs identified by the unsupervised segmentation, and the image classes were used to train the classifier. Whole image classifications were determined by maximum voting of the patch results and evaluated by accuracy.
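The maximum-voting step described above reduces many per-patch predictions to a single whole-image label. A minimal illustrative sketch (class names are invented for the example):

```python
from collections import Counter

def vote_image_class(patch_predictions):
    """Whole-image label by maximum voting over per-patch classifier outputs."""
    return Counter(patch_predictions).most_common(1)[0][0]

def voting_accuracy(images_patch_preds, true_labels):
    """Fraction of images whose voted label matches the ground truth."""
    voted = [vote_image_class(p) for p in images_patch_preds]
    return sum(v == t for v, t in zip(voted, true_labels)) / len(true_labels)
```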
Artificial Intelligence To Reduce Radiation-induced Cardiotoxicity In Lung Ca... - Wookjin Choi
Traditionally, radiation-induced cardiotoxicity has been studied using cardiac radiation doses rather than functional imaging. We developed artificial intelligence (AI) models based on novel cardiac delta radiomics using pre- and post-treatment FDG-PET/CT scans to predict overall survival in lung cancer patients undergoing radiotherapy. We identified four clinically relevant delta radiomics features with the AI prediction models. The best model achieved an AUC of 0.91 on the training set and 0.87 on the test set. We are a pioneering group in AI for functional cardiac imaging. If validated, this approach will enable the use of standard PET/CT scans as functional cardiac imaging with good predictive AUC for OS, and will provide automated methods for extracting functional cardiac information for clinical outcome prediction AI in lung cancer patients.
Novel Functional Radiomics for Prediction of Cardiac PET Avidity in Lung Canc... - Wookjin Choi
Purpose/Objective(s)
Traditional methods of evaluating cardiotoxicity focus solely on radiation doses to the heart and do not incorporate functional imaging information. Functional imaging has great potential to improve the ability to provide early prediction for cardiotoxicity for lung cancer patients undergoing radiotherapy. FDG-based PET/CT imaging is routinely obtained as part of standard staging work up for lung cancer patients. Although FDG PET/CT scans are typically used to evaluate the tumor, imaging guidelines note that FDG PET/CT scans are an FDA-approved method to image for cardiac inflammation, and studies have noted that the PET cardiac signal can be predictive of clinical outcomes. The purpose of this work was to develop a radiomics model to predict clinical cardiac assessment of standard of care FDG PET/CT scans.
Materials/Methods
The study included 100 consecutive lung cancer patients treated with radiotherapy who underwent standard pre-treatment FDG-PET/CT staging scans. A clinician reviewed the PET/CT scans per clinical cardiac assessment guidelines and classified the cardiac uptake as: 0 = uniform diffuse, 1 = absent, 2 = heterogeneous, with event rates of 20%, 44%, and 35%, respectively. The heart was delineated and 200 novel functional radiomics features were selected to classify cardiac FDG uptake patterns. We divided the data into an 80% training set and a 20% test set to train and evaluate the classification models. Feature reduction was carried out using the Wilcoxon test (with Bonferroni adjusted p<0.05), hierarchical clustering, and Recursive Feature Elimination. Two automatic machine learning (AutoML) frameworks were used to determine classification models: a Random Forest Classifier (Tree-based Pipeline Optimization Tool, TPOT) and Linear Discriminant Analysis (AutoSklearn). 10-fold cross validation was carried out for training, and the accuracy of the models in predicting the clinical cardiac assessment is reported.
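The first feature-reduction step above filters features by a Bonferroni-adjusted significance threshold. A minimal illustrative sketch of that filter (the per-feature p-values would come from a test such as the Wilcoxon test; the hierarchical clustering and RFE stages are not shown):

```python
def bonferroni_select(p_values, alpha=0.05):
    """Indices of features whose p-value survives the Bonferroni-adjusted
    threshold alpha / n, where n is the number of features tested."""
    thresh = alpha / len(p_values)
    return [i for i, p in enumerate(p_values) if p < thresh]
```

Dividing the significance level by the number of tests controls the family-wise error rate when many features are screened at once.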
Results
Fifty-one independent radiomics features were reduced to 3 clinically pertinent features (PET 2D Skewness, PET Grey Level Co-occurrence Matrix Correlation, and PET Median) using feature reduction techniques. The model selected by TPOT showed 89.8% predictive accuracy in the cross validation of the training set and 85% predictive accuracy on the test set. The model selected by AutoSklearn showed 89.7% predictive accuracy in the cross validation of the training set and 80% predictive accuracy on the test set.
Conclusion
The novelty of this work is that it is the first study to develop and evaluate functional cardiac radiomic features from standard of care FDG PET/CT scans with the data showing good predictive accuracy with clinical imaging evaluation. If validated, the current work provides automated methods to provide functional cardiac information using standard of care imaging that can be used as an imaging biomarker for early clinical toxicity prediction for lung cancer patients.
Novel Deep Learning Segmentation Models for Accurate GTV and OAR Segmentation... - Wookjin Choi
Purpose/Objective(s)
MR-guided adaptive radiotherapy (MRgART) improves target coverage and organ-at-risk (OAR) sparing in pancreatic cancer radiation therapy (RT). Inter-fractional changes in patients undergoing RT require time intensive re-delineation of gross tumor volume (GTV) and OARs prior to adaptive optimization. Accurate automatic segmentation has the potential to significantly improve efficiency of the adaptive workflow. We hypothesized that state-of-the-art deep learning (DL) segmentation models could adequately segment GTV and OARs in both planning and daily fractional MR scans.
Materials/Methods
The study included 21 patients with pancreatic cancer treated with MRgART (10 Gy x 5 fractions). The planning MR as well as all daily MR images and registrations were collected (6 image sets per patient and a total of 126 image sets). The planning MR and fraction 1-4 image sets were used as the training set (N = 105), while the test set (N = 21) comprised images for fraction 5, to simulate the last step of incremental learning from planning to final fraction. Evaluated contours included the GTV, Small Bowel, Large Bowel, Duodenum, Left and Right Kidney, Liver, Spinal Cord, and Stomach. To mimic clinical conditions, contour accuracy was evaluated within the ring structure surrounding the PTV, inside of which daily adaptive re-contouring is applied (2 cm expansion in the cranio-caudal direction, 3 cm expansion otherwise). We evaluated three DL model architectures (SegResNet, SegResNet 2D, and SwinUNETR) to autosegment GTV and OARs. The segmentation models were trained on the training set using 5-fold cross-validation (CV) and quantitatively analyzed by comparing against clinically used contours with DICE scores. Qualitative analysis was performed by a radiation oncologist using a scoring scale: 1 = perfect, 2 = minor discrepancy, 3 = moderate discrepancy, and 4 = rejected.
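The DICE score used above to compare DL and clinical contours measures the overlap of two binary masks. A minimal illustrative sketch with masks represented as sets of voxel indices:

```python
def dice(mask_a, mask_b):
    """DICE coefficient 2|A∩B| / (|A|+|B|); 1.0 means perfect agreement."""
    if not mask_a and not mask_b:
        return 1.0
    inter = len(mask_a & mask_b)
    return 2.0 * inter / (len(mask_a) + len(mask_b))
```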
Results
Overall, the DL segmentations were in acceptable agreement with clinical contours. The best performing model was the SwinUNETR model with overall training DICE = 0.88±0.06, test DICE = 0.78±0.11, and qualitative score of 1.6±0.8. The agreement between the DL model and clinical segmentation for the GTV was 0.79±0.08, with a qualitative score of 2.2±0.9.
Conclusion
We report here the most comprehensive work on DL segmentation for pancreatic cancer MRgART, including quantitative and clinically-pertinent qualitative evaluations of 126 image sets and 3 DL architectures. Our data show good quantitative agreement between DL and clinical contours, and acceptable clinician evaluations for the majority of GTVs and OARs. The current work has great potential to significantly reduce a major bottleneck in the MRgART workflow for pancreatic cancer patients.
CIRDataset: A large-scale Dataset for Clinically-Interpretable lung nodule Ra... - Wookjin Choi
The CIRDataset provides a large-scale dataset of 956 annotated lung nodules with segmentations and classifications of spiculations and lobulations, which are important radiomic features for assessing malignancy. It aims to address the lack of publicly available datasets capturing these subtle radiological features typically assessed by radiologists but often smoothed over by deep learning segmentation models. The dataset is accompanied by code, models, and a pipeline to enable the development of AI systems for joint nodule segmentation, classification of spiculations/lobulations, and malignancy prediction using an end-to-end deep learning approach.
Artificial Intelligence in Radiation Oncology.pptx - Wookjin Choi
The document discusses artificial intelligence applications in radiation oncology, including automatic delineation of organs-at-risk using deep learning models like OARNet. It also discusses radiomics approaches for clinical decision support and outcomes prediction using features extracted from medical images with techniques like spiculation quantification for lung cancer screening.
Artificial Intelligence in Radiation Oncology - Wookjin Choi
This document discusses artificial intelligence applications in radiation oncology. It begins with acknowledgements and then outlines several AI applications including radiomics tools for lung cancer screening, tumor response prediction, and predicting aggressive lung adenocarcinoma subtypes. It also discusses using AI for automatic tumor delineation and quantification of delineation variability as well as local tumor morphological changes prediction and metabolic tumor volume changes. The document provides details on methods and results for several of these AI applications in radiation oncology.
Artificial Intelligence in Radiation Oncology - Wookjin Choi
- The document discusses the use of artificial intelligence and radiomics in radiation oncology. It presents frameworks for radiomics analysis involving image registration, tumor segmentation, feature extraction, and predictive modeling.
- Specific applications discussed include using radiomics for lung cancer screening and prediction of tumor response. Radiomics features combined with machine learning models show improved performance over clinical guidelines for assessing lung nodule malignancy.
- Methods are also presented for quantifying tumor characteristics like spiculation through image analysis and extracting interpretable radiomics features. This can provide semantic information to radiologists for assessment.
Automatic motion tracking system for analysis of insect behavior - Wookjin Choi
Undergraduate research.
We present a multi-object tracking system to track small insects such as ants and bees. Motion-based object tracking recognizes the movements of objects in videos using information extracted from the given video frames. We applied several computer vision techniques, such as blob detection and appearance matching, to track ants. Moreover, we discussed different object detection methodologies and investigated the various challenges of object detection, such as illumination variations and blob merge/split. The proposed system effectively tracked multiple objects in various environments.
Assessing the Dosimetric Links between Organ-At-Risk Delineation Variability ... - Wookjin Choi
Purpose: To determine the relative dosimetric impact of delineation variability (DV) when inter-observer and inter-technique planning variability (PV) and setup variability (SV) are considered.
Methods: 409 plans for a single head-and-neck patient from the 2017 Radiation Knowledge plan competition were used. Plans were created with Eclipse (N=227), Pinnacle (N=49), RayStation (N=25), Monaco (N=75), and TomoTherapy (N=33) using the delivery techniques conventional linac IMRT (N=142), volumetric modulated arc therapy (VMAT, N=234), and helical TomoTherapy (N=33). All plans were optimized using a consistent set of target volumes and a single OAR structure set. Four additional OAR structure sets were contoured by radiation oncologists (N=2) and medical physics residents (N=2) who had completed head-and-neck contouring training. Probabilistic DVHs and dose-volume coverage maps (DVCM), which show the probability of achieving a dose metric, were computed for each OAR on the following scenarios: SV alone (N=1000), SV+PV (N=1000*409), SV+DV (N=1000*5), SV+PV+DV (total variability [TV], N=1000*409*5). Analysis focused on whether the probability of exceeding the maximum dose constraint exceeded 5% for each OAR.
Results: The primary source of variability was PV, which was expected due to inter-observer planning abilities and preferences during the optimization planning process, even when all participants utilized the same constraints. The parotid had the largest interquartile range (IQR) in the PV scenario. Conversely, adding SV, DV, and TV each reduced the IQR, showing a washing-out effect on the DVCM.
Conclusion: Assessment of OAR sensitivity to DV will be highly sensitive to the specific planning technique and planner, likely requiring plan-specific assessment of in-tolerance delineation variations. Incorporating SV and DV in plan assessments washes out their relative impacts on maximum dose.
Simulation of Realistic Organ-At-Risk Delineation Variability in Head and Nec... - Wookjin Choi
(Sunday, 7/14/2019) 4:00 PM - 5:00 PM
Room: 225BCD
Purpose: To simulate realistic manual delineation (MD) organ-at-risk (OAR) delineation variability (DV) for the purpose of quantifying DV’s dosimetric impact.
Methods: Fourteen independent MD head-and-neck OAR structure sets (SS) were obtained from the ESTRO Falcon group. Seven OARs were available (BrainStem, Esophagus, OralCavity, Parotid_L, Parotid_R, SpinalCord, and Thyroid). A consensus MD SS was generated by the simultaneous truth and performance level estimation (STAPLE) method. MD DV was evaluated with respect to the STAPLE SS using the Dice coefficient and Hausdorff distance (HD) geometric similarity metrics. DVs were simulated using auto-delineation (AD) methods: an average surface of standard deviation (ASSD) method, GrowCut segmentation, and a random walker (RW) segmentation. Each OAR AD was repeated five times with a different seed or variability level. Dice and HD were computed for each OAR AD with respect to the STAPLE SS. Dosimetric analysis was achieved by intercomparing dose-volume histograms (DVH) from a plan developed with a reference MD SS with DVHs for each MD and AD. DVH confidence bands are reported for MD and each AD method.
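The Hausdorff distance used above as a geometric similarity metric is the largest distance from any point on one contour to the nearest point on the other. A minimal illustrative sketch on sampled surface points:

```python
import math

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two point sets."""
    def directed(src, dst):
        return max(min(math.dist(p, q) for q in dst) for p in src)
    return max(directed(pts_a, pts_b), directed(pts_b, pts_a))
```

Unlike the Dice coefficient, which rewards overall overlap, the Hausdorff distance is sensitive to a single outlying point, which is why the two metrics can rank delineations differently.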
Results: The MD Dice was 0.7±0.2 (μ±σ). AD Dice values (ASSD, GrowCut, and RW) were 0.5±0.2, 0.7±0.2, and 0.8±0.1, respectively. HDs (MD, ASSD, GrowCut, RW) were 35.4±45.2, 27.3±19.1, 29.3±19.9, and 14.6±10.3, respectively. The simulated DV increased with increasing seed standard deviation or variability level. The dosimetric effect was largest for MD DVs (larger OAR DVH confidence intervals and larger HD), even though the MD Dice was greater than the ASSD and GrowCut Dice values. GrowCut DV resulted in less dosimetric variation than RW, unlike the geometric indices.
Conclusion: We developed a framework to simulate DVs and demonstrated its feasibility. ADs were able to simulate different magnitudes of DVs, but did not replicate the dosimetric consequences of human delineation variability. The correlation between geometric similarity metrics and dosimetric consequences of DV is poor.
Quantitative image analysis for cancer diagnosis and radiation therapy - Wookjin Choi
1.Lung Cancer Screening
1.1.Deep learning (feasible but not interpretable)
1.2.Radiomics (concise model)
1.3.Spiculation quantification (interpretable feature)
2.PET/CT Tumor Response
2.1.Aggressive Lung ADC subtype prediction (helpful for surgeons)
2.2.Pathologic response prediction (accurate but not concise)
2.3.Local tumor morphological changes (accurate and interpretable)
Interpretable Spiculation Quantification for Lung Cancer Screening - Wookjin Choi
Spiculations are spikes on the surface of a pulmonary nodule and are important predictors of malignancy in lung cancer. In this work, we introduced an interpretable, parameter-free technique for quantifying this critical feature using the area distortion metric from the spherical conformal (angle-preserving) parameterization. The conformal factor in the spherical mapping formulation provides a direct measure of spiculation which can be used to detect spikes and compute spike heights for geometrically-complex spiculations. The use of the area distortion metric from conformal mapping has never been exploited before in this context. Based on the area distortion metric and the spiculation height, we introduced a novel spiculation score. A combination of our spiculation measures was found to be highly correlated (Spearman's rank correlation coefficient ρ = 0.48) with the radiologist's spiculation score. These measures were also used in the radiomics framework to achieve state-of-the-art malignancy prediction accuracy of 88.9% on a publicly available dataset.
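The Spearman correlation reported above is the Pearson correlation of rank vectors, so it captures any monotone relationship between the computed spiculation score and the radiologist's score. A minimal illustrative sketch:

```python
import math

def ranks(xs):
    """1-based average ranks; ties share the mean of their rank positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(xs, ys):
    """Spearman's rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    vy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (vx * vy)
```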
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Enchancing adoption of Open Source Libraries. A case study on Albumentations.AIVladimir Iglovikov, Ph.D.
Presented by Vladimir Iglovikov:
- https://www.linkedin.com/in/iglovikov/
- https://x.com/viglovikov
- https://www.instagram.com/ternaus/
This presentation delves into the journey of Albumentations.ai, a highly successful open-source library for data augmentation.
Created out of a necessity for superior performance in Kaggle competitions, Albumentations has grown to become a widely used tool among data scientists and machine learning practitioners.
This case study covers various aspects, including:
People: The contributors and community that have supported Albumentations.
Metrics: The success indicators such as downloads, daily active users, GitHub stars, and financial contributions.
Challenges: The hurdles in monetizing open-source projects and measuring user engagement.
Development Practices: Best practices for creating, maintaining, and scaling open-source libraries, including code hygiene, CI/CD, and fast iteration.
Community Building: Strategies for making adoption easy, iterating quickly, and fostering a vibrant, engaged community.
Marketing: Both online and offline marketing tactics, focusing on real, impactful interactions and collaborations.
Mental Health: Maintaining balance and not feeling pressured by user demands.
Key insights include the importance of automation, making the adoption process seamless, and leveraging offline interactions for marketing. The presentation also emphasizes the need for continuous small improvements and building a friendly, inclusive community that contributes to the project's growth.
Vladimir Iglovikov brings his extensive experience as a Kaggle Grandmaster, ex-Staff ML Engineer at Lyft, sharing valuable lessons and practical advice for anyone looking to enhance the adoption of their open-source projects.
Explore more about Albumentations and join the community at:
GitHub: https://github.com/albumentations-team/albumentations
Website: https://albumentations.ai/
LinkedIn: https://www.linkedin.com/company/100504475
Twitter: https://x.com/albumentations
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
2. Contents
• Introduction
• Lung Segmentation based on 3D Approach
• Nodule Candidates Detection and Feature Extraction
• Genetic Programming Based Classification
• Experimental Results
• Conclusions
• References
3. Introduction
• Pulmonary nodule detection is an attractive application of computer-aided detection (CAD) because lung cancer is the leading cause of cancer deaths.
• If lung cancer is detected at an early stage, the 3-year survival rate is more than 80%.
• Recently, researchers have developed a number of CAD methods for lung nodules to aid radiologists in identifying nodule candidates from CT images.
• Current CT technology allows for near-isotropic, submillimeter-resolution acquisition of the complete chest in a single breath hold.
• These thin-slice chest scans have become indispensable in thoracic radiology, but have also substantially increased the data load for radiologists.
• Automating the analysis of such data is therefore a necessity, and this has created a rapidly developing research area in medical imaging.
4. Related Works
• Template matching methods
– Genetic Algorithm Template Matching [10]
– 3D Template Matching [11]
• Model based methods
– Patient-specific models [5]
– Surface normal overlap model [7]
• Machine learning techniques
– Neural network [6]
– Fuzzy c-means clustering [9]
• Digital filtering
– Quantized convergence index filter [8]
– Iris filter [13]
• Statistical analysis [12]
7. Lung Segmentation based on 3D Approach
• Select an adaptive threshold value at every slice in the CT image sequence using a diagonal intensity histogram [4].
• The CT images are divided into a background area (body) and a foreground area (air or lung) as shown below.
Original CT image and converted CT image with threshold
8. Lung Segmentation based on 3D Approach
• Segment the lung region and remove the rim (the outer part of the body).
• Correct the contour of the lung volume (to recover excluded wall-side nodules).
Extracted lung region using 3D connected component labelling and contour-corrected lung region (containing a wall-side nodule)
10. ROI Extraction
6-stepped ROI and extracted nodule candidates
• Adaptive multiple thresholding method
– The traditional multiple thresholding method makes many steps of grey levels.
– We calculate the adaptive threshold value using a diagonal histogram at every slice of the lung volume.
– This value is the base threshold value for multiple thresholding.
– We make five additional threshold values: base threshold +50, -50, -100, -150 and -200.
11. Nodule Candidates Detection
• We remove the vessels and noise in the lung volume using a rule-based classifier.
• Vessel removal
A vessel is classified by its volume, elongation factor and compactness:
– its volume is much larger than a nodule's
– it is longer than a nodule
– it is not a compact object.
• Noise removal
– the radius of the ROI is smaller than 3 mm or larger than 30 mm.
• The remaining ROIs are nodule candidates.
6-stepped ROI and extracted
nodule candidates
12. Feature Extraction
• 3D geometric features
– Volume
– elongation factor
– Compactness
– approximated radius.
• 2D pixel-based features
– Use the median slice of the nodule candidate (the area of the median slice is the largest).
– To extract 2D texture features, we normalize the image size of the nodule candidates.
– We divide candidates into 3 size ranges and then extract the features:
• < 5 mm : the image matrix is 8x8.
• 5 mm ~ 10 mm : the image matrix is 16x16.
• > 10 mm : the image matrix is 32x32.
– We extract 14 features from the image matrix:
• mean, variance, skewness, kurtosis, area, radius and the 8 largest eigenvalues.
13. Feature Extraction
Index Feature
1 Z position
2 Mean
3 Variance
4 Skewness
5 Kurtosis
6 Area
7 Radius
8 Perimeter
9 Compactness
10~17 Largest Eigenvalue 1~8
18 X centroid
19 Y centroid
20 Z centroid
21 Width
22 Height
23 Depth
24 Size
15. Genetic Programming Based Classification
• Genetic Programming (GP)
– an evolutionary optimization technique [14].
• The basic structure of GP is very similar to that of the Genetic Algorithm (GA).
• The chromosome
– GA : variable (binary digits)
– GP : program (tree or graph)
17. Genetic Programming Based Classification
• The goal of GP evolution is to reduce false positives (FP) and increase true positives (TP).
• In the proposed scheme, an optimized classifier is constructed from a combination of features and random constant values.
• GP optimally selects adequate features from all extracted features and combines the selected features with mathematical operators.
• GP generates individual classifiers, which are evaluated by the fitness function.
• The result of GP converts the complex input features to a simple value.
18. Genetic Programming Based Classification
• GP chromosome
– The terminal set - the elements of the feature vector extracted from nodule candidate images, plus randomly generated constants within the range [0, 1].
– The function set - four standard arithmetic operators (plus, minus, multiply, divide) and additional mathematical operators (log, exp, abs, sin, cos). All operators in the function set are protected to avoid exceptions.
• GP evolves combinations of the terminal set and function set.
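The protected operators mentioned above can be sketched as follows. This is a minimal illustration: the specific fallback values (1 for division by zero, 0 for log of zero, the clamp in exp) are common conventions in the GP literature, not the paper's exact definitions.

```python
import math

# Protected operators: return a safe fallback instead of raising,
# so every evolved GP tree evaluates to a finite number.
def pdiv(a, b):
    """Protected division: fall back to 1 when the divisor is near zero."""
    return a / b if abs(b) > 1e-9 else 1.0

def plog(a):
    """Protected log: take log of the absolute value, fall back to 0 at zero."""
    return math.log(abs(a)) if abs(a) > 1e-9 else 0.0

def pexp(a):
    """Protected exp: clamp the argument to avoid overflow."""
    return math.exp(min(a, 50.0))
```

With protection, any randomly assembled tree over the terminal and function sets can be evaluated without runtime exceptions, which is what makes unconstrained crossover and mutation safe.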
19. Genetic Programming Based Classification
• Fitness function – evaluates every individual in a GP generation
– True positive rate (TPR)
• TPR = TP / (TP + FN)
– Specificity (SPC)
• SPC = TN / (TN + FP) = 1 - FPR; SPC is also called the true negative rate (TNR).
• FPR = FP / (FP + TN)
– Area under the ROC curve (Az)
• The ROC curve is plotted between the TP and FP rates for different threshold values.
• Az is the area under the ROC curve and a good measure of classifier performance under different conditions.
– Fitness function
• f = TPR * SPC * Az
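A minimal Python sketch of this fitness computation. The zero output threshold matches the wrapper described in the text; the rank-based (Mann-Whitney) Az estimate is an assumption standing in for whatever ROC procedure the paper used.

```python
# Fitness f = TPR * SPC * Az for a GP individual, computed from its
# real-valued outputs (positive => nodule, label 1; negative => non-nodule, label 0).
def auc(scores, labels):
    # Area under the ROC curve via the rank (Mann-Whitney) formulation.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        return 0.0
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def fitness(scores, labels):
    # Confusion-matrix counts at the zero threshold.
    tp = sum(1 for s, y in zip(scores, labels) if s > 0 and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s <= 0 and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s <= 0 and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s > 0 and y == 0)
    tpr = tp / (tp + fn) if tp + fn else 0.0
    spc = tn / (tn + fp) if tn + fp else 0.0
    return tpr * spc * auc(scores, labels)
```

Because the product is taken, an individual must score well on sensitivity, specificity, and Az simultaneously to survive selection.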
20. Genetic Programming Based Classification
Objective : To evolve maximum fitness
Selection : Generational
Population size : 300
Generation size : 80
Initial tree depth limit : 6
Initial population : Ramped half-and-half
GP operator probabilities : Variable ratio of crossover and mutation
Sampling : Tournament
Survival mechanism : Keep the best individuals
Real max. tree level : 30
Genetic Programming parameters
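Collected as a configuration fragment, the run settings from the table above might look like this (the key names are illustrative; the paper does not specify a toolbox API):

```python
# GP run parameters from the table above, as an illustrative config dict.
GP_PARAMS = {
    "population_size": 300,
    "generations": 80,
    "initial_tree_depth_limit": 6,
    "max_tree_depth": 30,            # "real max. tree level"
    "init_method": "ramped_half_and_half",
    "sampling": "tournament",
    "selection": "generational",
    "operator_probabilities": "variable crossover/mutation ratio",
    "survival": "keep_best",         # elitism
}
```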
21. Genetic Programming Based Classification
• Examples of GP
– minus(minus(P(21,:),exp(P(23,:))),minus(mypower(mylog(plus(times(P(14,:),minus(P(23,:),mypower(mylog(plus(times(P(12,:),minus(P(11,:),mypower(P(13,:),P(13,:)))),P(22,:))),P(13,:)))),minus(P(20,:),cos(exp(P(7,:)))))),mypower(exp(P(7,:)),P(7,:))),times(minus(minus(mypower(exp(P(23,:)),P(7,:)),P(11,:)),times(exp(P(23,:)),P(12,:))),P(11,:))))
– minus(minus(plus(P(4,:),P(7,:)),sin(minus(P(7,:),mypower(P(24,:),plus(minus(P(7,:),P(11,:)),mypower(P(15,:),P(7,:))))))),mypower(mypower(mypower(P(24,:),exp(P(11,:))),minus(plus(P(10,:),mypower(plus(minus(P(13,:),sin(exp(P(12,:)))),mypower(P(24,:),plus(P(13,:),P(4,:)))),plus(plus(minus(0.35089,0.35089),P(3,:)),P(7,:)))),P(11,:))),minus(plus(P(10,:),minus(P(4,:),plus(P(10,:),P(7,:)))),mypower(minus(plus(P(4,:),exp(P(2,:))),minus(P(9,:),P(4,:))),P(11,:)))))
22. Experimental Results
• Lung Image Database Consortium (LIDC) database [15]
– to evaluate the performance of the proposed method.
– The LIDC is developing a publicly available database of thoracic computed tomography (CT) scans as a medical imaging research resource to promote the development of computer-aided detection and characterization of pulmonary nodules.
– The database is separated into 84 cases, each containing around 100-400 Digital Imaging and Communications in Medicine (DICOM) images and an XML data file containing the physician annotations.
• We applied our method to 32 scans consisting of 153 nodules and 7528 slices. The pixel size in the database ranged from 0.65 to 0.75 mm, and the reconstruction interval varied across scans.
• Half of the dataset (16 scans) was used for training and the other half (16 scans) for testing the classifier.
23. Experimental Results
The result of pulmonary nodule detection: (a) 43rd slice, (b) volume rendering.
24. Experimental Results
Data set TPR FPR Az
learn 93.33% 0.127 0.934
test 91.67% 0.138 0.897
all 92.31% 0.133 0.912
The results of the GP-based classifier
26. Conclusion
• We have proposed a novel pulmonary nodule detection algorithm for CT images.
• The lung region is segmented using an adaptive thresholding and voxel labelling based method.
• Then nodule candidates are detected using adaptive multiple thresholding and a rule-based classifier with 3D geometric features.
• Next, 3D and 2D features are extracted from the detected nodule candidates.
• Finally, the extracted features are optimized and classified into nodule and non-nodule using GP.
• We applied the proposed algorithm to the LIDC database of the NCI.
• The method greatly reduced the FP rate.
• The FP rate is only 6.5 per scan with more than 90% sensitivity.
• The results show the superiority of the proposed method.
27. References
• [1] Ahmedin Jemal, Rebecca Siegel, Elizabeth Ward, Yongping Hao, Jiaquan Xu, and Michael J Thun, “Cancer statistics, 2009,” CA Cancer J Clin, vol. 59, no. 4, pp. 225–49, Jan 2009.
• [2] K-W Jung, Y-J Won, S Park, H-J Kong, J Sung, H-R Shin, E-Cl Park, and J S Lee, “Cancer
statistics in korea: incidence, mortality and survival in 2005,” J Korean Med Sci, vol. 24, no. 6,
pp. 995–1003, Dec 2009.
• [3] Qiang Li, “Recent progress in computer-aided diagnosis of lung nodules on thin-section
ct.,” Comput Med Imaging Graph, vol. 31, no. 4-5, pp. 248–257, 2007.
• [4] S G Armato, M L Giger, C J Moran, J T Blackburn, K Doi, and H MacMahon, “Computerized
detection of pulmonary nodules on ct scans,” Radiographics, vol. 19, no. 5, pp. 1303–11, Jan
1999.
• [5] M Brown, M McNitt-Gray, J Goldin, R Suh, J Sayre, and D Aberle, “Patient-specific models
for lung nodule detection and surveillance in ct images,” IEEE TMI, vol. 20, no. 12, pp. 1242 –
1250, Dec 2001.
• [6] K Suzuki, SG Armato III, F Li, S Sone, and K Doi, “Massive training artificial neural network
(mtann) for reduction of false positives in computerized detection of lung nodules in low-dose
computed tomography,” Medical physics, vol. 30, pp. 1602, 2003.
• [7] D Paik, C Beaulieu, G Rubin, B Acar, R Jeffrey, J Yee, J Dey, and S Napel, “Surface normal
overlap: a computer-aided detection algorithm with application to colonic polyps and lung
nodules in helical ct,” IEEE TMI, vol. 23, no. 6, pp. 661 – 675, Jun 2004.
• [8] Sumiaki Matsumoto, Harold L Kundel, James C Gee, Warren B Gefter, and Hiroto Hatabu,
“Pulmonary nodule detection in ct images with quantized convergence index filter.,” Med
Image Anal, vol. 10, no. 3, pp. 343–352, Jun 2006.
28. References
• [9] N Memarian, J Alirezaie, and P Babyn, “Computerized detection of lung nodules with an
enhanced false positive reduction scheme,” IEEE ICIP 2006, pp. 1921 –1924, Sep 2006.
• [10] Jamshid Dehmeshki, Xujiong Ye, Xinyu Lin, Manlio Valdivieso, and Hamdan Amin,
“Automated detection of lung nodules in ct images using shape-based genetic algorithm.,”
Comput Med Imaging Graph, vol. 31, no. 6, pp. 408–417, Sep 2007.
• [11] Onur Osman, Serhat Ozekes, and Osman N Ucan, “Lung nodule diagnosis using 3d
template matching.,” Comput Biol Med, vol. 37, no. 8, pp. 1167–1172, Aug 2007.
• [12] A El-Baz, G Gimel’farb, R Falk, and M Abo El-Ghar, “Automatic analysis of 3d low dose ct
images for early diagnosis of lung cancer,” Pattern Recognition, vol. 42, no. 6, pp. 1041–1051,
Jan 2009.
• [13] JJ Suárez-Cuenca, PG Tahoces, M Souto, MJ Lado, M Remy-Jardin, J Remy, and J José Vidal, “Application of the iris filter for automatic detection of pulmonary nodules on computed tomography images,” Computers in Biology and Medicine, 2009.
• [14] J Koza, “Genetic programming: On the programming of computers by means of natural
selection,” The MIT Press, Jan 1992.
• [15] S G Armato, G McLennan, M F McNitt-Gray, C R Meyer, D Yankelevitz, D R Aberle, C I
Henschke, E A Hoffman, E A Kazerooni, H MacMahon, A P Reeves, B Y Croft, L P Clarke, and
Lung Image Database Consortium Research Group, “Lung image database consortium:
developing a resource for the medical imaging research community.,” Radiology, vol. 232, no.
3, pp. 739–748, Sep 2004.
Pulmonary nodule detection is an attractive application of computer-aided detection (CAD) because lung cancer is the leading cause of cancer deaths in Korea. According to the statistics, the total number of deaths caused by lung cancer is greater than that of other cancers [1]. Pulmonary nodule detection and diagnosis of lesions in computed tomography (CT) images are important in the treatment of lung cancer. If lung cancer is detected at an early stage, the 3-year survival rate is more than 80%. Recently, researchers have developed a number of CAD methods for lung nodules to aid radiologists in identifying nodule candidates from CT images.
Current CT technology allows for near-isotropic, submillimeter-resolution acquisition of the complete chest in a single breath hold. These thin-slice chest scans have become indispensable in thoracic radiology, but have also substantially increased the data load for radiologists. Automating the analysis of such data is therefore a necessity, and this has created a rapidly developing research area in medical imaging. In the literature, several nodule detection methods have been proposed. Multiple grey-level thresholding, genetic algorithm template matching (GATM), rule-based linear discriminant analysis, massive-training artificial neural network based methods, shape-based GATM, and 3D template matching (3DTM) based algorithms are well known among them [2–5].
GATM-based algorithms have produced quite good results. Lee et al. [2] proposed a template-matching technique based on GATM for detecting nodules existing within the lung area. Seventy-one nodules out of 98 were correctly detected, with the number of FPs at approximately 1.1 per sectional image.
First of all, lung region extraction must be performed before any other part of nodule detection.
To extract the lung region, we propose a segmentation method based on adaptive thresholding and voxel labelling.
Because the lung region is dark, we convert the image to a binary image, with values less than the selected threshold as foreground.
After that, we remove the rim from the binary image at every slice of the CT images.
We segment the lung region and remove the rim, which is the outer part of the body.
However, there are many noisy parts, like gas in the intestine.
So, we apply 18-connectedness voxel labelling (3D connected component labelling).
After labelling, we calculate the volume of every connected component, then select the two largest volumes as the lung volume.
In the end, we correct the contour of the lung volume, because there may be some nodules on the wall side of the lung.
To correct this problem, the rolling ball algorithm [4] is applied to every slice of the lung volume.
The red circles in Fig. 1 mark a wall-side nodule and a corrected wall-side nodule.
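The thresholding-and-labelling steps above can be sketched roughly as follows, assuming the CT volume is a NumPy array of intensities. The fixed threshold value and the border-based rim removal are simplified stand-ins for the paper's diagonal-histogram threshold selection and rim step, and the rolling-ball contour correction is omitted.

```python
import numpy as np
from scipy import ndimage

def segment_lungs(volume, threshold=-400):
    # 1) Binarize: dark voxels (air/lung) below the threshold are foreground.
    binary = volume < threshold
    # 2) Rim removal (simplified): drop 2D components touching the slice border.
    for z in range(binary.shape[0]):
        labeled2d, _ = ndimage.label(binary[z])
        border = (set(labeled2d[0, :]) | set(labeled2d[-1, :])
                  | set(labeled2d[:, 0]) | set(labeled2d[:, -1])) - {0}
        if border:
            binary[z] &= ~np.isin(labeled2d, list(border))
    # 3) 3D connected-component labelling with an 18-connected structure.
    structure = ndimage.generate_binary_structure(3, 2)  # 18-connectivity
    labeled, n = ndimage.label(binary, structure=structure)
    if n == 0:
        return binary
    # 4) Keep the two largest components as the lung volume.
    sizes = ndimage.sum(binary, labeled, range(1, n + 1))
    keep = np.argsort(sizes)[-2:] + 1
    return np.isin(labeled, keep)
```

Selecting the two largest 3D components directly discards small noisy regions such as intestinal gas, which is why the labelling step precedes any contour correction.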
Nodule candidate detection and feature extraction are important parts of the nodule detection scheme.
This stage consists of extracting the regions of interest (ROIs), detecting the nodule candidates, and extracting 3D and 2D features of the nodule candidates.
These features are provided as input to the GP module.
To extract ROIs for nodule candidate detection, we propose an adaptive multiple thresholding method.
In the literature, the multiple thresholding method is commonly used in applications.
However, it is not adaptive and produces many steps of grey levels, so we calculate the adaptive threshold value using the diagonal histogram at every slice of the lung volume.
This value is the base threshold value for multiple thresholding, and we make five additional threshold values: base threshold +50, -50, -100, -150 and -200.
A thresholded lung volume has 6 steps of grey level. Fig. 2a shows a slice of the extracted ROI.
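One plausible reading of the 6-step thresholding, sketched in Python: the base threshold and the five offsets define six thresholds, and counting how many a pixel exceeds yields a stepped grey-level image. The "count of thresholds passed" encoding is an assumption, not stated explicitly in the text.

```python
import numpy as np

# Six thresholds: base+50, base, base-50, base-100, base-150, base-200.
OFFSETS = (50, 0, -50, -100, -150, -200)

def multi_threshold(slice_img, base_threshold):
    """Return a label image with values 0..6 (number of thresholds passed)."""
    steps = np.zeros(slice_img.shape, dtype=np.uint8)
    for off in OFFSETS:
        steps += (slice_img >= base_threshold + off).astype(np.uint8)
    return steps
```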
In this part, we remove the vessels and noise in the lung volume using a rule-based classifier.
We extract 3D geometric features from every ROI.
The features are volume, elongation factor, compactness and approximated radius.
A vessel is classified by its volume, elongation factor and compactness.
A vessel runs through every slice, so its volume is much larger than a nodule's.
Moreover, it is longer than a nodule and is not a compact object.
Noise is removed if the radius of an ROI is smaller than 3 mm or larger than 30 mm.
Fig. 3b shows the detected nodule candidates.
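The rule-based filter can be sketched as below. Only the 3-30 mm radius rule is stated explicitly in the text; the concrete cut-off values for volume, elongation and compactness are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    volume: float        # mm^3
    elongation: float    # major/minor axis ratio
    compactness: float   # close to 1.0 = compact, sphere-like
    radius: float        # approximated radius in mm

def is_nodule_candidate(c, max_volume=15000.0,
                        max_elongation=3.0, min_compactness=0.3):
    # Noise rule (from the text): radius outside 3-30 mm.
    if c.radius < 3.0 or c.radius > 30.0:
        return False
    # Vessel rules: very large, elongated, non-compact objects are rejected.
    if (c.volume > max_volume or c.elongation > max_elongation
            or c.compactness < min_compactness):
        return False
    return True
```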
Nodule candidates are detected from the segmented lung region.
We extract 3D geometric features and 2D pixel-based features.
Four 3D geometric features are already extracted during nodule candidate detection: volume, elongation factor, compactness and approximated radius.
The 2D features are extracted from the median slice of the nodule candidate, because the area of the median slice is the largest in the nodule candidate volume.
To extract the 2D texture features, we normalize the image size of the nodule candidates.
So, we divide candidates into 3 size ranges and then extract the features.
If the radius of the nodule candidate is less than 5 mm, the size of the image matrix is 8x8.
If the radius of the nodule candidate varies from 5 mm to 10 mm, the size of the image matrix is 16x16.
The largest image matrix size is 32x32, for radii greater than 10 mm and less than 20 mm.
We extract 14 features from the image matrix.
Those are mean, variance, skewness, kurtosis, area, radius and the 8 largest eigenvalues.
These features are provided as input to the GP module.
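The size-dependent normalization and the moment/eigenvalue features can be sketched as follows. Area and radius come from the geometry step, so only the remaining features are computed here; taking the magnitudes of the matrix eigenvalues is an assumed interpretation of "8 biggest eigenvalues".

```python
import numpy as np

def matrix_size(radius_mm):
    """Image matrix size chosen from the candidate radius, per the text."""
    if radius_mm < 5:
        return 8
    if radius_mm <= 10:
        return 16
    return 32

def texture_features(patch):
    """Mean, variance, skewness, kurtosis and the 8 largest eigenvalue magnitudes."""
    x = patch.astype(float).ravel()
    m, v = x.mean(), x.var()
    s = v ** 0.5 or 1.0  # guard against zero variance
    skew = ((x - m) ** 3).mean() / s ** 3
    kurt = ((x - m) ** 4).mean() / s ** 4
    eig = np.sort(np.abs(np.linalg.eigvals(patch.astype(float))))[-8:][::-1]
    return [m, v, skew, kurt] + list(eig)
```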
Pulmonary nodule detection is a binary classification problem.
In a binary classification problem, the outputs are labelled as positive or negative.
Positive means nodule and negative means non-nodule.
In pulmonary nodule detection, most nodule candidates are actually negative.
So, candidate detection produces many false positives.
The goal of GP evolution is to reduce false positives (FP) and increase true positives (TP).
In the proposed scheme, an optimized classifier is constructed from a combination of features and random constant values.
It reduces FPs while maintaining a high TP rate.
GP optimally selects adequate features from all extracted features and combines the selected features with mathematical operators.
GP generates individual classifiers, and those are evaluated by the fitness function.
The result of GP converts the complex input features to a simple value.
This value is easily classified into nodule and non-nodule.
The ROC curve is plotted between the TP and FP rates for different threshold values.
Az is the area under the ROC curve and a good measure of classifier performance under different conditions.
If we use only Az as the fitness function, we can also achieve good TP and FP rates.
However, GP then cannot produce a proper classification threshold.
Therefore, we also use the true positive rate (TPR) and specificity (SPC) as parts of the fitness function.
We use specificity (SPC) instead of the false positive rate (FPR), because FPR is better when low, whereas the other indicators are better when high.
SPC equals 1 minus FPR and is also called the true negative rate (TNR).
The fitness function is defined as the product of the three indicators.
In the GP cycle, the fitness function evaluates the quality of each individual (classifier).
In this work, we used three indicators in the fitness function: the area under the receiver operating characteristic (ROC) curve (Az), sensitivity, and specificity.
GP evolution is controlled by the parameters shown in Table 1.
All parameters are set to typical values.
We used the ramped half-and-half method to generate the initial population.
The output of GP is a real value.
We need a wrapper to simplify the output of GP.
If the output of GP is positive, the nodule candidate is classified as a nodule; otherwise, as a non-nodule.
Finally, when the number of generations reaches the maximum limit, the GP run is stopped.
The best individual is obtained at the end of the GP run.
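The wrapper step described above amounts to thresholding the evolved expression's output at zero. A minimal sketch, where `individual` is any callable standing in for an evolved GP tree and `toy` is a purely hypothetical example expression:

```python
def classify(individual, features):
    """Evaluate a GP individual on a feature vector and wrap its real output."""
    value = individual(features)   # real-valued output of the GP tree
    return "nodule" if value > 0 else "non-nodule"

# Hypothetical toy individual combining two features of a candidate.
toy = lambda f: f[0] - 2.0 * f[1]
```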
The sensitivity of nodule candidate detection is about 100%, and its FP rate is 0.9.
The nodule candidates include many FPs.
The results in Table 2 show the nodule detection rate (TPR), FPR and Az with respect to three types of datasets.
The FP rates of the three datasets are about 10% of the FP rate without GP.
The ROC curves of the datasets are shown in Fig. 4.
The proposed method achieved a 92% detection rate with 6.5 FPs per scan.
We have proposed a novel pulmonary nodule detection algorithm for CT images.
The lung region is segmented using an adaptive thresholding and voxel labelling based method.
Then nodule candidates are detected using adaptive multiple thresholding and a rule-based classifier with 3D geometric features.
Next, 3D and 2D features are extracted from the detected nodule candidates.
Finally, the extracted features are optimized and classified into nodule and non-nodule using GP.
We applied the proposed algorithm to the LIDC database of the NCI.
The method greatly reduced the FP rate.
The FP rate is only 6.5 per scan with more than 90% sensitivity.
The results show the superiority of the proposed method.