Automatic liver tumor segmentation would significantly influence liver treatment planning and follow-up assessment, as a result of the standardization and incorporation of full volumetric image information. Here, we develop a fully automatic method for liver tumor segmentation in CT images. Initial liver segmentation consists of applying an active contour method. After extracting the liver, a superpixel segmentation algorithm is applied for segmenting the liver tumor efficiently. In the proposed work, we investigate these techniques in order to improve segmentation of different sections of the CT images. The experimental results showed that the proposed method was accurate for liver tumor segmentation.
PERFORMANCE EVALUATION OF TUMOR DETECTION TECHNIQUES ijcsa
Automatic segmentation of tumors plays a vital role in diagnosis and surgical planning. This paper deals
with techniques that provide a solution for detecting hepatic tumors in Computed Tomography (CT)
images. The main aim of this work is to analyze the performance of tumor detection techniques such as Knowledge-Based Constraints, the Graph Cut method, and the Gradient Vector Flow active contour. These three techniques
are evaluated using sensitivity, specificity, and accuracy. From the evaluated results, the knowledge-based
constraints method performs better than the graph cut method and the gradient vector flow active contour.
A novel approach to jointly address localization and classification of breast...IJECEIAES
Localization of the cancerous region and classification of the type of cancer are highly inter-linked with each other. However, investigation of existing approaches shows that these problems are almost always solved individually, leaving a significant research gap for a generalized solution that addresses both problems together. Therefore, the proposed manuscript presents a simple, novel, and less-iterative computational model that jointly addresses the localization-classification problem, taking the case study of early diagnosis of breast cancer. The proposed study harnesses the potential of a simple bio-inspired optimization technique in order to obtain better local and global best outcomes and to confirm the accuracy of the result. The study outcome exhibits that the proposed system offers higher accuracy and lower response time in contrast with other existing classifiers frequently witnessed in existing approaches to classification in medical image processing.
Individually Optimized Contrast-Enhanced 4D-CT for Radiotherapy Simulation in...Wookjin Choi
To develop an individually optimized contrast-enhanced (CE) 4D-CT for radiotherapy simulation in pancreatic ductal adenocarcinomas (PDA).
http://scitation.aip.org/content/aapm/journal/medphys/43/6/10.1118/1.4958261
Objective Quality Assessment of Image Enhancement Methods in Digital Mammogra...sipij
Breast cancer is the most common cancer among women worldwide, constituting more than 25%
of all cancer incidences occurring in the world [1]. Statistics show that the US, India, and China
account for more than one third of all breast cancer cases [2]. There has also been a steady
increase in breast cancer incidence among the younger generation worldwide. In India, one out of
two women dies after being diagnosed with breast cancer, whereas in China it is one in four and in
the USA it is one in eight [2]. The statistics therefore show that breast cancer mortality is highest in India
among all nations in the world. In the US, though the number of women diagnosed with cancer
is more than that in India, their mortality
Robust Normal Lung CT Texture Features for the Prediction of Radiation-Induce...Wookjin Choi
Abstract
Purpose/Objective(s)
Normal lung CT texture features have been used for the prediction of radiation-induced lung disease (RILD) - pneumonitis and fibrosis. For these features to be clinically useful, they need to be relatively invariant (robust) to tumor size and not correlated with normal lung volume.
Materials/Methods
The free-breathing CTs of 14 lung SBRT patients were studied. Different sizes of GTVs were simulated with spheres (diameters 10 to 60 mm) placed in the lung contralateral to the tumor. Twenty-seven texture features (9 from intensity histogram, 8 from the gray-level co-occurrence matrix [GLCM], and 10 from the gray-level run-length matrix [GLRM]) were extracted from [lung – GTV]. The Bland-Altman method was applied to measure the normalized range of agreement (nRoA) of each texture feature when GTV size varied. A feature was considered as robust when its nRoA was less than that of [lung – GTV] volume (8.8%) and regarded as not correlated when their absolute correlation coefficient was lower than 0.70.
Results
Eighteen texture features were identified as robust. All intensity histogram features were robust except sum and kurtosis. All GLCM features were robust except energy and Haralick's Correlation. Five GLRM features (two run emphasis and three high gray-level emphasis) were robust while the other five (two nonuniformity and three low gray-level emphasis) were nonrobust. Particularly, all three low gray-level emphasis features had extremely large nRoAs (∼30%), indicating large variations when GTV size changed. None of the robust features was correlated with the normal lung [lung – GTV] volume, suggesting that they can provide additional information. Three nonrobust features (sum and two nonuniformity features) were highly correlated with the normal lung volume. No feature showed statistically significant differences (P < 0.05) with respect to GTV location (upper vs. lower lobe).
Conclusion
We identified 18 robust lung CT texture features that were invariant to varying tumor volumes. In particular, the three GLRM high gray-level emphasis features can characterize the radiologic manifestations of pulmonary abnormalities. Hence these features can be further examined for the prediction of RILD.
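The robustness criterion above rests on the Bland-Altman normalized range of agreement; a hedged sketch follows. The exact normalization the study used is not spelled out here, so treat this definition (width of the 95% limits of agreement divided by the grand mean) as an assumption:

```python
import statistics

def bland_altman_nroa(x, y):
    """Normalized range of agreement (nRoA), in percent, between paired
    feature measurements x and y (e.g., the same texture feature computed
    with a small vs. a large simulated GTV).  Assumed definition: width of
    the 95% limits of agreement (+/- 1.96 SD of the pairwise differences)
    divided by the grand mean of all measurements."""
    diffs = [a - b for a, b in zip(x, y)]
    sd = statistics.stdev(diffs)
    grand_mean = statistics.mean(list(x) + list(y))
    return 2 * 1.96 * sd / grand_mean * 100.0

# Hypothetical feature values for 5 patients under two simulated GTV sizes
small_gtv = [10.2, 11.0, 9.8, 10.5, 10.9]
large_gtv = [10.0, 11.3, 9.9, 10.1, 11.0]
print(bland_altman_nroa(small_gtv, large_gtv))
```

A feature would then count as robust when its nRoA stays below the chosen volume-based threshold.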
Individually Optimized Contrast-Enhanced 4D-CT for Radiotherapy Simulation in...Wookjin Choi
Purpose/Objectives: To develop an individually optimized contrast-enhanced (CE) 4D-CT for radiotherapy simulation in pancreatic adenocarcinoma (PDA).
Materials/Methods: Ten PDA patients were enrolled and underwent three CT scans: a 4D-CT immediately following a CE 3D-CT, and an individually optimized CE 4D-CT using a test injection to estimate the peak contrast enhancement time and to optimize the delay time. Three physicians contoured the tumor and pancreatic tissues. We compared image quality scores, tumor volume, motion, image noise, tumor-to-pancreas contrast, and contrast-to-noise ratio (CNR) in the three CTs. We also evaluated inter-observer variations in contouring the tumor using simultaneous truth and performance level estimation (STAPLE).
Results: The average image quality scores for CE 3D-CT and CE 4D-CT were comparable (4.0 and 3.8, p=0.47), and both were significantly better than that for 4D-CT (2.6, p<0.001). The tumor-to-pancreas contrast in CE 3D-CT and CE 4D-CT were comparable (15.5 and 16.7 HU, p=0.71), and the latter was significantly higher than that in 4D-CT (9.2 HU, p=0.03). Image noise in CE 3D-CT (12.5 HU) was significantly lower than that in CE 4D-CT (22.1 HU, p<0.001) and 4D-CT (19.4 HU, p=0.005). The CNR in CE 3D-CT and CE 4D-CT were comparable (1.4 and 0.8, p=0.23), and the former was significantly better than that in 4D-CT (0.6, p=0.04). The average tumor volume was smaller in CE 3D-CT (29.8 cm³) and CE 4D-CT (22.8 cm³) than in 4D-CT (42.0 cm³), though the differences were not statistically significant. The tumor motion was comparable in 4D-CT and CE 4D-CT (7.2 and 6.2 mm, p=0.23). The inter-observer variations were comparable in CE 3D-CT and CE 4D-CT (Jaccard index 66.0% and 61.9%), and the former was significantly smaller than that of 4D-CT (55.6%, p=0.047).
Conclusions: The CE 4D-CT demonstrated largely comparable characteristics to the CE 3D-CT. It has high potential for simultaneously delineating the tumor and quantifying the tumor motion with a single scan.
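The contrast and CNR quantities compared above follow the usual definitions; a minimal sketch, assuming CNR = |contrast| / image noise (the HU values below are illustrative, not the study's data):

```python
def contrast_and_cnr(tumor_mean_hu, pancreas_mean_hu, noise_sd_hu):
    """Tumor-to-pancreas contrast (HU) and contrast-to-noise ratio.
    Assumed definition: CNR = |contrast| / image noise (SD in HU)."""
    contrast = tumor_mean_hu - pancreas_mean_hu
    return contrast, abs(contrast) / noise_sd_hu

# Illustrative values only (PDA tumors are typically hypodense vs. pancreas)
contrast, cnr = contrast_and_cnr(tumor_mean_hu=54.0,
                                 pancreas_mean_hu=70.0,
                                 noise_sd_hu=8.0)
print(contrast, cnr)  # -16.0 2.0
```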
Computer-aided diagnosis system for breast cancer based on the Gabor filter ...IJECEIAES
The most prominent cause of death among women all over the world is breast cancer. Early detection of cancer helps to lower the death rate. Mammography scans can reveal breast tumors at the first stage. Because mammograms have low contrast, it is difficult for the radiologist to recognize micro-growths. A computer-aided diagnostic system is a powerful tool for interpreting mammograms; it helps the specialist determine the presence of a breast lesion and distinguish between the normal area and the mass. In this paper, the Gabor filter is presented as a key step in building a diagnostic system. It is considered an effective method for extracting features, which helps to avoid tumor classification difficulties and reduce false positives. A linear support vector machine is used in this system for classification. To improve the results, an adaptive histogram equalization pre-processing step is employed. The Mini-MIAS database is utilized to evaluate this method. The highest accuracy, sensitivity, and specificity achieved are 98.7%, 98%, and 99%, respectively, for a region of interest of 30×30. The results demonstrate the efficacy and accuracy of the proposed method in helping the radiologist diagnose breast cancer.
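A Gabor filter bank of the kind used for feature extraction can be generated from the standard kernel formula; a self-contained sketch, with parameter values that are illustrative rather than taken from the paper:

```python
import math

def gabor_kernel(size, sigma, theta, lambd, psi=0.0, gamma=0.5):
    """Real part of a 2-D Gabor kernel: a Gaussian envelope modulating
    a cosine carrier oriented at angle theta (radians)."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)    # rotated coords
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
            row.append(envelope * math.cos(2 * math.pi * xr / lambd + psi))
        kernel.append(row)
    return kernel

# One orientation of a small filter bank; texture features would come from
# convolving each kernel with the mammogram region of interest.
k = gabor_kernel(size=7, sigma=2.0, theta=0.0, lambd=4.0)
print(k[3][3])  # 1.0 at the kernel center (envelope and carrier both peak)
```

In practice several orientations and wavelengths are combined, and the filter responses feed the SVM classifier.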
The Advantages of Two Dimensional Techniques (2D) in Pituitary Adenoma TreatmentIOSR Journals
The purpose of the study is to evaluate two-dimensional (2D) dose distribution techniques in the treatment of pituitary adenoma patients, in order to provide 2D dose coverage to the target volume while sparing organs at risk (OARs). A CT simulator was used to image 300 pituitary adenoma patients to conform 2D dose distribution planning inside the tumour bed, and its structures were delineated, including gross target volume (GTV), clinical target volume (CTV), and planning target volume (PTV), as well as OARs. Dose distribution analysis was performed to provide 2D dose coverage to the target while sparing organs at risk. The main results of the study were that 2D dose distribution plans increase the unnecessary dose to the critical organs according to their geographical location relative to the pituitary adenoma site. The study concludes that when the tumour dose increases from 45 to 55 Gy there is a linearly proportional increment of dose to the organs at risk, and when the dose is about 60 Gy in 2D, the increments of unnecessary dose to the temporal lobe, eye, and optic chiasm are 0.31 Gy, 0.34 Gy, and 0.42 Gy, respectively. New techniques that lessen the unnecessary dose to OARs need to be developed.
Robust Normal Lung CT Texture Features for the Prediction of Radiation-Induce...Wookjin Choi
Abstract
Purpose: Normal lung CT texture features have been used for the prediction of radiation-induced lung disease (RILD) - pneumonitis and fibrosis. For these features to be clinically useful, they should be robust (relatively invariant or unbiased) to tumor size variations and not correlated (non-redundant) with the normal lung volume of interest, i.e., volume of the peri-tumoral region.
Methods: CT images of 14 lung cancer patients were studied. Different sizes of gross tumor volumes (GTVs) were simulated with spheres (diameters 10 to 60 mm) and placed in the lung contralateral to the tumor. 27 texture features [nine from intensity histogram, eight from the gray-level co-occurrence matrix (GLCM) and ten from the gray-level run-length matrix (GLRM)] were extracted from the peri-tumoral region (uniform 30 mm expansion around the GTV in the lung). The Bland-Altman analysis was applied to measure the normalized range of agreement (nRoA) for each feature when GTV size varied. A feature was considered as robust when its nRoA was less than the threshold (100%), which was chosen at the nRoA of the volume of the peri-tumoral region with modification based on the cumulative graph of features vs. nRoA. A feature was regarded as not correlated with the volume of the peri-tumoral region when their correlation was lower than 0.70.
Results: 16 of the 27 texture features were identified as robust. All intensity histogram features were robust except sum and kurtosis. All GLCM features were robust except cluster shade, cluster prominence, and Haralick's Correlation. Five GLRM features (two run emphasis and three high gray-level emphasis) were robust while the other five (two nonuniformity and three low gray-level emphasis) were unrobust. None of the robust features was correlated with the volume of the peri-tumoral region. No feature showed statistically significant differences (P<0.05) with respect to GTV location (upper vs. lower lobe).
Conclusion: We identified 16 robust normal lung CT texture features that can be further examined for the prediction of RILD. Particularly, GLRM high gray-level emphasis features were robust and characterized the radiologic manifestations of pulmonary abnormalities.
Quantitative Image Feature Analysis of Multiphase Liver CT for Hepatocellular...Wookjin Choi
To identify the effective quantitative image features (radiomics features) for prediction of response, survival, recurrence and metastasis of hepatocellular carcinoma (HCC) in radiotherapy.
Identification of Robust Normal Lung CT Texture FeaturesWookjin Choi
Normal lung CT texture features have been used for the prediction of radiation-induced lung disease (radiation pneumonitis and radiation fibrosis). For these features to be clinically useful, they need to be relatively invariant (robust) to tumor size and not correlated with normal lung volume.
http://scitation.aip.org/content/aapm/journal/medphys/43/6/10.1118/1.4955803
Framework for progressive segmentation of chest radiograph for efficient diag...IJECEIAES
Segmentation is one of the most essential steps required to identify the object of interest in a chest x-ray. A review of existing segmentation techniques for chest x-rays as well as other vital organs was performed. The main objective was to find whether existing systems offer accuracy at the cost of recursive and complex operations. The proposed system contributes a framework that offers a good balance between computational performance and segmentation performance. Given an input chest x-ray, the system performs a progressive search for similar images on the basis of a similarity score with the queried image. A region-based shape descriptor is applied to extract features exclusively for identifying the lung region within the thoracic region, followed by contour adjustment. The final segmentation outcome shows accurate identification followed by segmentation of the apical and costophrenic regions of the lung. Comparative analysis showed that the proposed system offers better segmentation performance than existing systems.
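The progressive search by similarity score against the queried image can be illustrated with a simple descriptor comparison. The paper's exact score is not specified, so cosine similarity is used here as a stand-in, and the descriptors and image names are hypothetical:

```python
import math

def cosine_similarity(a, b):
    """Similarity score between two feature descriptors (e.g., region-based
    shape descriptors of a query and a database radiograph)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Rank database images against a query descriptor (values are made up)
query = [0.8, 0.1, 0.3]
database = {"img_a": [0.7, 0.2, 0.4], "img_b": [0.1, 0.9, 0.2]}
ranked = sorted(database,
                key=lambda name: cosine_similarity(query, database[name]),
                reverse=True)
print(ranked)  # ['img_a', 'img_b']
```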
A Comparative Study on the Methods Used for the Detection of Breast Cancerrahulmonikasharma
Breast cancer has become a leading cause of death among women in the world. At an initial stage, a tumor in the breast is hard to detect. Manual attempts have proven to be time-consuming and inefficient in many cases. Hence there is a need for efficient methods that diagnose cancerous cells without human involvement and with high accuracy. Mammography is a specialized X-ray imaging method using high-resolution film, so that it can detect tumors in the breast well. This paper describes a comparative study of various data mining methods for the detection of breast cancer using image processing techniques.
Robust breathing signal extraction from cone beam CT projections based on ada...Wookjin Choi
Robust breathing signal extraction from cone beam CT projections based on adaptive and global optimization techniques
Ming Chao, Jie Wei, Tianfang Li, Yading Yuan, Kenneth E Rosenzweig and Yeh-Chi Lo
Department of Radiation Oncology, Mount Sinai Medical Center, New York, NY 10029, USA
Department of Computer Science, City College of New York, New York, NY 10031, USA
Department of Radiation Oncology, University of Pittsburgh Medical Center, Pittsburgh, PA 15232, USA
Image segmentation is still an active area of research in computer vision, and hundreds of image segmentation techniques have been proposed by researchers. All proposed techniques have their own usability and accuracy. In this paper we present a review of some of the best existing lung nodule detection and segmentation techniques. Finally, we conclude by focusing on one of the best methods, which may offer high accuracy and can be used to detect very small lung nodules accurately.
Radiomics Analysis of Pulmonary Nodules in Low Dose CT for Early Detection of...Wookjin Choi
Purpose: To predict the histopathologic subtypes with poor surgery prognosis in early stage lung adenocarcinomas using CT and PET radiomics.
Methods: We retrospectively enrolled 53 patients with stage I lung adenocarcinoma who underwent both diagnostic CT and 18F-fluorodeoxyglucose (FDG) PET/CT before complete surgical resection of the tumors. Tumor segmentation was manually contoured by a physician on both the diagnostic CT and the attenuation CT of PET/CT. A total of 170 radiomics features were extracted from both PET and CT images to design predictive models for two histopathologic endpoints: (1) tumors with a solid or micropapillary predominant subtype (aggressiveness), and (2) tumors with a micropapillary component of more than 5% (MIP5). We used least absolute shrinkage and selection operator (LASSO) as a model building method coupled with a class separability feature selection (CSFS) method. For an unbiased model estimate, a 10-fold cross-validation approach was used. The area under the curve (AUC) and prediction accuracy were employed to evaluate the performance of the models. P-values were computed using the Wilcoxon rank-sum test.
Results: Of the 53 patients, 9 and 15 had tumors with aggressiveness and MIP5, respectively. For both endpoints, LASSO models with two PET radiomics features achieved the best performance. For aggressiveness, the LASSO model with PET Cluster Shade and PET 2D Variance resulted in 77.6±2.3% accuracy and 0.71±0.02 AUC (P = 0.011). For MIP5, the LASSO model with PET Eccentricity and PET Cluster Shade resulted in 69.6±3.1% accuracy and 0.68±0.04 AUC (P=0.014). The PET Cluster Shade was commonly selected in both models. Cluster shade is a texture feature that measures the skewness of the co-occurrence matrix. Higher PET cluster shade predicted that the tumor was more aggressive and more likely MIP5.
Conclusion: We showed that PET/CT radiomics features can predict tumor aggressiveness.
Funding Support, Disclosures, and Conflict of Interest: This work was supported in part by the National Cancer Institute Grants R01CA172638.
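The 10-fold cross-validation protocol used above for the unbiased model estimate reduces to a plain index split; a generic sketch (not the study's actual code):

```python
def k_fold_indices(n, k):
    """Split sample indices 0..n-1 into k contiguous folds and yield
    (train, test) index lists.  With n=53 and k=10 this mirrors the
    protocol described above; remainders go to the earlier folds."""
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    idx = list(range(n))
    start = 0
    for size in sizes:
        test = idx[start:start + size]
        train = idx[:start] + idx[start + size:]
        yield train, test
        start += size

folds = list(k_fold_indices(53, 10))
print(len(folds), sum(len(test) for _, test in folds))  # 10 53
```

In the actual study a LASSO model would be fit on each train split and scored on the held-out test split, and the per-fold metrics averaged.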
Role of Tomosynthesis in Assessing the Size of the Breast LesionApollo Hospitals
To assess the role of 3D tomosynthesis in the evaluation of the size of malignant breast lesions and to compare it with the size in 2D, Ultrasound and final Histopatholgy.
Enhancing Segmentation Approaches from Super Pixel Division Algorithm to Hidd...Christo Ananth
Christo Ananth, S. Amutha, K. Niha, Djabbarov Botirjon Begimovich, “Enhancing Segmentation Approaches from Super Pixel Division Algorithm to Hidden Markov Random Fields with Expectation Maximization (HMRF-EM)”, International Journal of Early Childhood Special Education, Volume 14, Issue 05, 2022, pp. 2400-2410.
Christo Ananth et al. discussed that in surgical planning and cancer treatment, it is crucial to segment and measure a liver tumor's volume accurately. Because it would involve automation, standardisation, and the incorporation of complete volumetric information, accurate automatic liver tumor segmentation would substantially affect the processes for therapy planning and follow-up reporting. Automatic liver tumor detection in CT scans is possible using hidden Markov random fields with expectation maximization (HMRF-EM). CT liver tissue segmentation is based on the HMRF model. When building an accurate HMRF model, an accurate initial image estimate is crucial; adaptive K-means clustering is often used for this initial estimation, and HMRF's performance can be greatly improved by the clustering. This project aims to segment liver tissue quickly. The paper proposes an adaptive K-means clustering approach for estimating liver images in the HMRF-EM model, fixing the flaws of the previous strategy. The current and proposed methods are compared, and the proposed method outperforms the currently used method.
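The K-means initial estimate that the HMRF-EM pipeline above refines can be illustrated with a scalar-intensity k-means; this is a generic sketch of the clustering step (assuming k ≥ 2), not the paper's adaptive variant:

```python
def kmeans_1d(values, k, iters=20):
    """Cluster scalar CT intensities into k tissue classes; the resulting
    class centers serve as the initial label estimate that an HMRF-EM
    pipeline then refines with spatial smoothness constraints."""
    srt = sorted(values)
    # Seed centers with evenly spaced order statistics (assumes k >= 2)
    centers = [srt[(len(srt) - 1) * i // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers

# Toy intensities with three tissue-like groups
print(kmeans_1d([1, 1, 2, 10, 11, 20, 21], k=3))  # centers near [1.3, 10.5, 20.5]
```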
PERFORMANCE EVALUATION OF TUMOR DETECTION TECHNIQUES ijcsa
Automatic segmentation of tumor plays a vital role in diagnosis and surgical planning. This paper deals
with techniques which providing solution for detecting hepatic tumor in Computed Tomography (CT)
images. The main aim of this work is to analyze performance of tumor detection techniques like Knowledge
Based Constraints, Graph Cut Method and Gradient Vector Flow active contour. These three techniquesare computed using sensitivity, specificity and accuracy. From the evaluated result, knowledge based constraints method is better than other graph cut method and gradient vector flow active contour.
Individually Optimized Contrast-Enhanced 4D-CT for Radiotherapy Simulation in...Wookjin Choi
Purpose/Objectives: To develop an individually optimized contrast-enhanced (CE) 4D-CT for radiotherapy simulation in pancreatic adenocarcinoma (PDA).
Materials/Methods: Ten PDA patients were enrolled and underwent three CT scans: a 4D-CT immediately following a CE 3D-CT, and an individually optimized CE 4D-CT using a test injection to estimate the peak contrast enhancement time and to optimize the delay time. Three physicians contoured the tumor and pancreatic tissues. We compared image quality scores, tumor volume, motion, image noise, tumor-to-pancreas contrast, and contrast-to- noise ratio (CNR) in the three CTs. We also evaluated inter-observer variations in contouring the tumor using simultaneous truth and performance level estimation (STAPLE).
Results: The average image quality scores for CE 3D-CT and CE 4D-CT were comparable (4.0 and 3.8, p=0.47), and both were significantly better than that for 4D-CT (2.6, p<0.001). The tumor-to- pancreas contrast in CE 3D-CT and CE 4D-CT were comparable (15.5 and 16.7 HU, p=0.71), and the later was significantly higher than that in 4D-CT (9.2 HU, p=0.03). Image noise in CE 3D-CT (12.5 HU) was significantly lower than that in CE 4D-CT (22.1 HU, p<0.001) and 4D-CT (19.4 HU, p=0.005). The CNR in CE 3D-CT and CE 4D-CT were comparable (1.4 and 0.8, p=0.23), and the former was significantly better than that in 4D-CT (0.6, p=0.04). The average tumor volume was smaller in CE 3D-CT (29.8 cm 3 ) and CE 4D-CT (22.8 cm 3 ) than in 4D-CT (42.0 cm 3 ), though the differences were not statistically significant. The tumor motion was comparable in 4D-CT and CE 4D-CT (7.2 and 6.2 mm, p=0.23). The inter-observer variations were comparable in CE 3D-CT and CE 4D-CT (Jaccard index 66.0% and 61.9%), and the former was significantly smaller than that of 4D-CT (55.6%, p=0.047).
Conclusions: The CE 4D-CT demonstrated largely comparable characteristics to the CE 3D-CT. It has high potential for simultaneously delineating the tumor and quantifying the tumor motion with a single scan.
Computer-aided diagnosis system for breast cancer based on the Gabor filter ...IJECEIAES
The most prominent reason for the death of women all over the world is breast cancer. Early detection of cancer helps to lower the death rate. Mammography scans determine breast tumors in the first stage. As the mammograms have slight contrast, thus, it is a blur to the radiologist to recognize micro growths. A computer-aided diagnostic system is a powerful tool for understanding mammograms. Also, the specialist helps determine the presence of the breast lesion and distinguish between the normal area and the mass. In this paper, the Gabor filter is presented as a key step in building a diagnostic system. It is considered a sufficient method to extract the features. That helps us to avoid tumor classification difficulties and false-positive reduction. The linear support vector machine technique is used in this system for results classification. To improve the results, adaptive histogram equalization pre-processing procedure is employed. Mini-MIAS database utilized to evaluate this method. The highest accuracy, sensitivity, and specificity achieved are 98.7%, 98%, 99%, respectively, at the region of interest (30×30). The results have demonstrated the efficacy and accuracy of the proposed method of helping the radiologist on diagnosing breast cancer.
The Advantages of Two Dimensional Techniques (2D) in Pituitary Adenoma TreatmentIOSR Journals
The purpose of the study is to evaluate the two dimensional dose distribution techniques in pituitary adenoma patient treatment in order to provide 2D dose coverage to the target volume while sparing organs at risk (OARs). The CT simulator was used to radiograph 300 patients of pituitary adenomas to conform 2D dose distribution planning inside the tumour bed , and its structures were delineated; including gross target volume (GTV), clinical target volume (CTV), and planning target volume (PTV)], as well as organs at risks (OARs) . Dose distribution analysis was edited to provide 2D dose coverage to the target while sparing organs at risk. The main results of the study were, 2D dose distribution plans increases the unnecessary dose to the critical organs according to their geographical location from the pituitary adenoma site, and the present study , concludes that when the tumour dose increases from 45 to 55 Gy there is a linear proportional increment of dose to the organs at risks, and when the dose is about 60 Gy in 2D, the increment of unnecessary dose to temporal lobe is 0.31 Gy, and to eye is0.34Gy, and to optic chiasm is 0.42 Gy respectively .New techniques, which will lessen the unnecessary dose to OARs, needed to be developed .
Robust Normal Lung CT Texture Features for the Prediction of Radiation-Induce...Wookjin Choi
Abstract
Purpose: Normal lung CT texture features have been used for the prediction of radiation-induced lung disease (RILD) - pneumonitis and fibrosis. For these features to be clinically useful, they should be robust (relatively invariant or unbiased) to tumor size variations and not correlated (non-redundant) with the normal lung volume of interest, i.e., volume of the peri-tumoral region.
Methods: CT images of 14 lung cancer patients were studied. Different sizes of gross tumor volumes (GTVs) were simulated with spheres (diameters 10 to 60 mm) and placed in the lung contralateral to the tumor. 27 texture features [nine from intensity histogram, eight from the gray-level co-occurrence matrix (GLCM) and ten from the gray-level run-length matrix (GLRM)] were extracted from the peri-tumoral region (uniform 30 mm expansion around the GTV in the lung). The Bland-Altman analysis was applied to measure the normalized range of agreement (nRoA) for each feature when GTV size varied. A feature was considered as robust when its nRoA was less than the threshold (100%), which was chosen at the nRoA of the volume of the peri-tumoral region with modification based on the cumulative graph of features vs. nRoA. A feature was regarded as not correlated with the volume of the peri-tumoral region when their correlation was lower than 0.70.
Results: 16 of the 27 texture features were identified as robust. All intensity histogram features were robust except sum and kurtosis. All GLCM features were robust except cluster shade, cluster prominence, and Haralick's correlation. Five GLRM features (two run-emphasis and three high gray-level emphasis features) were robust, while the other five (two nonuniformity and three low gray-level emphasis features) were not. None of the robust features was correlated with the volume of the peri-tumoral region. No feature showed statistically significant differences (P<0.05) by GTV location (upper vs. lower lobe).
Conclusion: We identified 16 robust normal lung CT texture features that can be further examined for the prediction of RILD. Particularly, GLRM high gray-level emphasis features were robust and characterized the radiologic manifestations of pulmonary abnormalities.
Quantitative Image Feature Analysis of Multiphase Liver CT for Hepatocellular...Wookjin Choi
To identify the effective quantitative image features (radiomics features) for prediction of response, survival, recurrence and metastasis of hepatocellular carcinoma (HCC) in radiotherapy.
Identification of Robust Normal Lung CT Texture FeaturesWookjin Choi
Normal lung CT texture features have been used for the prediction of radiation-induced lung disease (radiation pneumonitis and radiation fibrosis). For these features to be clinically useful, they need to be relatively invariant (robust) to tumor size and not correlated with normal lung volume.
http://scitation.aip.org/content/aapm/journal/medphys/43/6/10.1118/1.4955803
Framework for progressive segmentation of chest radiograph for efficient diag...IJECEIAES
Segmentation is one of the most essential steps in identifying objects of interest in a chest x-ray. A review of existing segmentation techniques for chest x-rays and other vital organs was performed, with the main objective of determining whether existing systems offer accuracy at the cost of recursive, complex operations. The proposed system introduces a framework that offers a good balance between computational performance and segmentation performance. Given an input chest x-ray, the system performs a progressive search for similar images on the basis of a similarity score with the queried image. A region-based shape descriptor is applied to extract features exclusively for identifying the lung region within the thoracic region, followed by contour adjustment. The final segmentation outcome shows accurate identification and segmentation of the apical and costophrenic regions of the lung. Comparative analysis shows that the proposed system offers better segmentation performance than existing systems.
A Comparative Study on the Methods Used for the Detection of Breast Cancerrahulmonikasharma
Among women worldwide, breast cancer has become the leading cause of cancer death. At an initial stage, a tumor in the breast is hard to detect, and manual attempts have proven time-consuming and inefficient in many cases. Hence there is a need for efficient methods that diagnose cancerous cells without human involvement and with high accuracy. Mammography is a specialized X-ray imaging method using high-resolution film, so that it can detect tumors in the breast well. This paper describes a comparative study of various data mining methods for the detection of breast cancer using image processing techniques.
Robust breathing signal extraction from cone beam CT projections based on ada...Wookjin Choi
Robust breathing signal extraction from cone beam CT projections based on adaptive and global optimization techniques
Ming Chao, Jie Wei, Tianfang Li, Yading Yuan, Kenneth E Rosenzweig and Yeh-Chi Lo
Department of Radiation Oncology, Mount Sinai Medical Center, New York, NY 10029, USA
Department of Computer Science, City College of New York, New York, NY 10031, USA
Department of Radiation Oncology, University of Pittsburgh Medical Center, Pittsburgh, PA 15232, USA
Image segmentation remains an active and relevant research area in computer vision, and hundreds of image segmentation techniques have been proposed by researchers. Each proposed technique has its own usability and accuracy. In this paper we present a review of some of the best existing lung nodule detection and segmentation techniques. Finally, we conclude by focusing on one of the best methods, which may offer high accuracy and can be used to detect very small lung nodules accurately.
Radiomics Analysis of Pulmonary Nodules in Low Dose CT for Early Detection of...Wookjin Choi
Purpose: To predict the histopathologic subtypes with poor surgery prognosis in early stage lung adenocarcinomas using CT and PET radiomics.
Methods: We retrospectively enrolled 53 patients with stage I lung adenocarcinoma who underwent both diagnostic CT and 18F-fluorodeoxyglucose (FDG) PET/CT before complete surgical resection of the tumors. Tumor segmentation was manually contoured by a physician on both the diagnostic CT and the attenuation CT of PET/CT. A total of 170 radiomics features were extracted from both PET and CT images to design predictive models for two histopathologic endpoints: (1) tumors with a solid or micropapillary predominant subtype (aggressiveness), and (2) tumors with a micropapillary component of more than 5% (MIP5). We used the least absolute shrinkage and selection operator (LASSO) as the model-building method, coupled with a class-separability feature selection (CSFS) method. For an unbiased model estimate, a 10-fold cross-validation approach was used. The area under the curve (AUC) and prediction accuracy were employed to evaluate model performance. P-values were computed using the Wilcoxon rank-sum test.
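The modelling step, an L1-penalized (LASSO-style) model estimated with 10-fold cross-validation and scored by AUC, can be sketched in scikit-learn. The data below are synthetic stand-ins with the cohort's dimensions, and the CSFS feature-selection step is not reproduced; this is not the authors' code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_patients, n_features = 53, 170            # mirrors the cohort in the abstract
X = rng.normal(size=(n_patients, n_features))
# synthetic endpoint driven by two features, standing in for the real labels
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n_patients) > 0).astype(int)

# an L1 penalty drives most coefficients to zero, yielding a sparse model
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
aucs = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"10-fold AUC: {aucs.mean():.2f} +/- {aucs.std():.2f}")
```

With far more features (170) than patients (53), the sparsity of the L1 penalty is what keeps such a model from overfitting, which is presumably why LASSO was chosen.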
Results: Of the 53 patients, 9 and 15 had tumors with aggressiveness and MIP5, respectively. For both endpoints, LASSO models with two PET radiomics features achieved the best performance. For aggressiveness, the LASSO model with PET Cluster Shade and PET 2D Variance resulted in 77.6±2.3% accuracy and 0.71±0.02 AUC (P = 0.011). For MIP5, the LASSO model with PET Eccentricity and PET Cluster Shade resulted in 69.6±3.1% accuracy and 0.68±0.04 AUC (P=0.014). The PET Cluster Shade was commonly selected in both models. Cluster shade is a texture feature that measures the skewness of the co-occurrence matrix. Higher PET cluster shade predicted that the tumor was more aggressive and more likely MIP5.
Conclusion: We showed that PET/CT radiomics features can predict tumor aggressiveness.
Funding Support, Disclosures, and Conflict of Interest: This work was supported in part by the National Cancer Institute Grants R01CA172638.
Role of Tomosynthesis in Assessing the Size of the Breast LesionApollo Hospitals
To assess the role of 3D tomosynthesis in evaluating the size of malignant breast lesions and to compare it with the size in 2D, on ultrasound and on final histopathology.
Enhancing Segmentation Approaches from Super Pixel Division Algorithm to Hidd...Christo Ananth
Christo Ananth, S. Amutha, K. Niha, Djabbarov Botirjon Begimovich, “Enhancing Segmentation Approaches from Super Pixel Division Algorithm to Hidden Markov Random Fields with Expectation Maximization (HMRF-EM)”, International Journal of Early Childhood Special Education, Volume 14, Issue 05, 2022,pp. 2400-2410.
Christo Ananth et al. discussed that in surgical planning and cancer treatment it is crucial to segment and measure a liver tumor's volume accurately. Because it would involve automation, standardisation, and the incorporation of complete volumetric information, accurate automatic liver tumor segmentation would substantially affect the processes for therapy planning and follow-up reporting. Automatic liver tumor detection in CT scans is possible using hidden Markov random fields with expectation maximization (HMRF-EM), although a CT scan of the liver may be too low-resolution for this approach alone. CT liver tissue segmentation is based on the HMRF model, and an accurate initial image estimate is crucial when building an accurate HMRF model; adaptive K-means clustering is often used for this initial estimation, and such clustering can greatly improve HMRF's performance. This project aims to segment liver tissue quickly. The paper proposes an adaptive K-means clustering approach for estimating liver images in the HMRF-EM model, fixing flaws in the previous strategy. A comparison of the current and proposed methods shows that the proposed method outperforms the currently used one.
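The K-means initialization step that HMRF-EM then refines can be sketched on synthetic intensities; this is a plain (non-adaptive) 1-D K-means, and the names and intensity values are illustrative rather than taken from the paper.

```python
import numpy as np

def kmeans_initial_labels(image, k=3, iters=20):
    """1-D K-means on voxel intensities, quantile-initialized: a common way
    to get the initial label estimate that HMRF-EM refines. Illustrative
    sketch; the paper's *adaptive* variant is not reproduced here."""
    pixels = image.reshape(-1).astype(float)
    # spread the initial centers across the intensity range
    centers = np.quantile(pixels, (np.arange(k) + 0.5) / k)
    for _ in range(iters):
        # assign each pixel to its nearest center, then recompute the means
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean()
    return labels.reshape(image.shape), np.sort(centers)

# synthetic "CT slice": three intensity populations (background, liver, tumor)
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(m, 5, 400) for m in (20, 90, 160)]).reshape(40, 30)
labels, centers = kmeans_initial_labels(img)
print(centers.round(1))  # close to [20, 90, 160]
```

The resulting label map would serve only as the starting point; the HMRF-EM stage then adds the spatial smoothness that pure intensity clustering lacks.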
BATCH NORMALIZED CONVOLUTION NEURAL NETWORK FOR LIVER SEGMENTATIONsipij
With rapid technological development affecting all aspects of life, it has become important to develop the clinical field as well, including the diagnosis on which treatment depends; successful treatment relies on preoperative planning, for example understanding the complex internal structure of the liver and precisely localizing the liver surface and its tumors, and various algorithms have been proposed for automatic liver segmentation. In this paper, we propose a Batch Normalization After All - Convolutional Neural Network (BATA-Convnet) model to segment liver CT images using a deep learning technique. The proposed liver segmentation model consists of four main steps: pre-processing, training the BATA-Convnet, liver segmentation, and a post-processing step to maximize result efficiency. The Medical Image Computing and Computer Assisted Intervention (MICCAI) dataset and the 3D Image Reconstruction for Comparison of Algorithm Database (3D-IRCAD) were used in the experiments. The average results on MICCAI are 0.91 for Dice, 13.44% for VOE, 0.23% for RVD, 0.29 mm for ASD, 1.35 mm for RMSSD and 0.36 mm for MaxASD. The average results on the 3D-IRCAD dataset are 0.84 for Dice, 13.24% for VOE, 0.16% for RVD, 0.32 mm for ASD, 1.17 mm for RMSSD and 0.33 mm for MaxASD.
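For reference, the overlap metrics reported above (Dice and VOE) can be computed from binary masks as below; the helper and the toy masks are illustrative only, and the surface-distance metrics (RVD, ASD, RMSSD, MaxASD) are omitted.

```python
import numpy as np

def dice_and_voe(pred, truth):
    """Dice coefficient and Volumetric Overlap Error (%) for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2.0 * inter / (pred.sum() + truth.sum())
    voe = 100.0 * (union - inter) / union   # expressed as a percentage
    return dice, voe

# toy ground truth: a 4x4 square; prediction: the same square shifted one row
truth = np.zeros((8, 8), dtype=bool); truth[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool);  pred[3:7, 2:6] = True
dice, voe = dice_and_voe(pred, truth)
print(dice, voe)  # -> 0.75 40.0
```

Note that Dice is a unitless ratio in [0, 1], while VOE is conventionally quoted as a percentage, which matches how the abstract reports them.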
Enhancing Segmentation Approaches from Fuzzy-MPSO Based Liver Tumor Segmentat...Christo Ananth
Christo Ananth, M Kameswari, Densy John Vadakkan, Dr. Niha.K., “Enhancing Segmentation Approaches from Fuzzy-MPSO Based Liver Tumor Segmentation to Gaussian Mixture Model and Expected Maximization”, Journal Of Algebraic Statistics, Volume 13, Issue 2, June 2022,pp. 788-797.
Description:
Christo Ananth et al. discussed that liver tumor segmentation in medical images has been widely studied in recent years, and level set models in particular show promise, with the advantages of global optima and practical efficiency. A Gaussian mixture model (GMM) with expectation maximization for liver tumor segmentation is introduced; level set models are used in the initial liver segmentation step. The proposed strategy uses Gaussian mixture models to model the segmented liver image, turning the segmentation problem into maximum-likelihood parameter estimation via the expectation maximization (EM) algorithm. According to the comparative results, the proposed methodology outperformed existing techniques by a significant margin.
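A minimal sketch of GMM-plus-EM intensity segmentation, assuming synthetic intensity values and scikit-learn's EM-based GaussianMixture rather than the authors' implementation:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy stand-in for a liver ROI: two Gaussian intensity populations
# (parenchyma vs. a hypodense tumor). Assumed values, not from the paper.
rng = np.random.default_rng(0)
roi = np.concatenate([rng.normal(100, 8, 3000),    # parenchyma
                      rng.normal(60, 8, 500)])     # hypodense tumor

# EM fits the mixture parameters; each pixel is then labeled by max posterior
gmm = GaussianMixture(n_components=2, random_state=0).fit(roi.reshape(-1, 1))
labels = gmm.predict(roi.reshape(-1, 1))
tumor = int(np.argmin(gmm.means_.ravel()))         # darker component = tumor
print("estimated means:", np.sort(gmm.means_.ravel()).round(1))
print("pixels labeled tumor:", (labels == tumor).sum())
```

This is exactly the "segmentation as maximum-likelihood parameter estimation" idea from the abstract: EM estimates the component means, variances and weights, and the per-pixel posterior provides the labels.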
Reinforcing optimization enabled interactive approach for liver tumor extrac...IJECEIAES
Detecting liver abnormalities is a difficult task in radiation planning and treatment. Modern developments integrate medical imaging into computer techniques, which has had a monumental effect on how medical images are interpreted and analyzed. In many circumstances, manual segmentation of the liver from computed tomography (CT) imaging is required, yet it cannot provide satisfactory results. There are further difficulties in segmenting the liver due to its uneven shape, fuzzy boundary and complicated structure, which makes it necessary to enable optimization in an interactive segmentation approach. The main objective of reinforcing optimization is to search for the optimal threshold and reduce the chance of falling into a local optimum using a survival-of-the-fittest (SOF) technique. The proposed methodology makes use of a pre-processing stage and reinforcing meta-heuristic optimization based on fuzzy c-means (FCM) to obtain detailed information about the image. This information gives the optimal threshold value that is used for segmenting the region of interest with minimum user input, and suspicious areas are recognized from the segmented output. Both public and simulated datasets were used for the experiments. To validate the effectiveness of the proposed strategy, performance criteria such as the Dice coefficient, mode and user interaction level are measured and compared with state-of-the-art algorithms.
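The FCM core of such a pipeline can be sketched in a few lines of NumPy; this is the standard fuzzy c-means update loop on 1-D intensities, without the paper's reinforcing meta-heuristic optimizer, and all values are illustrative.

```python
import numpy as np

def fuzzy_c_means(x, c=2, m=2.0, iters=60, seed=0):
    """Plain fuzzy c-means on 1-D intensities (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                        # memberships sum to 1 per pixel
    p = 2.0 / (m - 1.0)
    for _ in range(iters):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)   # fuzzily weighted cluster means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        inv = d ** -p
        u = inv / inv.sum(axis=0)             # standard FCM membership update
    return centers, u

# two well-separated toy intensity populations
x = np.concatenate([np.full(50, 10.0), np.full(50, 50.0)])
centers, u = fuzzy_c_means(x)
print(np.sort(centers))  # close to [10, 50]
```

Unlike hard K-means, each pixel keeps a membership degree in every cluster, which is what makes fuzzy boundaries (like the liver's) tractable; the meta-heuristic in the paper would then tune the threshold applied to these memberships.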
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology
Automatic whole-body bone scan image segmentation based on constrained local ...journalBEEI
In Indonesia, cancer is very burdensome financially, for sufferers as well as for the country. Increasing access to early detection of cancer can be a solution to prevent the situation from worsening. For the problem of cancer lesion detection, a whole-body bone scan image is the primary nuclear medicine modality for detecting cancer lesions on bone. High segmentation accuracy of the whole-body bone scan image is therefore a crucial step in building the shape model of predefined regions of the bone scan image where metastases are predicted to appear frequently. In this article, we propose an automatic whole-body bone scan image segmentation based on the constrained local model (CLM). We determine 111 landmark points on the bone scan image as input to the model-building step. The resulting shape and texture models are further used in the fitting step to estimate the landmark points of the predefined regions. We use the CLM-based approach with regularized landmark mean-shift (RLMS) to lessen the effect of ambiguity, with which the general CLM-based approach struggles. The experimental results show that our proposed image segmentation system achieves higher performance than the general CLM-based approach.
Breast cancer diagnosis via data mining performance analysis of seven differe...cseij
According to the World Health Organization (WHO), breast cancer is the top cancer in women in both the developed and the developing world. Increased life expectancy, urbanization and the adoption of western lifestyles trigger the occurrence of breast cancer in the developing world. Most cancer cases are diagnosed in the late phases of the illness, so early detection is crucial to improving breast cancer outcomes and survival.
This study is intended to contribute to the early diagnosis of breast cancer through an analysis of breast cancer diagnoses for patients. First, data about patients whose cancers have already been diagnosed are gathered and arranged; then, whether other patients have breast cancer is predicted on the basis of those data. Predictions for the other patients are made with seven different algorithms, and their accuracies are reported. The patient data were taken from the UCI Machine Learning Repository, courtesy of Dr. William H. Wolberg of the University of Wisconsin Hospitals, Madison. During the prediction process, the RapidMiner 5.0 data mining tool is used to apply the desired algorithms.
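A comparable experiment is easy to sketch with scikit-learn, which bundles a UCI Wisconsin breast cancer dataset from the same source; the four classifiers below are representative algorithm families, not the study's exact seven RapidMiner operators.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

# scikit-learn's bundled copy of the UCI Wisconsin (diagnostic) data
X, y = load_breast_cancer(return_X_y=True)

models = {
    "logistic regression": LogisticRegression(max_iter=5000),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "naive Bayes": GaussianNB(),
    "k-nearest neighbors": KNeighborsClassifier(),
}
scores = {}
for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), clf)   # scale features, then classify
    scores[name] = cross_val_score(pipe, X, y, cv=10).mean()
    print(f"{name:22s} 10-fold accuracy = {scores[name]:.3f}")
```

Reporting a cross-validated accuracy per algorithm, as above, is the same comparison protocol the abstract describes, just in code rather than in a RapidMiner workflow.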
Comparative analysis of edge based and region based active contour using leve...eSAT Publishing House
Similar to ENHANCING SEGMENTATION APPROACHES FROM GAUSSIAN MIXTURE MODEL AND EXPECTED MAXIMIZATION TO SUPER PIXEL DIVISION ALGORITHM (20)
Call for Papers - Journal of Electrical Systems (JES), E-ISSN: 1112-5209, ind...Christo Ananth
At the forefront of technological innovation and scholarly discourse, the Journal of Electrical Systems (JES) is a peer-reviewed publication dedicated to advancing the understanding and application of electrical systems, communication systems and information science. With a commitment to excellence, we provide a platform for researchers, academics, and professionals to contribute to the ever-evolving field of electrical engineering, communication technology and Information Systems.
The mission of JES is to foster the exchange of knowledge and ideas in electrical and communication systems, promoting cutting-edge research and facilitating discussions that drive progress in the field. We aim to be a beacon for those seeking to explore, challenge, and revolutionize the way we harness, distribute, and utilize electrical energy and information systems.
Call for Papers - Utilitas Mathematica, E-ISSN: 0315-3681, indexed in ScopusChristo Ananth
Utilitas Mathematica Journal is a broad-scope journal that publishes original research and review articles on all aspects of both pure and applied mathematics. It is the official publication of the Utilitas Mathematica Academy, Canada, and enjoys a good reputation and popularity at the international level in terms of research papers and worldwide distribution. The journal offers selected original research in pure and applied mathematics and statistics, and its coverage extends to operations research, mathematical economics, mathematical biology and computer science. The leadership of the Utilitas Mathematica Journal commits to strengthening our professional community by making it more just, equitable, diverse, and inclusive. We affirm that our mission, to promote the practice and profession of statistics, can be realized only by fully embracing justice, equity, diversity, and inclusivity in all of our operations. Individuals embody many traits, so the leadership will work with the members of UMJ to create and sustain responsive, flourishing, and safe environments that support individual needs, stimulate intellectual growth, and promote professional advancement for all.
Call for Chapters- Edited Book: Quantum Networks and Their Applications in AI...Christo Ananth
The research on Quantum Networked Artificial Intelligence is at the intersection of Quantum Information Science (QIS), artificial intelligence, soft computing, computational intelligence, machine learning, deep learning, optimization, etc. It touches on many important parts of near-term quantum computing and Noisy Intermediate-Scale Quantum (NISQ) devices. The research on quantum artificial intelligence is grounded in theories, modelling, and significant studies on hybrid classical-quantum algorithms using classical simulations, IBM Q services, PennyLane, Google Cirq, the D-Wave quantum annealer, etc. So far, the research on quantum artificial intelligence has given us the building blocks to achieve quantum advantage, solving problems in combinatorial optimization, soft computing, deep learning, and machine learning much faster than traditional classical computing. Solving these problems is important for making quantum computing useful for noise-resistant large-scale applications. This makes it much easier to see the big picture and helps with cutting-edge research across the quantum stack, making it an important part of any QIS effort. Researchers, almost daily, are making advances in the engineering and scientific challenges to create practical quantum networks powered with artificial intelligence.
Call for Chapters- Edited Book: Real World Challenges in Quantum Electronics ...Christo Ananth
Most experts would consider this the biggest challenge. Quantum computers are extremely sensitive to noise and errors caused by interactions with their environment. This can cause errors to accumulate and degrade the quality of computation. Developing reliable error correction techniques is therefore essential for building practical quantum computers. While quantum computers have shown impressive performance for some tasks, they are still relatively small compared to classical computers. Scaling up quantum computers to hundreds or thousands of qubits while maintaining high levels of coherence and low error rates remains a major challenge. Developing high-quality quantum hardware, such as qubits and control electronics, is a major challenge. There are many different qubit technologies, each with its own strengths and weaknesses, and developing a scalable, fault-tolerant qubit technology is a major focus of research. Funding agencies, such as government agencies, are rising to the occasion to invest in tackling these quantum computing challenges. Researchers — almost daily — are making advances in the engineering and scientific challenges to create practical quantum computers
Call for Chapters- Edited Book: Real-World Applications of Quantum Computers ...Christo Ananth
It is no surprise that Quantum Computing will prove to be a big change for the world. The practical examples of quantum computing can prove to be a good substitute for traditional computing methods. Quantum computing can be applied to many concepts in today's era, when technology has grown by leaps and bounds. It has a wide range of applications, from cryptography, climate change and weather forecasting, drug development and discovery, and financial modeling to artificial intelligence. Giant firms have already begun applying quantum computing in the field of artificial intelligence. The search algorithms of today are mostly designed according to classical computing methods. By comparing quantum computers with data mining against their counterpart systems, we are able to understand their significance, thereby applying new techniques to obtain new real-time results and solutions.
Call for Papers- Thematic Issue: Food, Drug and Energy Production, PERIÓDICO ...Christo Ananth
Published since 2004, Periódico Tchê Química (PQT) is a triannual (published every four months), international, fully peer-reviewed, open-access journal that welcomes high-quality, theoretically informed publications in the multi- and interdisciplinary fields of Chemistry, Biology, Physics, Mathematics, Pharmacy, Medicine, Engineering, Agriculture and Education in Science.
Researchers from all countries are invited to publish on its pages. The Journal is committed to achieving a broad international appeal, attracting contributions, and addressing issues from a range of disciplines. The Periódico Tchê Química is a double-blind peer-review journal dedicated to express views on the covered topics, thereby generating a cross current of ideas on emerging matters
Call for Papers - PROCEEDINGS ON ENGINEERING SCIENCES, P-ISSN-2620-2832, E-IS...Christo Ananth
Proceedings on Engineering Sciences examines new research and development at the engineering. It provides a common forum for both front line engineering as well as pioneering academic research. The journal's multidisciplinary approach draws from such fields as Automation, Automotive engineering, Business, Chemical engineering, Civil engineering, Control and system engineering, Electrical and electronic engineering, Electronics, Environmental engineering, Industrial and manufacturing engineering, Industrial management, Information and communication technology, Management and Accounting, Management and quality studies, Management Science and Operations Research, Materials engineering, Mechanical engineering, Mechanics of Materials, Mining and energy, Safety, Risk, Reliability, and Quality, Software engineering, Surveying and transport, Architecture and urban engineering.
Call for Papers - Onkologia i Radioterapia, P-ISSN-1896-8961, E-ISSN 2449-916...Christo Ananth
Onkologia i Radioterapia is an international peer-reviewed journal which publishes both clinical and pre-clinical research related to cancer. The journal also provides the latest information in the field of oncology and radiotherapy to clinical practitioners as well as basic researchers. Submissions for publication can be made through online submission, the Editorial Manager system, or by email as an attachment to the journal office. For any issue, the journal office can be contacted through email or phone for instant resolution. Onkologia i Radioterapia is a peer-reviewed, Scopus-indexed medical journal publishing original scientific papers (experimental, clinical, laboratory), reviews and case reports in the field of oncology and radiotherapy. In addition, it publishes letters to the Editorial Board, reports on scientific conferences, book reviews, as well as announcements about planned congresses and scientific meetings. Oncology and Radiotherapy appears four times a year. All articles published with www.itmedical.pl and www.medicalproject.com.pl are now available on our new website.
Call for Papers - Journal of Indian School of Political Economy, E-ISSN 0971-...Christo Ananth
The journal is published every quarter and contains 200 pages in each issue. It is devoted to the study of Indian economy, polity and society. Research papers, review articles, book reviews are published in the journal. All research papers published in the journal are subject to an intensive refereeing process. Each issue of the journal also includes a section on documentation, which reproduces extensive excerpts of relevant reports of committees, working groups, task forces, etc., which may not be readily accessible, official documents compiled from scattered electronic and/or other sources and statistical supplement for ready reference of the readers. It is now in its nineteenth year of publication. So far, five special issues have been brought out, namely: (i) The Scheduled Castes: An Inter-Regional Perspective, (ii) Political Parties and Elections in Indian States : 1990-2003, (iii) Child Labour, (iv) World Trade Organisation Agreements, and (v) Basel-II and Indian Banks
Call for Papers - Journal of Ecohumanism (JoE), ISSN (Print): 2752-6798, ISSN...Christo Ananth
Journal of Ecohumanism is an Open Access international peer-reviewed journal for scholars, researchers, and students who are interested in the fields of Environmental Humanities, Ecohumanism, Ecology, Literary Theory and Cultural Criticism, Economic and Business Studies, Law and Legal Studies in a broad interdisciplinary field of Social Sciences and Humanities. Journal of Ecohumanism is an Open Access peer reviewed journal, allowing users to freely access, download, copy, distribute, print, search, or link to full-text articles for any lawful purpose without requiring permission from the publisher or author. JoE follows a strict double, blind review policy for all the submissions which is embedded in our general publication ethics and supported by rigorous academic scrutiny of papers published. Materials published in the journal do not necessarily represent the views of its editorial board and reviewers
ENHANCING SEGMENTATION APPROACHES FROM GAUSSIAN MIXTURE MODEL AND EXPECTATION MAXIMIZATION TO SUPERPIXEL DIVISION ALGORITHM

[SYLWAN, 164(4)]. ISI Indexed, Apr 2020

Christo Ananth¹, D. R. Denslin Brabin²
¹College of Engineering, AMA International University, Kingdom of Bahrain
²Computer Science and Engineering, Madanapalle Institute of Technology and Science, Andhra Pradesh, India

Abstract
Automatic liver tumor segmentation would strongly influence liver treatment planning and follow-up assessment, thanks to the standardization and integration of full image information. In this work, we develop a fully automatic method for liver tumor segmentation in CT images. Initial liver segmentation consists of applying an active contour method. After extracting the liver, a superpixel division algorithm is applied to segment the liver tumor efficiently. In the proposed work, we investigate these techniques in order to improve segmentation of different regions of the CT images. Experimental results show that the proposed method is accurate for liver tumor segmentation.
Keywords: Active pixel, Gaussian mixture model, Cellular Neural Networks, Level set,
Expectation-maximization algorithm.
I. INTRODUCTION

According to the World Health Organization, liver cancer is the second most common cause of cancer-induced deaths. Hepatocellular carcinoma (HCC) is the most common type of primary liver cancer and the sixth most prevalent cancer overall [20]. Moreover, the liver is also a common site for secondary tumors. Liver treatment planning strategies would benefit from an accurate and fast lesion segmentation that allows subsequent determination of volume- and surface-based information. In addition, a standardized and automatic segmentation method would enable a more reliable classification of treatment response. Liver tumors show high variability in their shape, appearance and localization. They can be either hypodense (appearing darker than the surrounding healthy liver parenchyma) or hyperdense (appearing brighter), and can also have a rim due to contrast agent accumulation, calcification or necrosis. The individual appearance depends on lesion type, state and imaging (hardware, settings, contrast method and timing), and can vary significantly from patient to patient. This high variability makes liver lesion segmentation a challenging task in practice.
Liver cancer is one of the diseases with the highest mortality rate in the world, and its morbidity and mortality increase steadily every year. Computed tomography (CT) is the most widely used imaging technique for screening, diagnosing, staging, and even prognosis assessment of liver cancer. Segmentation of liver lesions can help physicians diagnose cancer and choose appropriate treatment options more conveniently, and can also quickly assess the effectiveness of surgical treatment. However, manually delineating the tumor across a large number of CT slices is very time-consuming and depends on the experience of the physician; it is highly subjective and susceptible to variability between different physicians. It is therefore highly desirable to design an objective and accurate scheme for automatic liver tumor segmentation to help physicians better interpret CT images. Nevertheless, because the number, location, size and shape of liver tumors differ substantially among patients, and because the tumor boundary is often blurred and the contrast between a tumor and its surrounding tissues is low, accurate segmentation of liver tumors remains a difficult task.
To address this issue, researchers have invested considerable effort and proposed numerous methods. During the past few decades, they mostly focused on developing algorithms such as level sets, watersheds, statistical shape models, region growing, active contour models, threshold processing, graph cuts and traditional machine learning techniques that require manually extracted tumor features. There are many popular superpixel approaches, for example normalized cut [23], SLIC [24], LSC [25], ERS [26], SEEDS [27], mean shift [28], Turbopixel [29], graph cuts [30], and pseudo-boolean optimization (PB) [31]. Each algorithm has its own advantages and disadvantages for superpixel segmentation; nevertheless, it is still challenging to develop a high-quality, real-time superpixel algorithm that exhibits good boundary adherence, compactness, regular shapes and low computational complexity. In order to satisfy these requirements, we propose a real-time superpixel segmentation algorithm based on Density-Based Spatial Clustering of Applications with Noise (DBSCAN) [32] to achieve better performance than state-of-the-art techniques. This is the first time that the DBSCAN clustering algorithm has been applied to superpixel segmentation. An ideal superpixel method needs not only to fulfill the requirement of good boundary adherence, but also to be efficient. Since superpixels are used as a preprocessing step in vision applications, computing high-quality superpixels with little computation is preferred. In this work, we propose a new real-time superpixel method using DBSCAN [32] clustering, which inherits the advantages of existing superpixel algorithms (such as SLIC) and carries them further. Our DBSCAN superpixel segmentation algorithm is not only more efficient but also more accurate than previous superpixel algorithms.
II. RELATED WORK

Zhou et al. [1] proposed a unified level set method (LSM) for liver tumor segmentation. They used local pixel intensity clustering combined with a hidden Markov random field to construct a unified LSM. Regional information and edge information were then used to capture the tumor contour, so that the problem of edge leakage could be resolved. Yan et al. [2] proposed a semi-automatic segmentation method based on the watershed transform. They first placed seed points manually in the tumor area as markers, then performed a watershed transform to delineate and extract the tumor contour in the image. In this way, the density information of the tumor can be acquired as a threshold to separate the liver lesion from its neighboring tissues; the boundary of the segmented lesions is then refined for accurate results. Massoptier et al. [3] proposed an automatic liver tumor segmentation algorithm based on a statistical shape model, which used the gradient vector flow active contour technique to obtain smooth and natural liver tumor segmentation results without the need for user interaction. Wong et al. [4] proposed a semi-automatic tumor segmentation method based on region growing. They first outline the region of interest of the tumor manually and compute the seed point and feature vector within it, then a region growing algorithm is performed to mark the tumor voxels. Incorporating knowledge-based constraints into the growing method guarantees that the segmented tumor size and shape stay within a reasonable range. Yim et al. [5] proposed a segmentation method based on the active contour model, which achieves liver lesion segmentation by alternating manual initialization and elliptical initialization of the active contour; through this method they obtained good segmentation results. Park et al. [6] proposed a statistical optimal thresholding method for tumor segmentation. They use techniques such as maximum a posteriori decision and binary morphological filtering to segment the liver, then use mixture probability densities and the minimum total probability error to compute the optimal threshold. Finally, segmentation of the liver tumor is achieved by applying the computed threshold. Linguraru et al. [7] proposed a method based on graph cut optimization. They used a shape parameterization technique to recognize the liver surface contours, which can correct abnormal tumor segmentation situations; the tumor segmentation is then achieved by a shape-constrained graph cut algorithm. Recently, traditional machine learning based image segmentation techniques have played an active role in liver tumor segmentation. Most of these techniques need manually designed tumor feature extraction strategies and a model trained on the features so that the model can recognize tumor pixels. Huang et al. [8] proposed a liver tumor segmentation method based on the extreme learning machine (ELM), which classifies tumor and non-tumor voxels by training classifiers on random feature subspace ensembles. The ELM is chosen as the base classifier for model training; to further improve segmentation performance, a voting mechanism is introduced to decide the final classification from these base classifiers. Zhou et al. [9] proposed a support vector machine (SVM) based technique for tumor segmentation. They first trained an SVM model to segment the tumor region from a single slice and extracted its contour through morphological operations. The contour is then projected to the neighboring slices for further resampling, learning, and classification, repeating this procedure until all slices are processed. Most machine learning based techniques achieve better performance than traditional ones, yet they still struggle to learn accurate tumor features and remain vulnerable to data variations.
Recently, deep learning has entered a variety of applications and surpassed state-of-the-art performance in many fields, such as image detection, classification, and segmentation [10–15], which also encourages us to use this technique in the liver tumor segmentation task. Many researchers have already applied deep learning methods to liver tumor segmentation. Li et al. [16] proposed an H-DenseUNet which consists of a 2D UNet and a 3D UNet for liver tumor segmentation. The 2D UNet is used to extract tumor features in a single slice while the 3D UNet is designed to learn the spatial information of the tumor between slices. By designing a hybrid feature fusion layer to jointly optimize the feature representation within and between slices, they obtained a satisfactory result in the liver tumor segmentation challenge. Christ et al. [17] proposed a method based on cascading two fully convolutional neural networks (FCN). They first trained an FCN to segment the liver from the abdominal images and used the segmented liver region as input to the second segmentation stage; the tumor segmentation results are then obtained by the second FCN. Finally, they used a 3D conditional random field to improve the tumor segmentation results. Sun et al. [18] proposed a multi-channel fully convolutional network (MC-FCN) based liver tumor segmentation technique. They designed an MC-FCN to train on contrast-enhanced CT images at different imaging phases, because each phase of the data provides unique information about the pathological features of the tumor. Specifically, they performed network training for each phase of the data in a single channel, and then fused the high-level feature layers of the different channels to realize the multi-channel training strategy. Experimental results showed the designed network can capture more comprehensive tumor features and improve the model performance to some degree. Yuan et al. [19] developed a hierarchical framework of deep convolution-deconvolution neural networks (CDNN) for segmentation of liver tumors. They first trained a CDNN model to segment the liver region from the entire abdominal CT sequence, then performed histogram equalization enhancement on the liver region and treated it as input to the next CDNN for tumor segmentation. They replaced the loss function with the Jaccard distance during training to eliminate the sample reweighting effect and achieved outstanding segmentation results.

The remainder of this article is organized as follows. In Section 3, we describe the details of the proposed framework. The results are presented in Section 4, and conclusions in Section 5.
III. PROPOSED METHOD

a. PRE-PROCESSING

CT images usually contain considerable noise, so we need to smooth the image, which also smooths the intensity distribution of the liver; this pre-processing step is therefore necessary for accurate liver segmentation. A morphological operation is then applied to extract the mask image from the binary image.
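As an illustration of the mask-extraction step (not the authors' exact pipeline; the function names and the 3×3 structuring element are our assumptions), a morphological opening can be sketched in pure NumPy. Opening — erosion followed by dilation — removes small speckles from a binary liver mask while preserving the main region:

```python
import numpy as np

def erode(mask, k=3):
    # Binary erosion with a k x k square structuring element:
    # a pixel stays True only if its whole k x k window is True.
    pad = k // 2
    padded = np.pad(mask.astype(bool), pad, constant_values=False)
    out = np.ones_like(mask, dtype=bool)
    for dy in range(k):
        for dx in range(k):
            out &= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def dilate(mask, k=3):
    # Binary dilation: a pixel becomes True if any pixel in its window is True.
    pad = k // 2
    padded = np.pad(mask.astype(bool), pad, constant_values=False)
    out = np.zeros_like(mask, dtype=bool)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def opening(mask, k=3):
    # Opening = erosion then dilation; speckles smaller than the
    # structuring element disappear, the bulk of the mask survives.
    return dilate(erode(mask, k), k)
```

In practice a library routine such as `scipy.ndimage.binary_opening` would do the same job; the explicit loops above are only meant to show what the operation computes.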
b. LIVER SEGMENTATION

ACTIVE CONTOUR

The active contour system was originally designed for detecting blood vessels in coronary angiogram images for medical applications. The environment where these blood vessel structures are found contains materials such as segmented tissues and fluid debris. These become unwanted background information that hinders the performance of the active contour in tracking the object outline accurately. In addition, angiogram images present various obstructions that reduce the performance of the active contour, such as color, noise, low contrast and poor image quality. Therefore, the active contour system developed here aims at optimizing the performance of object tracking by introducing various image processing techniques. A parametric active contour is simply a set of contour points $[X(s), Y(s)]$ parameterized by $s \in [0, 1]$. Typically, parametric active contours are implemented by finding the contour that minimizes (1):

$E = E_{\mathrm{internal}} + E_{\mathrm{external}}$  (1)

where $E_{\mathrm{internal}}$ is the internal energy of the active contour, given in (2):
$E_{\mathrm{internal}} = \frac{1}{2}\int_{0}^{1}\left\{\alpha\left[\left(\frac{dX(s)}{ds}\right)^{2}+\left(\frac{dY(s)}{ds}\right)^{2}\right]+\beta\left[\left(\frac{d^{2}X(s)}{ds^{2}}\right)^{2}+\left(\frac{d^{2}Y(s)}{ds^{2}}\right)^{2}\right]\right\} ds$  (2)
Here, α and β are two nonnegative weighting parameters expressing respectively the degree
of the resistance to stretching and bending of the contour. The external energy Eexternal is
typically defined such that the contour seeks the edges in the image, (3),
$E_{\mathrm{external}} = -\int_{0}^{1} f(X(s), Y(s))\,ds, \quad \text{where } f(x,y) = |\nabla I(x,y)|^{2}$  (3)
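The edge-attraction term $f(x,y) = |\nabla I(x,y)|^{2}$ from equation (3) can be computed directly from an image. A minimal sketch (the function name is ours; `numpy.gradient` uses central finite differences in the interior):

```python
import numpy as np

def edge_map(image):
    # f(x, y) = |grad I(x, y)|^2: squared gradient magnitude of the
    # image, which is large near edges and zero in flat regions.
    gy, gx = np.gradient(image.astype(float))  # row and column derivatives
    return gx**2 + gy**2
```

The negative sign in equation (3) means minimizing the external energy pulls the contour toward pixels where this map is large, i.e. toward edges.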
Active contours can be characterized as a procedure to obtain deformable models or structures, with constraints and forces in an image, for segmentation. Contour models describe the object boundaries or other features of the image to form a parametric curve or contour. The curvature of the model is determined by various contour algorithms using the external and internal forces applied. An energy functional is always associated with the curve defined in the image. External energy is defined as the combination of forces due to the image, which is specifically used to control the positioning of the contour on the image, while internal energy controls the deformable changes [3]. Constraints for a particular image in contour segmentation depend on the requirements. The optimal contour is obtained at the minimum of the energy functional. Deformation of the contour is described by a collection of points that locate a shape; this contour fits the required image shape by minimizing the energy functional.
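To make the internal energy of equation (2) concrete, a minimal discrete sketch for a closed contour follows (our own illustration, not the authors' implementation): first derivatives are approximated by forward differences and second derivatives by central differences over the sampled contour points.

```python
import numpy as np

def internal_energy(xs, ys, alpha=1.0, beta=1.0):
    # Discrete form of equation (2) on a closed contour:
    #   stretching term  alpha * |dv/ds|^2   (forward differences)
    #   bending term     beta  * |d2v/ds2|^2 (central differences)
    dx = np.roll(xs, -1) - xs
    dy = np.roll(ys, -1) - ys
    d2x = np.roll(xs, -1) - 2 * xs + np.roll(xs, 1)
    d2y = np.roll(ys, -1) - 2 * ys + np.roll(ys, 1)
    stretch = alpha * (dx**2 + dy**2)
    bend = beta * (d2x**2 + d2y**2)
    return 0.5 * np.sum(stretch + bend)
```

A snake solver would move each contour point to reduce this quantity plus the external energy of equation (3); here the function only evaluates the energy so the roles of α (resistance to stretching) and β (resistance to bending) are visible.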
c. TUMOR SEGMENTATION

SUPERPIXEL SEGMENTATION

A superpixel is a group of pixels in proximity that have similar intensity. The Simple Linear Iterative Clustering (SLIC) algorithm [20] is applied due to its fast computational time [21], [22]. The sizes of the original superpixels extracted by SLIC differ, as there may be a small number of pixels near each other with similar pixel values in some regions of the image (most of the time in the tumor region), while in the non-tumor part of the image the superpixels will be large. However, we need superpixels of the same size in order to apply PCA. This problem is solved as follows.

We compute the average size of the superpixels as shown in equation (4):
$M = \frac{1}{N}\sum_{i=1}^{N} n_{i}$  (4)
where N is the number of superpixels, $n_{i}$ is the number of pixels in the i-th superpixel, and M is the average number of pixels per superpixel. Then, the size of each superpixel is made equal to the average by padding pixel values onto the smaller superpixels and removing intensity values from the larger superpixels. Instead of appending random intensity values to the smaller superpixels, we pad by repeating the last pixel value of the superpixel itself. Finally, the superpixel matrix is generated. We compute the average superpixel, determine the covariance of the superpixels, and calculate the eigenvectors and eigenvalues of the covariance matrix. The superpixels are projected onto the principal components that contain most of the variance of the data. Here, the number of principal components equals the number of superpixels. As stated, the eigenvectors or principal components that contain at least 95% of the variance of the superpixels can represent the whole image with confidence. This reduces the dimensionality of the space, as most of the information is contained in the first two or three largest eigenvalues. In our implementation, 95% of the variance of the superpixels was contained in the top two principal components for most of the images. We then calculate the distance of each superpixel with respect to the average superpixel. While computing distances, we should consider the distribution of superpixels in the principal component coordinate system; to incorporate this, we compute the distance along the principal components.
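The equalize-then-PCA steps above can be sketched as follows. This is a hypothetical illustration (function names and the representation of a superpixel as a flat list of intensities are our assumptions), not the paper's code:

```python
import numpy as np

def equalize_superpixels(superpixels):
    # Make every superpixel the same length M (equation (4)): shorter
    # ones are padded by repeating their last pixel value, longer ones
    # are truncated. Returns an (N, M) matrix, one superpixel per row.
    M = int(round(sum(len(s) for s in superpixels) / len(superpixels)))
    rows = []
    for s in superpixels:
        s = list(s)
        if len(s) < M:
            s = s + [s[-1]] * (M - len(s))
        rows.append(s[:M])
    return np.array(rows, dtype=float)

def project_onto_pcs(X, var_kept=0.95):
    # Center on the average superpixel, eigendecompose the covariance
    # matrix, and project onto the leading principal components that
    # together explain `var_kept` of the total variance.
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    order = np.argsort(vals)[::-1]          # sort descending
    vals, vecs = vals[order], vecs[:, order]
    k = int(np.searchsorted(np.cumsum(vals) / vals.sum(), var_kept)) + 1
    return Xc @ vecs[:, :k]
```

Distances between rows of the projected matrix then serve as the per-superpixel distances to the average superpixel described above.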
DBSCAN CLUSTERING
Cluster analysis, or simply clustering, is an unsupervised learning method that divides data points into a number of specific batches or groups, such that the data points in the same group have similar properties and data points in different groups have dissimilar properties in some sense. It comprises many different methods based on different notions of similarity. Fundamentally, all clustering methods use the same approach: first we calculate similarities, and then we use them to cluster the data points into groups or batches. Here we focus on the Density-Based Spatial Clustering of Applications with Noise (DBSCAN) method.
[SYLWAN., 164(4)]. ISI Indexed, Apr 2020
Clusters are dense regions in the data space, separated by regions of lower point density. The DBSCAN algorithm is based on this intuitive notion of “clusters” and “noise”. The key idea is that, for each point of a cluster, the neighborhood of a given radius has to contain at least a minimum number of points. Partitioning methods (K-means, PAM clustering) and hierarchical clustering work for finding spherical-shaped or convex clusters; in other words, they are suitable only for compact and well-separated clusters. Moreover, they are also severely affected by the presence of noise and outliers in the data.
The DBSCAN algorithm requires two parameters, depicted in fig (1):
eps: defines the neighborhood around a data point, i.e. if the distance between two points is lower than or equal to eps, they are considered neighbors. If the eps value is chosen too small, a large part of the data will be considered outliers; if it is chosen very large, the clusters will merge and the majority of the data points will fall into the same cluster. One way to find the eps value is based on the k-distance graph.
MinPts: the minimum number of neighbors (data points) within the eps radius. The larger the dataset, the larger the value of MinPts that should be chosen. As a general rule, the minimum MinPts can be derived from the number of dimensions D in the dataset as MinPts >= D + 1, and MinPts should be chosen to be at least 3.
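The k-distance heuristic mentioned above can be sketched in a few lines; this is an illustrative brute-force version (Euclidean distances assumed, function name ours). One plots the returned curve and reads eps off the "elbow" where the distances jump:

```python
import numpy as np

def k_distance_graph(points, k):
    """Sorted distance of every point to its k-th nearest neighbor.
    The elbow of this curve is a common choice for DBSCAN's eps."""
    pts = np.asarray(points, dtype=float)
    # Pairwise Euclidean distances (brute force, fine for small n).
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    # Column 0 after sorting is the point itself, so index k gives
    # the k-th nearest *other* point.
    kth = np.sort(dist, axis=1)[:, k]
    return np.sort(kth)
```

For three points spaced one unit apart plus a distant outlier, the curve is flat at 1 and then jumps, suggesting eps near 1.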
Fig 1: DBSCAN parameters
1. Find all the neighbor points within eps and identify the core points, i.e. those with more than MinPts neighbors.
2. For each core point, if it is not already assigned to a cluster, create a new cluster.
3. Recursively find all its density-connected points and assign them to the same cluster as the core point.
Points a and b are said to be density connected if there exists a point c which has a sufficient number of points in its neighborhood and both a and b are within the eps distance of it. This is a chaining process: if b is a neighbor of c, c is a neighbor of d, and d is a neighbor of e, which in turn is a neighbor of a, then b is density connected to a.
4. Iterate through the remaining unvisited points in the dataset. Points that do not belong to any cluster are noise.
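The four steps above can be sketched as a minimal, illustrative implementation (brute-force O(n²) distances, Euclidean metric assumed; a production DBSCAN would use a spatial index):

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN following steps 1-4 above.

    Returns one label per point: 0, 1, ... for clusters, -1 for noise.
    Neighborhoods are Euclidean and include the point itself.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    # Step 1: neighbors within eps; core points have >= min_pts neighbors.
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    core = [len(nb) >= min_pts for nb in neighbors]
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if not core[i] or labels[i] != -1:
            continue
        # Step 2: an unassigned core point seeds a new cluster.
        labels[i] = cluster
        stack = list(neighbors[i])
        # Step 3: chain through density-connected points; only core
        # points continue the expansion, border points just join.
        while stack:
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if core[j]:
                    stack.extend(neighbors[j])
        cluster += 1
    # Step 4: points never reached keep the label -1 (noise).
    return labels
```

With two tight groups of four points and one far-away point, eps = 1 and MinPts = 3 yield two clusters and one noise point.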
Fig 2: Block diagram of the proposed system
The CT scan image is pre-processed through a morphological operation where the image
gets sharpened and converted into a binary image. At the end of preprocessing, a mask
image is extracted. Then active contour segmentation is applied to extract the liver region
from other parts of the human body. Through superpixel segmentation, the liver tumor is
determined and then DBSCAN clustering is used to cluster the output image. Thus the
process is used to segment the liver tumor from the healthy tissues of the liver which is
illustrated in fig 2.
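As a rough illustration of the pre-processing stage, the sketch below sharpens a CT slice morphologically, binarizes it, and keeps the largest connected component as the mask. It assumes `scipy`; since the paper does not name the exact morphological operator or threshold, top-hat sharpening and a mean threshold are stand-ins, and the function name is ours:

```python
import numpy as np
from scipy import ndimage

def preprocess(ct_slice, struct_size=5):
    """Morphological sharpening, binarization, and mask creation,
    mirroring the pre-processing stage of the pipeline."""
    img = np.asarray(ct_slice, dtype=float)
    # Sharpen: add bright details (white top-hat) and subtract dark
    # details (black top-hat).
    sharpened = (img
                 + ndimage.white_tophat(img, size=struct_size)
                 - ndimage.black_tophat(img, size=struct_size))
    # Binarize at the mean intensity (a simple stand-in for Otsu).
    binary = sharpened > sharpened.mean()
    # Mask: keep only the largest connected foreground component
    # and fill its holes.
    labeled, n = ndimage.label(binary)
    if n == 0:
        return sharpened, binary, binary
    sizes = ndimage.sum(binary, labeled, range(1, n + 1))
    mask = labeled == (np.argmax(sizes) + 1)
    mask = ndimage.binary_fill_holes(mask)
    return sharpened, binary, mask
```

On a synthetic slice with one large bright region and one small spurious blob, the mask retains only the large region, which is the role the extracted mask plays before liver segmentation.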
IV. RESULTS AND DISCUSSION
The result of the liver segmentation through superpixel segmentation is evaluated and the corresponding output is depicted in fig 3. The input image undergoes preprocessing, liver segmentation, and tumor segmentation; finally, a ground truth image is obtained for comparison.
Fig 3: Segmentation of liver tumor using superpixel segmentation.
(a) Input image (b) Sharpened image (c) Binary image (d) Superpixel segmentation (e) Ground truth image

TABLE I
PREDICTION OF LIVER TUMOR

Image   TP      TN      FP     FN
1       1224    63521   716    75
2       924     63968   34     610
3       4614    60117   234    571
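The metrics reported in Table II follow directly from the confusion counts in Table I via the standard pixel-wise definitions. A minimal sketch (the helper name is illustrative):

```python
def segmentation_metrics(tp, tn, fp, fn):
    """Pixel-wise segmentation metrics computed from confusion counts."""
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)                     # recall
    specificity = tn / (tn + fp)
    precision   = tp / (tp + fp)
    f_measure   = 2 * precision * sensitivity / (precision + sensitivity)
    mcc = (tp * tn - fp * fn) / (
        ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5)
    dice    = 2 * tp / (2 * tp + fp + fn)            # equals the F-measure
    jaccard = tp / (tp + fp + fn)
    return dict(accuracy=accuracy, sensitivity=sensitivity,
                specificity=specificity, precision=precision,
                f_measure=f_measure, mcc=mcc, dice=dice, jaccard=jaccard)
```

For example, `segmentation_metrics(1224, 63521, 716, 75)` (row 1 of Table I) reproduces row 1 of Table II to four decimal places.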
TABLE II
EVALUATION OF ACCURACY, SENSITIVITY, SPECIFICITY, PRECISION, F-MEASURE, MCC, DICE AND JACCARD FOR THE LIVER SEGMENTATION
The liver segmentation output is predicted, and the true positive, true negative, false positive, and false negative values calculated for each input image are illustrated in table 1. The accuracy, sensitivity, specificity, precision, F-measure, MCC, Dice, and Jaccard values for the liver segmentation are then computed and compared with the existing system. As represented in table 2, our proposed method provides higher accuracy than existing methods.
S.NO   Accuracy   Sensitivity   Specificity   Precision   F-measure   MCC      Dice     Jaccard
1      0.9879     0.9423        0.9889        0.6309      0.7558      0.7657   0.7558   0.6074
2      0.9902     0.6023        0.9995        0.9645      0.7416      0.7581   0.7416   0.5893
3      0.9877     0.8899        0.9961        0.9517      0.9198      0.9137   0.9198   0.8514

V. CONCLUSION
In this paper, we developed superpixel-based liver tumor segmentation from abdominal CT volumes, combining active contour segmentation with a superpixel segmentation algorithm that uses DBSCAN clustering for tumor segmentation. The liver is first separated from the rest of the body through contour segmentation. Our proposed superpixel segmentation first produces initial superpixels of similar colors by performing the DBSCAN clustering algorithm, and then merges the small initial superpixels with their nearest neighboring superpixels by considering their color and spatial information. Our algorithm achieves state-of-the-art performance at a considerably smaller computational cost, and significantly outperforms algorithms that require higher computational costs, even for images containing complex objects or complex texture regions. In future work, we will obtain better minimization of superpixels by developing a new DBSCAN algorithm that has the global optimum property.
VI. SCOPE FOR FUTURE ENHANCEMENT
In future work, we will investigate Hidden Markov Random Fields with Expectation Maximization for larger-scale network estimation, and consider efficient solutions for increased efficiency and reduced iterations in a network with high mobility. It is also a promising direction for future work to match the accuracy and object delineation time of the existing system for multi-shape structures.