The proposed technique segments the lung's air and soft tissues and then estimates the lung's air volume and its variation throughout the image sequence. The methodology
consists of three steps. In the first step, a CT image sequence is given as input; the histograms
of all images in the sequence are computed and overlaid to form the sequence's
combined histogram, from which the lung air region is identified. In the second step, the lung air region is segmented using an
optimum-threshold-based method. In the third step, the voxels in the rendered volume are counted and the
percentage of air consumption is estimated. The results indicate that the
proposed method estimates the lung's air volume and its variation in a respiratory image sequence very well.
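The three steps above can be sketched in NumPy. This is a hypothetical illustration, not the authors' code: the HU peak locations, bin width, and valley-finding rule for the "optimum threshold" are all assumptions.

```python
import numpy as np

def estimate_air_fraction(sequence, air_hu=-1000, tissue_hu=0):
    """Sketch of the three steps on a sequence of CT frames (2-D or 3-D arrays)."""
    # Step 1: overlay the per-image histograms into one combined histogram.
    bins = np.arange(-1100, 400, 10)
    combined = np.zeros(len(bins) - 1)
    for frame in sequence:
        hist, _ = np.histogram(frame, bins=bins)
        combined += hist
    # Step 2: pick the threshold at the valley between the air peak
    # (near -1000 HU) and the soft-tissue peak (near 0 HU).
    lo = np.searchsorted(bins, air_hu)
    hi = np.searchsorted(bins, tissue_hu)
    threshold = bins[lo + np.argmin(combined[lo:hi])]
    # Step 3: count air voxels per frame and report the air fraction.
    return [float(np.mean(frame < threshold)) for frame in sequence]
```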
Automatic detection of pulmonary nodules in lung CT images (Wookjin Choi)
The document discusses lung cancer detection using CT scans and pulmonary nodule detection systems. It describes how CT scans are used to detect lung nodules early and increase survival rates. It then discusses the challenges of evaluating large CT data sets and the use of pulmonary nodule detection CAD systems to assist radiologists. The document goes on to describe a proposed CAD system that includes lung segmentation, nodule candidate detection using multi-thresholding and feature extraction, and a genetic programming based classifier to analyze features and detect nodules. Experimental results on a publicly available lung image database show the system achieved over 80% accuracy on test data for nodule detection.
Image processing in lung cancer screening and treatment (Wookjin Choi)
The document discusses image processing techniques for lung cancer screening and treatment. It covers topics like lung segmentation, nodule detection, computer-aided diagnosis, image-guided radiotherapy, and quantitative assessment of tumor response. Lung segmentation is used to isolate the lungs from other organs in CT images. Nodule detection algorithms then aim to find potential cancerous nodules. Computer-aided diagnosis systems analyze extracted features of nodules to determine if they are malignant or benign. Image-guided radiotherapy utilizes 4D CT and gating to account for tumor motion during treatment. Quantitative metrics like standardized uptake value are used to assess tumor response in PET imaging.
This document reviews the advances in 3D and 4D ultrasound technology for fetal cardiac scanning. Recent technologies like spatiotemporal image correlation and multiplanar reconstruction allow clinicians to generate real-time 3D/4D scans of the fetal heart. This provides significant benefits for evaluating normal and abnormal fetal heart anatomy and function. It facilitates interdisciplinary consultation, parental counseling, and professional training. While more research is still needed, 3D/4D ultrasound provides additional views of the fetal heart that can improve the accuracy of prenatal cardiac screening.
Robust breathing signal extraction from cone beam CT projections based on ada... (Wookjin Choi)
This document summarizes a research paper that proposes a novel method for extracting breathing signals from cone beam CT projections without using external markers. The method uses an adaptive filtering technique to enhance weak oscillating structures in the Amsterdam Shroud image generated from the projections. A two-step optimization approach is then used to reveal the large-scale regularity of the breathing signals. Evaluation on 5 patient data sets found the new algorithm outperformed existing methods by extracting less noisy signals with errors of only -0.07±1.58 breaths per minute compared to reference signals. While results are promising, the study had a small data set and image quality remains limited.
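As a toy illustration of the underlying idea (the paper's actual method uses adaptive filtering and a two-step optimization, neither of which is reproduced here), a breathing trace can be read off a shroud-like image by locating the strongest vertical edge in each projection column; the image below is entirely synthetic.

```python
import numpy as np

def breathing_signal(shroud):
    """Per projection (column), return the row index of the strongest
    vertical edge in an Amsterdam-Shroud-like image."""
    grad = np.abs(np.diff(shroud, axis=0))  # vertical gradient per column
    return np.argmax(grad, axis=0)
```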
Predicting lung cancer is a challenging problem because of the structure of cancer cells, most of which overlap one another. Image processing techniques are widely used to predict lung cancer and to support early detection and treatment. Since various features are extracted from the images to predict lung cancer, pattern recognition based approaches are useful. Here, a comprehensive review and summary of previous research on lung cancer prediction using image processing techniques is presented.
My co-authors and I have created an R package that allows the user to perform a fully quantitative analysis of DCE-MRI (dynamic contrast-enhanced magnetic resonance imaging) data. With applications in oncology in mind, users can interrogate the perfusion characteristics of tissue in order to compare between treatment groups and pre-/post-treatment.
Image segmentation is still an active research area in computer vision, and hundreds of
image segmentation techniques have been proposed. Each proposed technique has its own
usability and accuracy. In this paper we present a review of some of the best existing
lung nodule detection and segmentation techniques. Finally, we conclude by highlighting one
of the best methods, which offers high accuracy and can be used to detect very small lung nodules.
Airway tree segmentation in serial block face cryomicrotome images of rat lungs (ieeepondy)
Computer-aided Detection of Pulmonary Nodules using Genetic Programming (Wookjin Choi)
This document describes a study that used genetic programming to develop an accurate classifier for detecting pulmonary nodules on CT scans. The proposed method involved segmenting the lungs, detecting nodule candidates, extracting features, and using genetic programming to evolve a combination of features and functions to classify nodules versus non-nodules. When tested on 153 nodules across 32 CT scans, the genetic programming classifier achieved a sensitivity of 92% and specificity of 86%.
IRJET - Review on Lung Cancer Detection Techniques (IRJET Journal)
This document reviews techniques for detecting lung cancer through computer-aided diagnosis systems. It discusses how CAD systems use medical images to find abnormal nodules that could indicate cancer. The techniques discussed include nodule detection, image segmentation, and nodule classification. Current models first perform image segmentation, which segments both normal and abnormal nodules, potentially resulting in false cancer classifications. Improved methods classify nodules as benign or malignant before segmentation to avoid erroneous segmentations leading to missed or incorrect findings. The document surveys various preprocessing, segmentation, and classification methods used in lung cancer detection CAD systems.
Computer aided detection of pulmonary nodules in CT scans (Wookjin Choi)
The document discusses computer aided detection of pulmonary nodules in CT scans. It introduces lung cancer as a major health problem and describes how detecting nodules early can improve survival rates. It then provides an overview of pulmonary nodule detection CAD systems, describing their general structure and evaluating various approaches in the literature. Key contributions are genetic programming and shape-based classifiers and a hierarchical block analysis method that achieved high performance on a publicly available lung image database.
Computer aided detection of pulmonary nodules using genetic programming (Wookjin Choi)
This document describes a method for detecting pulmonary nodules in CT scans using genetic programming. It first segments the lung regions from CT images and extracts nodule candidates. Features are then extracted from the candidates. Genetic programming is used to classify candidates as nodules or non-nodules by optimizing combinations of features. The method was tested on a publicly available lung image database, achieving a true positive rate of over 90% and low false positive rate.
Early Detection of Lung Cancer Using Neural Network Techniques (IJERA Editor)
Effective identification of lung cancer at an initial stage is an important and crucial aspect of image processing. Several data mining methods have been used to detect lung cancer at an early stage. In this paper, an approach is presented that diagnoses lung cancer at an initial stage using CT scan images in DICOM (DCM) format. One of the key challenges is removing white Gaussian noise from the CT scan image, which is done using a non-local means filter, and Otsu's thresholding is used to segment the lung. Textural and structural features are extracted from the processed image to form a feature vector. Three classifiers, namely SVM, ANN, and k-NN, are applied to detect lung cancer and determine the severity of the disease (stage I or stage II), and they are compared with respect to quality attributes such as accuracy, sensitivity (recall), precision, and specificity. The results show that SVM achieves the highest accuracy of 95.12%, ANN achieves 92.68%, and k-NN shows the lowest accuracy of 85.37% on the given data set. The SVM algorithm, with its 95.12% accuracy, helps patients take remedial action on time and reduces mortality from this deadly disease.
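Otsu's thresholding, used above to segment the lung, picks the gray level that maximizes the between-class variance of the histogram. A minimal NumPy sketch of that criterion (not the paper's implementation):

```python
import numpy as np

def otsu_threshold(image, nbins=256):
    """Return the gray level maximizing between-class variance."""
    hist, edges = np.histogram(image, bins=nbins)
    p = hist / hist.sum()                      # probability per bin
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                          # weight of the lower class
    w1 = 1.0 - w0                              # weight of the upper class
    m0 = np.cumsum(p * centers) / np.where(w0 > 0, w0, 1)
    m_total = (p * centers).sum()
    m1 = (m_total - np.cumsum(p * centers)) / np.where(w1 > 0, w1, 1)
    between = w0 * w1 * (m0 - m1) ** 2         # between-class variance
    return centers[np.argmax(between)]
```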
Optimal fuzzy rule based pulmonary nodule detection (Wookjin Choi)
The document describes a lung cancer detection system that uses CT scans. It discusses (1) segmenting the lungs from CT images using adaptive thresholding and connected component analysis, (2) detecting nodule candidate regions using multi-thresholding and rule-based pruning, and (3) optimizing the rule-based pruning using a genetic algorithm trained fuzzy inference system to reduce false positives while maintaining high sensitivity. Experimental results on a publicly available lung image database show the optimized fuzzy system achieved better performance than a conventional rule-based approach.
IJRET : International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academicians, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
A computed tomography (CT) scan uses X-rays and digital geometry processing to produce cross-sectional images of the inside of the body. During a CT scan, an X-ray tube rotates around the body and takes pictures from different angles, which are processed by a computer to generate 2D and 3D images of tissues and organs. CT scans can identify problems like traumatic injuries, tumors, and infections by creating detailed images of internal structures like the head, chest, abdomen, arms, and legs. Contrast material may sometimes be used to better visualize certain areas.
Ionizing radiation makes invasive cardiology procedures such as coronary angiography, percutaneous coronary intervention (PCI), and electrophysiologic diagnostics and therapeutics possible.
Radiation risks can be thought of as deterministic (effects appear after exceeding a certain threshold, e.g., skin burns) or stochastic (the risk of an outcome is proportional to the dose received, e.g., malignancy or teratogenicity).
Reducing radiation exposure in the cardiac catheterization laboratory is important, especially as procedures become more complex.
Compressed sensing dynamic cardiac cine MRI using learned spatiotemporal dict... (ieeepondy)
This document discusses a technique that uses compressed sensing and a learned 3D spatiotemporal dictionary to improve the spatiotemporal resolution of dynamic cardiac cine MRI. The technique divides image sequences into overlapping patches and uses the dictionary to provide sparse representations of the patches. Experimental results on in vivo cardiac data show the method can accelerate imaging by up to 8 times while outperforming existing compressed sensing methods at high accelerations, allowing for higher spatiotemporal resolution dynamic imaging.
Identification of Robust Normal Lung CT Texture Features (Wookjin Choi)
Normal lung CT texture features have been used for the prediction of radiation-induced lung disease (radiation pneumonitis and radiation fibrosis). For these features to be clinically useful, they need to be relatively invariant (robust) to tumor size and not correlated with normal lung volume.
http://scitation.aip.org/content/aapm/journal/medphys/43/6/10.1118/1.4955803
The use of computed tomography (CT) scan presentation (Johnson Mwove)
The document discusses the use of computed tomography (CT) scans to assess body composition. It notes that CT scans can provide highly accurate and detailed images of tissues like fat, muscle, and organs. The advantages of CT scans are its high resolution images, ability to quantify tissues, and clinical usefulness when images are already being taken. However, concerns include the costs, specialized skills and equipment required, and risks of radiation exposure. The document concludes that while radiation risks exist, CT scans remain very useful for diagnostic purposes due to their image quality and information provided.
The document provides an overview of the history and development of computed tomography (CT) scanning. It discusses how CT was pioneered by Godfrey Hounsfield and Allan Cormack in the 1970s, for which they received the 1979 Nobel Prize. It describes the early prototype CT scanners and technological advances that increased scanning speed, such as the introduction of spiral/helical scanning. The document also outlines the basic principles of CT imaging and image reconstruction methods.
Logic coverage criteria are a central aspect of testing programs and specifications. A
Boolean expression with n variables requires 2^n distinct test cases for exhaustive testing.
This is expensive even when n is modestly large. A possible solution is to select a small subset of all
possible test cases that can effectively detect most of the common faults. Test case prioritization is one of
the key techniques for making testing cost effective. In the present study, a performance index of the test suite is
calculated for two Boolean specification testing techniques, MUMCUT and Minimal-MUMCUT.
The performance index helps measure efficiency and determine when testing can be stopped given
limited resources. This paper evaluates the testability of generated single faults according to the number of
test cases used to detect them. Test cases are generated from logic expressions in irredundant disjunctive normal form
(IDNF) derived from specifications or source code. The efficiency of the prioritization techniques has been
validated by an empirical study on benchmark expressions using the Performance Index (PI) metric.
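To see why 2^n explodes, consider enumerating every assignment for a small expression. The faulty and reference expressions below are made-up examples, not taken from the study:

```python
from itertools import product

def exhaustive_test_cases(n):
    """All 2**n truth assignments for n Boolean variables."""
    return list(product([False, True], repeat=n))

def spec(a, b, c):
    return a and b and c          # the intended specification

def faulty(a, b, c):
    return a and (b or c)         # a seeded single fault

# Exhaustive testing finds every assignment on which the fault shows.
failures = [t for t in exhaustive_test_cases(3) if faulty(*t) != spec(*t)]
```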
Grid computing, or network computing, was developed to make computing power available in much the same way
that electric power is available from the grid: we simply plug in, and whoever needs power may use it. In
grid computing, if a system needs more power than it has available, it can share the computation with other
machines connected in the grid. In this way we can use the power of a supercomputer without a huge cost,
and CPU cycles that were previously wasted can be utilized. To perform grid computation on
computers joined through the Internet, software that supports grid computation must be installed on
each computer inside the VO. The software handles information queries, storage management, processing
scheduling, authentication, and data encryption to ensure information security.
Quality estimation of machine translation outputs through stemming (ijcsa)
Machine translation is a challenging problem for Indian languages. New machine
translators are being developed every day, but high-quality automatic translation is still a very distant dream,
and a correctly translated Hindi sentence is rarely produced. In this paper we focus on
the English-Hindi language pair, and in order to identify the best MT output we present a ranking system
that employs machine learning techniques and morphological features. The ranking requires no human
intervention. We have also validated our results by comparing them with human rankings.
Algeria is engaging with determination on the path of renewable energies, to bring global and long-lasting
solutions to environmental challenges and to the problem of conserving fossil energy resources.
Our study focuses on the wind energy sector, which seems one of the most promising, with a
very high global growth rate. The object of this article is to estimate the wind resource of the region of Oran
(Es Senia), an important stage in the planning and realization of any wind plant. In our work, we began with the
processing of hourly wind data collected over a period of more than 50 years, evaluating
the wind potential by determining its frequency distribution. We then calculated the total electrical energy
produced at various heights for three types of wind turbine. The analysis of the results shows that
high-power wind turbines produce significant quantities of energy when the height
of their hubs is increased to take advantage of stronger wind speeds.
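The energy calculation described can be sketched as follows. The power-law shear exponent, the frequency table, and the power curve are illustrative assumptions, not the paper's data:

```python
def speed_at_height(v_ref, h_ref, h, alpha=0.14):
    """Power-law wind shear: v(h) = v_ref * (h / h_ref)**alpha."""
    return v_ref * (h / h_ref) ** alpha

def annual_energy_kwh(speed_freq, power_curve_kw, hours=8760):
    """Weight a turbine power curve (kW per speed bin) by the observed
    wind-speed frequency distribution to get annual energy."""
    return hours * sum(freq * power_curve_kw.get(v, 0.0)
                       for v, freq in speed_freq.items())
```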
Movement Assisted Component Based Scalable Framework for Distributed Wireless... (ijcsa)
Intelligent networks are becoming more pervasive, and a new generation of applications is being
deployed over peer-to-peer networks. Intelligent networks are attractive because of their role in
improving scalability and enhancing performance by enabling direct, real-time communication
among the participating network stations. A suitable solution for resource management in distributed wireless systems is required: one that supports fault-tolerant operation, locates requested resources (over the shortest path), minimizes the overhead generated during network management, balances the load among the participating stations, and offers a high probability of lookup success. This article
presents a Movement Assisted Component Based Scalable Framework (MAC-SF) for distributed
networks. It manages distributed wireless resources and applications; monitors the behavior of
distributed wireless applications transparently and obtains accurate resource projections; manages the
connections between the participating network stations; and distributes the active objects in response to
user requests and changing processing and network conditions. The system is also compared with some
existing systems. Results show that MAC-SF performs better and can be used in any wireless network.
Effect of mesh grid structure in reducing hot carrier effect of NMOS device s... (ijcsa)
This paper presents the critical effect of the mesh grid that should be considered during process and device
simulation using modern TCAD tools in order to develop and optimize accurate electrical
characteristics. Here, the computational modelling of the NMOS device structure is
performed in Athena and Atlas. The effect of the mesh grid on the net doping profile and on the n++ and LDD sheet
resistance, which could be linked to the unwanted hot carrier effect, was investigated by varying the device grid
resolution in both directions. It is found that the y-grid has a more profound effect on the doping concentration,
the junction depth formation, and the value of the threshold voltage during simulation. An optimized mesh grid is
obtained and tested for more accurate and faster simulation. Process parameters (such as oxide thickness
and sheet resistance) as well as device parameters (such as the linear gain "beta" and the SPICE level 3 mobility
roll-off parameter "theta") are extracted and investigated for further applications.
Hybrid HMM/DTW based speech recognition with kernel adaptive filtering method (ijcsa)
This document summarizes a research paper that proposes a new approach for speech recognition using kernel adaptive filtering for speech enhancement and a hybrid HMM/DTW method for recognition. It first discusses adaptive filters and the LMS algorithm, then introduces kernel adaptive filters using the KLMS algorithm to transform input data into a high-dimensional feature space. Finally, it describes using HMM to train speech features and DTW for classification and recognition. The experimental results showed an improvement in recognition rates compared to traditional methods.
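The LMS algorithm that the summary builds on adapts a weight vector by stochastic gradient descent on the instantaneous squared error. A plain (non-kernel) NumPy sketch, not the paper's code:

```python
import numpy as np

def lms_filter(x, d, n_taps=4, mu=0.05):
    """LMS adaptive filter: adapt weights w so that w . u[k] tracks d[k],
    where u[k] holds the n_taps most recent input samples."""
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    for k in range(n_taps - 1, len(x)):
        u = x[k - n_taps + 1:k + 1][::-1]  # newest sample first
        y[k] = w @ u                       # filter output
        e = d[k] - y[k]                    # instantaneous error
        w += 2 * mu * e * u                # gradient-descent update
    return y, w
```

The kernel variant (KLMS) replaces the inner products with kernel evaluations so the filter can model nonlinear relationships.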
Real Time Special Effects Generation and Noise Filtration of Audio Signal Usi... (ijcsa)
Digital signal processing is being used increasingly for audio processing applications. Digital audio effects
refer to all the algorithms used for enhancing sound at any step of a music production processing chain.
Real-time audio effects generation is a highly challenging task in the field of signal
processing. Nowadays, almost every high-end multimedia audio device does digital signal processing in
one form or another. For years musicians have used different techniques to give their music a unique
sound. Earlier, these techniques were implemented only after much work and experimentation; with
the emergence of digital signal processing, the task is simplified to a great extent. In this article, the
generation of special effects like echo, flanging, reverberation, stereo, karaoke, and noise filtering is
implemented in MATLAB, and an attractive GUI has been designed for the same.
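An echo of the kind listed above is just a delayed, attenuated copy mixed back into the signal. The article works in MATLAB; the same idea in NumPy, with illustrative parameter values:

```python
import numpy as np

def add_echo(signal, sample_rate, delay_s=0.25, decay=0.5):
    """Echo effect: y[n] = x[n] + decay * x[n - D], D = delay in samples."""
    d = int(delay_s * sample_rate)
    out = np.copy(np.pad(signal, (0, d)))  # room for the echo tail
    out[d:] += decay * signal              # mix in the delayed copy
    return out
```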
Computer-aided Detection of Pulmonary Nodules using Genetic ProgrammingWookjin Choi
This document describes a study that used genetic programming to develop an accurate classifier for detecting pulmonary nodules on CT scans. The proposed method involved segmenting the lungs, detecting nodule candidates, extracting features, and using genetic programming to evolve a combination of features and functions to classify nodules versus non-nodules. When tested on 153 nodules across 32 CT scans, the genetic programming classifier achieved a sensitivity of 92% and specificity of 86%.
IRJET - Review on Lung Cancer Detection TechniquesIRJET Journal
This document reviews techniques for detecting lung cancer through computer-aided diagnosis systems. It discusses how CAD systems use medical images to find abnormal nodules that could indicate cancer. The techniques discussed include nodule detection, image segmentation, and nodule classification. Current models first perform image segmentation, which segments both normal and abnormal nodules, potentially resulting in false cancer classifications. Improved methods classify nodules as benign or malignant before segmentation to avoid erroneous segmentations leading to missed or incorrect findings. The document surveys various preprocessing, segmentation, and classification methods used in lung cancer detection CAD systems.
computer aided detection of pulmonary nodules in ct scansWookjin Choi
The document discusses computer aided detection of pulmonary nodules in CT scans. It introduces lung cancer as a major health problem and describes how detecting nodules early can improve survival rates. It then provides an overview of pulmonary nodule detection CAD systems, describing their general structure and evaluating various approaches in the literature. Key contributions are genetic programming and shape-based classifiers and a hierarchical block analysis method that achieved high performance on a publicly available lung image database.
Computer aided detection of pulmonary nodules using genetic programmingWookjin Choi
This document describes a method for detecting pulmonary nodules in CT scans using genetic programming. It first segments the lung regions from CT images and extracts nodule candidates. Features are then extracted from the candidates. Genetic programming is used to classify candidates as nodules or non-nodules by optimizing combinations of features. The method was tested on a publicly available lung image database, achieving a true positive rate of over 90% and low false positive rate.
Early Detection of Lung Cancer Using Neural Network TechniquesIJERA Editor
Effective identification of lung cancer at an initial stage is an important and crucial aspect of image processing. Several data mining methods have been used to detect lung cancer at early stage. In this paper, an approach has been presented which will diagnose lung cancer at an initial stage using CT scan images which are in Dicom (DCM) format. One of the key challenges is to remove white Gaussian noise from the CT scan image, which is done using non local mean filter and to segment the lung Otsu’s thresholding is used. The textural and structural features are extracted from the processed image to form feature vector. In this paper, three classifiers namely SVM, ANN, and k-NN are applied for the detection of lung cancer to find the severity of disease (stage I or stage II) and comparison is made with ANN, and k-NN classifier with respect to different quality attributes such as accuracy, sensitivity(recall), precision and specificity. It has been found from results that SVM achieves higher accuracy of 95.12% while ANN achieves 92.68% accuracy on the given data set and k-NN shows least accuracy of 85.37%. SVM algorithm which achieves 95.12% accuracy helps patients to take remedial action on time and reduces mortality rate from this deadly disease.
Optimal fuzzy rule based pulmonary nodule detectionWookjin Choi
The document describes a lung cancer detection system that uses CT scans. It discusses (1) segmenting the lungs from CT images using adaptive thresholding and connected component analysis, (2) detecting nodule candidate regions using multi-thresholding and rule-based pruning, and (3) optimizing the rule-based pruning using a genetic algorithm trained fuzzy inference system to reduce false positives while maintaining high sensitivity. Experimental results on a publicly available lung image database show the optimized fuzzy system achieved better performance than a conventional rule-based approach.
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
A computed tomography (CT) scan uses X-rays and digital geometry processing to produce cross-sectional images of the inside of the body. During a CT scan, an X-ray tube rotates around the body and takes pictures from different angles, which are processed by a computer to generate 2D and 3D images of tissues and organs. CT scans can identify problems like traumatic injuries, tumors, and infections by creating detailed images of internal structures like the head, chest, abdomen, arms, and legs. Contrast material may sometimes be used to better visualize certain areas.
Ionizing radiation makes invasive cardiology procedures such as coronary angiography, percutaneous coronary intervention (PCI), and electrophysiologic diagnostics and therapeutics possible.
Radiation risks can be thought of as deterministic (effects after exceeding a certain threshold, e.g., skin burns) or stochastic (the risk of an outcome is proportional to the dose received, e.g., malignancy or teratogenicity).
Reducing the radiation exposure in the cardiac catheterization laboratory is important, especially as procedures are becoming more complex.
Compressed sensing dynamic cardiac cine mri using learned spatiotemporal dict...ieeepondy
This document discusses a technique that uses compressed sensing and a learned 3D spatiotemporal dictionary to improve the spatiotemporal resolution of dynamic cardiac cine MRI. The technique divides image sequences into overlapping patches and uses the dictionary to provide sparse representations of the patches. Experimental results on in vivo cardiac data show the method can accelerate imaging by up to 8 times while outperforming existing compressed sensing methods at high accelerations, allowing for higher spatiotemporal resolution dynamic imaging.
Identification of Robust Normal Lung CT Texture FeaturesWookjin Choi
Normal lung CT texture features have been used for the prediction of radiation-induced lung disease (radiation pneumonitis and radiation fibrosis). For these features to be clinically useful, they need to be relatively invariant (robust) to tumor size and not correlated with normal lung volume.
http://scitation.aip.org/content/aapm/journal/medphys/43/6/10.1118/1.4955803
The use of computed tomography (CT) scan presentationJohnson Mwove
The document discusses the use of computed tomography (CT) scans to assess body composition. It notes that CT scans can provide highly accurate and detailed images of tissues like fat, muscle, and organs. The advantages of CT scans are its high resolution images, ability to quantify tissues, and clinical usefulness when images are already being taken. However, concerns include the costs, specialized skills and equipment required, and risks of radiation exposure. The document concludes that while radiation risks exist, CT scans remain very useful for diagnostic purposes due to their image quality and information provided.
The document provides an overview of the history and development of computed tomography (CT) scanning. It discusses how CT was pioneered by Godfrey Hounsfield and Allan Cormack in the 1970s, for which they received the 1979 Nobel Prize. It describes the early prototype CT scanners and technological advances that increased scanning speed, such as the introduction of spiral/helical scanning. The document also outlines the basic principles of CT imaging and image reconstruction methods.
Logic coverage criteria are a central aspect of testing programs and specifications in software testing. A Boolean expression with n variables requires 2^n distinct test cases for exhaustive testing, which is expensive even when n is modestly large. A possible solution is to select a small subset of all possible test cases that can effectively detect most common faults. Test case prioritization is one of the key techniques for making testing cost-effective. In the present study, a performance index of the test suite is calculated for two Boolean specification testing techniques, MUMCUT and Minimal-MUMCUT. The performance index helps measure efficiency and determine when testing can be stopped in the case of limited resources. This paper evaluates the testability of generated single faults according to the number of test cases used to detect them. Test cases are generated from logic expressions in irredundant disjunctive normal form (IDNF) derived from specifications or source code. The efficiency of the prioritization techniques has been validated by an empirical study on benchmark expressions using the Performance Index (PI) metric.
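The 2^n blow-up behind this abstract is easy to see concretely. The sketch below enumerates every truth assignment for an invented three-variable specification f (the expression is an illustrative example, not one of the paper's benchmarks):

```python
from itertools import product

def exhaustive_test_cases(n):
    """All 2**n truth assignments for n Boolean variables."""
    return list(product([False, True], repeat=n))

# an invented 3-variable specification: f = (a and b) or c
def f(a, b, c):
    return (a and b) or c

cases = exhaustive_test_cases(3)              # 2**3 = 8 assignments
true_count = sum(f(*case) for case in cases)  # assignments where f holds
```

Already at n = 20 this is over a million cases, which is why the paper's prioritized subsets matter.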
Grid computing, or network computing, was developed to make computing power available in much the same way that electric power is available from the grid: we just plug in, and whoever needs power may use it. In grid computing, if a system needs more power than is available, it can share the computation with other machines connected in the grid. In this way we can use the power of a supercomputer without huge cost, and CPU cycles that were previously wasted can be utilized. To perform grid computation on computers joined through the Internet, software that supports grid computation must be installed on each computer inside the VO (virtual organization). The software handles information queries, storage management, processing scheduling, authentication, and data encryption to ensure information security.
Quality estimation of machine translation outputs through stemmingijcsa
Machine translation is a challenging problem for Indian languages. New machine translators are being developed every day, but high-quality automatic translation is still a very distant dream; a correctly translated Hindi sentence is rarely found. In this paper we focus on the English-Hindi language pair, and in order to identify the correct MT output we present a ranking system that employs machine learning techniques and morphological features. The ranking requires no human intervention. We have also validated our results by comparing them with human ranking.
Algeria is engaging with determination on the path of renewable energies to bring global and lasting solutions to environmental challenges and to the problems of conserving fossil energy resources. Our study is interested in the wind energy sector, which seems one of the most promising, with a very high global growth rate. The object of this article is to estimate the wind resource of the region of Oran (Es Senia), an important stage in any planning and realization of a wind plant. In our work, we began with the processing of hourly wind data collected over a period of more than 50 years to evaluate the wind potential and determine its frequency distribution. Then we calculated the total electrical energy produced at various heights with three types of wind turbines. The analysis of the results shows that high-power wind turbines produce significant quantities of energy when the height of their hubs is increased to take advantage of stronger wind speeds.
MOVEMENT ASSISTED COMPONENT BASED SCALABLE FRAMEWORK FOR DISTRIBUTED WIRELESS...ijcsa
Intelligent networks are becoming more pervasive, and a new generation of applications is being deployed over peer-to-peer networks. Intelligent networks are attractive because of their role in improving scalability and enhancing performance by enabling direct, real-time communication among the participating network stations. A suitable solution for resource management in distributed wireless systems is required, one that supports fault-tolerant operation, locates requested resources along the shortest path, minimizes the overhead generated during network management, balances the load between the participating stations, and offers a high probability of lookup success. This article presents a Movement Assisted Component Based Scalable Framework (MAC-SF) for distributed networks, which manages distributed wireless resources and applications; transparently monitors the behavior of distributed wireless applications and attains accurate resource projections; manages the connections between participating network stations; and distributes the active objects in response to user requests and changing processing and network conditions. The system is also compared with some existing systems. The results show that MAC-SF performs better and can be used in any wireless network.
Effect of mesh grid structure in reducing hot carrier effect of nmos device s...ijcsa
This paper presents the critical effect of the mesh grid that should be considered during process and device simulation using modern TCAD tools in order to develop and optimize accurate electrical characteristics. Here, the computational modelling process of developing the NMOS device structure is performed in Athena and Atlas. The effect of the mesh grid on the net doping profile and on the n++ and LDD sheet resistance, which could be linked to the unwanted hot carrier effect, was investigated by varying the device grid resolution in both directions. It is found that the y-grid has a more profound effect on the doping concentration, the junction depth formation, and the value of the threshold voltage during simulation. An optimized mesh grid is obtained and tested for more accurate and faster simulation. Process parameters (such as oxide thickness and sheet resistance) as well as device parameters (such as the linear gain "beta" and the SPICE level 3 mobility roll-off parameter "theta") are extracted and investigated for further applications.
Hybrid hmmdtw based speech recognition with kernel adaptive filtering methodijcsa
This document summarizes a research paper that proposes a new approach for speech recognition using kernel adaptive filtering for speech enhancement and a hybrid HMM/DTW method for recognition. It first discusses adaptive filters and the LMS algorithm, then introduces kernel adaptive filters using the KLMS algorithm to transform input data into a high-dimensional feature space. Finally, it describes using HMM to train speech features and DTW for classification and recognition. The experimental results showed an improvement in recognition rates compared to traditional methods.
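The LMS update at the heart of such adaptive filters can be sketched in a few lines. The filter length, step size, and the "unknown system" below are illustrative assumptions for a system-identification toy problem, not values from the paper:

```python
import numpy as np

def lms(x, d, n_taps=4, mu=0.05):
    """Least-mean-squares adaptive filter: adapt weights w so that the
    filtered input tracks the desired signal d[k]."""
    w = np.zeros(n_taps)
    errors = []
    for k in range(n_taps - 1, len(x)):
        xk = x[k - n_taps + 1 : k + 1][::-1]   # most recent sample first
        e = d[k] - w @ xk                      # instantaneous error
        w += mu * e * xk                       # stochastic-gradient update
        errors.append(e)
    return w, np.array(errors)

# toy system identification: recover an unknown 4-tap FIR filter
rng = np.random.default_rng(1)
x = rng.standard_normal(4000)
true_w = np.array([0.6, -0.3, 0.2, 0.1])       # the "unknown system"
d = np.convolve(x, true_w)[: len(x)]           # desired signal = its output
w, errors = lms(x, d)
```

The kernel variant (KLMS) described in the paper applies the same error-driven update after mapping inputs into a feature space; this sketch shows only the linear base case.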
REAL TIME SPECIAL EFFECTS GENERATION AND NOISE FILTRATION OF AUDIO SIGNAL USI...ijcsa
Digital signal processing is being increasingly used for audio processing applications. Digital audio effects refer to all the algorithms used for enhancing sound in any step of a music production processing chain. Real-time audio effects generation is a highly challenging task in the field of signal processing. Nowadays, almost every high-end multimedia audio device performs digital signal processing in one form or another. For years musicians have used different techniques to give their music a unique sound; earlier, these techniques were implemented only after a lot of work and experimentation, but with the emergence of digital signal processing this task has been simplified to a great extent. In this article, the generation of special effects such as echo, flanging, reverberation, stereo, karaoke, and noise filtering is implemented in MATLAB, and an attractive GUI has been designed for the same.
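Of the effects listed, echo is the simplest to sketch: a single reflection y[n] = x[n] + g·x[n-d]. The sample rate, delay, and gain below are arbitrary illustration values (the paper's implementation is in MATLAB; this is an equivalent NumPy sketch):

```python
import numpy as np

def add_echo(signal, delay, gain=0.5):
    """Single-reflection echo: y[n] = x[n] + gain * x[n - delay]."""
    out = np.copy(signal).astype(float)
    out[delay:] += gain * signal[:-delay]   # add the delayed, scaled copy
    return out

fs = 8000                                # assumed sample rate, Hz
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)          # one second of a 440 Hz tone
y = add_echo(x, delay=fs // 4)           # 250 ms echo
```

Flanging and reverberation follow the same delay-line idea with a time-varying delay and multiple feedback taps, respectively.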
Graph coloring is the assignment of colors to a graph's vertices and edges in graph theory. Graph coloring can be divided into two types: vertex coloring and edge coloring. The condition followed in graph coloring is that incident vertices/edges do not share the same color. Several algorithms solve the graph coloring problem; some are offline and others are online. Offline means the graph is known in advance, while online means the edges of the graph arrive one by one as input, each edge must be colored as soon as it is added to the graph, and the main goal is to minimize the number of colors; in an online algorithm the color of an edge cannot be changed once assigned. In this paper, we improve an online algorithm for edge coloring. Vizing's theorem proves that if the maximum degree of a graph is Δ, then its edges can be colored in polynomial time using at most Δ + 1 colors; however, Vizing's algorithm is offline, i.e., it assumes the whole graph is known in advance. In the online setting, edges arrive one by one in a random permutation. Our online algorithm is inspired by a distributed offline algorithm of Panconesi and Srinivasan, referred to as the PS algorithm, which works in two rounds and which we extend by reusing colors online in multiple rounds.
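The natural online baseline is greedy coloring: give each arriving edge the smallest color unused at either endpoint. This uses at most 2Δ - 1 colors, weaker than the Δ + 1 of Vizing's offline theorem, which is the gap such online algorithms try to close. A sketch (the edge stream is an invented example):

```python
from collections import defaultdict

def online_edge_color(edges):
    """Greedy online edge coloring: as each edge arrives, assign the
    smallest color not already used at either endpoint. Colors are never
    changed afterwards; at most 2*Delta - 1 colors are ever needed."""
    used = defaultdict(set)     # vertex -> colors on its incident edges
    coloring = {}
    for u, v in edges:
        c = 0
        while c in used[u] or c in used[v]:
            c += 1
        coloring[(u, v)] = c
        used[u].add(c)
        used[v].add(c)
    return coloring

# an invented edge stream on a small graph with maximum degree 3
stream = [(0, 1), (0, 2), (1, 2), (0, 3), (1, 3)]
colors = online_edge_color(stream)
```

The bound holds because an arriving edge sees at most Δ - 1 colors blocked at each endpoint, so some color in {0, ..., 2Δ - 2} is always free.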
In this paper a new evolutionary algorithm for continuous nonlinear optimization problems is surveyed. This method is inspired by the life of a bird, the cuckoo. The Cuckoo Optimization Algorithm (COA) is evaluated using the Rastrigin function, a nonlinear continuous function commonly used to benchmark optimization algorithms. The efficiency of the COA is studied by obtaining optimal solutions of the Rastrigin function in various dimensions. The same function was also solved with the FA and ABC algorithms; comparing the results shows that the COA performs better than the other algorithms. Applying the algorithm to this test function has proven its capability to deal with difficult optimization problems.
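The Rastrigin benchmark itself is straightforward to state: f(x) = 10n + Σ(x_i² - 10 cos(2πx_i)), with global minimum 0 at the origin and many regularly spaced local minima, which is what makes it a hard test for such algorithms. A minimal sketch:

```python
import numpy as np

def rastrigin(x):
    """Rastrigin benchmark: f(x) = 10*n + sum(x_i^2 - 10*cos(2*pi*x_i)).
    Global minimum f = 0 at x = 0; heavily multimodal elsewhere."""
    x = np.asarray(x, dtype=float)
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

best = rastrigin(np.zeros(5))        # value at the global optimum
nearby = rastrigin(np.full(5, 0.5))  # a nearby point, far worse in value
```

Any of the compared algorithms (COA, FA, ABC) would use this as the fitness function to minimize.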
Security & privacy issues of cloud & grid computing networksijcsa
Cloud computing is a new field in Internet computing that provides novel perspectives on internetworking technologies. Cloud computing has become a significant technology in the field of information technology. Security of confidential data is a very important area of concern, as it can lead to very big problems if unauthorized users gain access to it. Cloud computing should have proper techniques where data is segregated properly for data security and confidentiality. This paper strives to compare and contrast cloud computing with grid computing, along with tools, simulation environments, and tips to store data and files safely in the cloud.
Augmented split –protocol; an ultimate d do s defenderijcsa
Distributed Denial of Service (DDoS) attacks have become a daunting problem for businesses, state administrators, and computer system users. Prevention and detection of DDoS attacks is a major research topic for researchers throughout the world. As new remedies are developed to prevent or mitigate DDoS attacks, invaders continually evolve new methods to circumvent these new procedures. In this paper, we describe various DDoS attack mechanisms, categories, the scope of DDoS attacks, and their existing countermeasures. In response, we propose to introduce the DDoS-resistant Augmented Split-protocol (ASp). The migratory nature and role-changeover ability of servers in the Split-protocol architecture avoid bottlenecks at the server side. It also offers the unique ability to avoid server saturation and compromise from DDoS attacks. The goal of this paper is to present the concept and performance of ASp as a defensive tool against DDoS attacks.
AN ENHANCED EDGE ADAPTIVE STEGANOGRAPHY APPROACH USING THRESHOLD VALUE FOR RE...ijcsa
The document summarizes an enhanced edge adaptive steganography approach using a threshold value for region selection. It aims to improve the quality and modification rate of a stego image compared to Sobel and Canny edge detection techniques. The proposed approach uses a threshold value to select high frequency pixels from the cover image for data embedding using LSBMR. Experimental results on 100 images show about a 0.2-0.6% improvement in image quality measured by PSNR and a 4-10% improvement in modification rate measured by MSE compared to Sobel and Canny edge detection.
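The LSBMR embedding this approach builds on is a refinement of plain LSB replacement. The sketch below shows only that simpler base operation; LSBMR's pixel-pair matching and the paper's threshold-based edge-region selection are omitted, and the cover array and message are invented for illustration:

```python
import numpy as np

def embed_lsb(cover, message_bits):
    """Plain LSB replacement: write each message bit into the least
    significant bit of successive cover pixels, in raster order."""
    stego = cover.copy()
    flat = stego.ravel()                    # view into the copy
    for i, bit in enumerate(message_bits):
        flat[i] = (flat[i] & 0xFE) | bit    # clear the LSB, then set it
    return stego

def extract_lsb(stego, n_bits):
    """Read the message back from the first n_bits pixel LSBs."""
    return [int(p & 1) for p in stego.ravel()[:n_bits]]

cover = np.arange(64, dtype=np.uint8).reshape(8, 8)  # invented cover image
bits = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(cover, bits)
```

Because each pixel changes by at most 1, distortion is small; edge-adaptive schemes improve on this further by confining changes to high-frequency regions where they are harder to detect.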
Mining sequential patterns for interval basedijcsa
Sequential pattern mining finds frequent subsequences or patterns in given sequences. The TPrefixSpan algorithm finds the relevant frequent patterns from sequential patterns formed using interval-based events. In our proposed work, we add multiple constraints (item, length, and aggregate) to the interval-based TPrefixSpan algorithm; adding these constraints improves the efficiency and effectiveness of the algorithm. The proposed constraint-based algorithm, CTPrefixSpan, has been applied to a synthetic medical dataset. The algorithm can also be applied to stock market analysis, DNA sequence analysis, etc.
KEYWORDS: Sequential patterns, temporal patterns, constraints, interval-based events.
Nocs performance improvement using parallel transmission through wireless linksijcsa
This document discusses improving the performance of Network-on-Chip (NoC) using parallel transmission through wireless links. It proposes a method for transmitting and receiving flits in parallel through wireless links using a parallel buffer structure. Simulation results show this approach can reduce energy consumption by up to 30% for all-to-all traffic and 15% for transpose traffic. It can also improve latency as a function of packet injection rate by up to 71% for all-to-all traffic and 19% for transpose traffic.
This document proposes a computer-aided lung cancer classification system using curvelet features and an ensemble classifier. It first pre-processes CT images using adaptive histogram equalization to improve contrast. Then it segments the images using kernelized fuzzy c-means clustering. Curvelet features are extracted from the segmented regions and an ensemble classifier is applied to classify regions as benign or malignant. The proposed approach achieves reliable and accurate classification results compared to existing methods, with better performance metrics like accuracy, sensitivity and specificity.
Cancerous lung nodule detection in computed tomography imagesTELKOMNIKA JOURNAL
Diagnosing computed tomography images (CT images) can take a radiologist a lot of time, and some cancerous nodules in these images may be missed. Therefore, in this paper a novel enhancement and detection algorithm for cancerous nodules is proposed for diagnosing CT images. The algorithm is divided into three main stages. In the first stage, suspicious regions are enhanced using a modified LoG algorithm. In the second stage, potential cancerous nodules are detected based on their visual appearance in the lung. Finally, a five-texture-feature analysis algorithm is implemented to reduce the number of detected false-positive regions. The algorithm was evaluated on 60 cases (normal and cancerous) and shows high sensitivity in detecting cancerous lung nodules, with a TP ratio of 97% and an FP ratio of 25 clusters/image.
A novel CAD system to automatically detect cancerous lung nodules using wav...IJECEIAES
A novel cancerous nodule detection algorithm for computed tomography images (CT images) is presented in this paper. CT images are large, high-resolution images, and in some cases cancerous lung nodule lesions may be missed by the radiologist due to fatigue. The CAD system proposed in this paper can help the radiologist detect cancerous nodules in CT images. The proposed algorithm is divided into four stages. In the first stage, an enhancement algorithm is implemented to highlight the suspicious regions. In the second stage, the regions of interest are detected. Adaptive SVM and wavelet transform techniques are then used to reduce the detected false-positive regions. The algorithm was evaluated on 60 cases (normal and cancerous) and shows high sensitivity in detecting cancerous lung nodules, with a TP ratio of 94.5% and an FP ratio of 7 clusters/image.
An Enhanced ILD Diagnosis Method using DWTIOSR Journals
1. The document describes an enhanced method for diagnosing Interstitial Lung Disease (ILD) using Discrete Wavelet Transform (DWT).
2. The method involves acquiring CT lung images, enhancing edges using DWT, segmenting the lung region, segmenting blood vessels, extracting texture features from vessels, and classifying images as normal or ILD using Fuzzy Support Vector Machines (FSVM).
3. Wavelet edge enhancement improves segmentation of vessels. Feature extraction using co-occurrence matrices and discriminant analysis reduces dimensions before FSVM classification. The method achieves accurate ILD diagnosis compared to existing approaches.
IRJET - A Review on Segmentation of Chest RadiographsIRJET Journal
This document reviews and compares various techniques for segmenting anatomical structures from chest radiographs. It begins with an introduction to image segmentation and its importance in medical imaging. It then describes 12 different segmentation methods that have been used for segmenting lungs and other structures from chest radiographs, including active shape models, active appearance models, pixel classification, visual saliency, convolutional neural networks, and others. For each method, it provides details on the algorithm and compares their performance based on accuracy, sensitivity and specificity. In conclusion, it discusses some of the challenges of medical image segmentation and suggests that hybrid approaches combining multiple techniques may be most effective.
This document summarizes a paper that proposes an automated method for segmenting lungs from digital tomosynthesis (DTS) images. DTS produces blurred slice images of the chest that are harder to segment than CT images. The proposed method combines three approaches: intensity-based segmentation, gradient-based segmentation, and energy-based segmentation. It starts from a previously published gradient-based dynamic programming approach but adds improvements to increase robustness. Experimental results on simulated DTS images generated from CT images show the combined method reduces incorrectly segmented lung regions compared to previous methods.
Using Distance Measure based Classification in Automatic Extraction of Lungs ...sipij
We introduce in this paper a reliable method for automatic extraction of lung nodules from chest CT images and shed light on the details of using the Weighted Euclidean Distance (WED) for classifying lung connected components into nodule and non-nodule. We also explain the use of Connected Component Labeling (CCL) in an effective and flexible method for extracting the lung area from chest CT images with a wide variety of shapes and sizes. This lung extraction method makes use of some morphological operations as well as CCL. Our tests have shown that the performance of the introduced method is high. Finally, to check whether the method works correctly for both healthy and patient CT images, we tested it on images of healthy persons and demonstrated that its overall performance is satisfactory.
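The CCL step mentioned above can be sketched with a plain BFS flood fill over a binary mask (4-connectivity assumed; the tiny mask is invented for illustration, and production code would typically use a library routine instead):

```python
import numpy as np
from collections import deque

def label_components(mask):
    """4-connected component labeling of a binary mask via BFS flood fill.
    Returns an integer label image and the number of components found."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for i, j in zip(*np.nonzero(mask)):
        if labels[i, j]:
            continue                        # already part of a component
        current += 1
        queue = deque([(i, j)])
        labels[i, j] = current
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels, current

mask = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 1],
                 [0, 0, 1, 1]])
labels, n = label_components(mask)
```

In the paper's pipeline, each labeled component would then be passed to the WED classifier to decide nodule versus non-nodule.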
An algorithm for obtaining the frequency and the times of respiratory phases...IJECEIAES
This work proposes a computational algorithm which extracts the frequency, timings and signal segments corresponding to respiratory phases, through buccal and nasal acoustic signal processing. The proposal offers a computational solution for medical applications which require on-site or remote patient monitoring and evaluation of pulmonary pathologies, such as coronavirus disease 2019 (COVID-19). The state of the art presents a few respiratory evaluation proposals through buccal and nasal acoustic signals. Most proposals focus on respiratory signals acquired by a medical professional, using stethoscopes and electrodes located on the thorax. In this case the signal acquisition process is carried out through the use of a low cost and easy to use mask, which is equipped with strategically positioned and connected electret microphones, to maximize the proposed algorithm’s performance. The algorithm employs signal processing techniques such as signal envelope detection, decimation, fast Fourier transform (FFT) and detection of peaks and time intervals via estimation of local maxima and minima in a signal’s envelope. For the validation process a database of 32 signals of different respiratory modes and frequencies was used. Results show a maximum average error of 2.23% for breathing rate, 2.81% for expiration time and 3.47% for inspiration time.
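The FFT-peak part of such a pipeline can be sketched as follows. The sample rate and the synthetic 0.25 Hz envelope are invented for the example; a real signal would first need the envelope detection and decimation steps the paper describes:

```python
import numpy as np

def breathing_rate(signal, fs):
    """Dominant frequency (Hz) of a respiratory envelope via the FFT peak."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))  # remove DC first
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

fs = 100                                  # assumed envelope sample rate, Hz
t = np.arange(60 * fs) / fs               # one minute of signal
envelope = 1 + 0.5 * np.sin(2 * np.pi * 0.25 * t)  # 0.25 Hz = 15 breaths/min
rate_hz = breathing_rate(envelope, fs)
```

Inspiration and expiration timings would then come from locating local maxima and minima of the envelope between successive breath cycles, as the paper outlines.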
Lung Nodule Segmentation in CT Images using Rotation Invariant Local Binary P...IDES Editor
As lung cancer is the leading cause of cancer death in the medical field, computed tomography (CT) scans of the thorax are widely applied in diagnosis to identify lung cancer. In this paper, a rotation-invariant Local Binary Pattern (LBP) technique is used for segmentation of various lung nodules from lung CT cancer data sets. It is tested on various lung data sets from the teaching files of the Casimage database and the National Cancer Institute (NCI) National Biomedical Imaging Archive (NBIA). The results show segmented nodules with clear boundaries, which is helpful in the diagnosis of lung cancer. Further, the results are compared with the watershed segmentation method, showing that the LBP-based method yields better segmentation accuracy.
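Rotation invariance in LBP is usually obtained by taking the minimum value over all circular rotations of the 8-neighbour bit pattern. A minimal sketch follows; the neighbour ordering and the "neighbour >= centre" threshold convention are one common choice, not necessarily the paper's exact variant:

```python
def ri_lbp_code(bits):
    """Map an 8-neighbour binary pattern to its rotation-invariant code:
    the minimum value over all circular rotations of the bit string."""
    values = []
    for r in range(8):
        rotated = bits[r:] + bits[:r]
        values.append(sum(b << i for i, b in enumerate(rotated)))
    return min(values)

def lbp_3x3(patch):
    """Rotation-invariant LBP of the centre pixel of a 3x3 patch:
    threshold the 8 neighbours against the centre, then canonicalize."""
    c = patch[1][1]
    # clockwise neighbour order starting at the top-left corner
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [1 if patch[y][x] >= c else 0 for y, x in coords]
    return ri_lbp_code(bits)
```

Because a rotated texture only circularly shifts the neighbour bits, all rotations of the same local pattern collapse to one code, which is what makes the descriptor rotation-invariant.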
The document discusses brachytherapy treatment planning. It describes the key steps in brachytherapy treatment planning which include defining the planning target volume and organs at risk, reconstructing implanted sources or catheters, calculating and optimizing the dose distribution, and evaluating the dose distribution. Treatment planning can be done in 2D or 3D. 3D treatment planning involves reconstructing catheters or applicators using imaging modalities like CT or MRI to localize targets and organs at risk and calculate and evaluate the dose more accurately. Dose calculation is described using the TG-43 formalism. Dose evaluation is done using isodose curves and DVH similar to external beam therapy.
A new procedure for lung region segmentation from computed tomography imagesIJECEIAES
Lung cancer is the leading cause of cancer death among people worldwide. The primary aim of this research is to establish an image processing method for lung cancer detection. This paper focuses on lung region segmentation from computed tomography (CT) scan images. In this work, a new procedure for lung region segmentation is proposed. First, the lung CT scan images will undergo an image thresholding stage before going through two morphological reconstruction and masking stages. In between morphological and masking stages, object extraction, border change, and object elimination will occur. Finally, the lung field will be annotated. The outcomes of the proposed procedure and previous lung segmentation methods i.e., the modified watershed segmentation method is compared with the ground truth images for performance evaluation that will be carried out both in qualitative and quantitative manners. Based on the analyses, the new proposed procedure for lung segmentation, denotes better performance, an increment by 0.02% to 3.5% in quantitative analysis. The proposed procedure produced better-segmented images for qualitative analysis and became the most frequently selected method by the 22 experts. This study shows that the outcome from the proposed method outperforms the existing modified watershed segmentation method.
This document summarizes a research paper that developed a GPU-based algorithm for CT image reconstruction from undersampled and noisy projection data. The algorithm uses an algebraic reconstruction method and implements the LSQR iterative solver on a GPU using CUDA programming. By taking advantage of massive parallel processing on the GPU, the algorithm is able to reconstruct CT images with higher resolution than previous methods without losing image quality or increasing computation time significantly. The paper presents the mathematical model, reconstruction algorithm steps, and GPU implementation details.
YOLOv8-Based Lung Nodule Detection: A Novel Hybrid Deep Learning Model Proposal (IRJET Journal)
This document proposes a novel deep learning model for automated real-time detection of lung nodules using chest CT scans. A two-stage model is proposed that first uses a CNN to detect nodule regions with 94% accuracy, then fine-tunes a YOLOv8 object detection model on the detected regions. When tested on the LUNA16 dataset, the YOLOv8m configuration achieved 92.3% accuracy, 88.5% sensitivity, and 53.5% mean average precision for nodule detection, outperforming existing methods. The proposed hybrid model shows potential for improving nodule detection accuracy and efficiency for early lung cancer screening.
Abstract
This paper presents a survey on classification techniques for lung nodules. It covers the different methods of classification, segmentation, and detection. Malignant cells present in the lungs, called nodules, are classified to guide the treatment process. Thresholding and robust segmentation techniques are used in the segmentation process, and the extracted feature set is used for classification. Low-dose CT (computed tomography) images are used. This survey collects information about the efficient techniques used for nodule classification. Lung cancer is among the deadliest diseases in the world, so knowledge of this cancer is essential. In the early stages, micro-nodules form and can later develop into cancer cells. Among the cancer-affected population, about 20% of deaths are due to lung cancer. If nodules are found at an early stage, the patient's lifetime can be extended. The main focus of this paper is the classification and segmentation of lung nodules, and the different procedures involved in nodule detection are reviewed. CT is the most appropriate imaging technique for obtaining anatomical information about lung nodules and the surrounding structures; here, low-dose CT (LDCT) images are used. The paper presents various approaches to nodule classification, surveying the different techniques used for the detection and classification of nodules in the lungs. By differentiating the nodules from the anatomical parts of the lungs, the nodules are identified.
Keywords: PLSA, Robust Segmentation and Partitioning.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Prediction of Lung Cancer Using Image Processing Techniques: A Review (aciijournal)
The document discusses previous research on predicting lung cancer using image processing techniques. Various studies are reviewed that used techniques like segmentation, feature extraction and classification on CT scan images to detect lung cancer. Classification approaches discussed include support vector machines, neural networks, fuzzy logic and genetic algorithms. Accuracy of prediction ranged from 80-99% depending on the techniques and image datasets used. The summary highlights several studies that applied methods like segmentation, feature extraction and neural network or SVM classification to CT images to detect lung nodules and predict cancer.
Prediction of Lung Cancer Using Image Processing Techniques: A Review (aciijournal)
Prediction of lung cancer is a most challenging problem due to the structure of cancer cells, most of which overlap each other. Image processing techniques are widely used for the prediction of lung cancer, as well as for early detection and treatment. To predict lung cancer, various features are extracted from the images; therefore, pattern recognition based approaches are useful for this task. Here, a comprehensive review of the prediction of lung cancer by previous researchers using image processing techniques is presented, along with a summary of that work.
International Journal on Computational Sciences & Applications (IJCSA) Vol.4, No.2, April 2014
DOI:10.5121/ijcsa.2014.4202
Efficient Lung Air Volume Estimation Using
Human Respiratory Image Sequences
P. Deepak1 and T. Kumesh2
1PG Scholar, 2Assistant Professor of the Department
Department of Computer Science and Engineering,
PSN Engineering College,
Tirunelveli-627152, India
ABSTRACT
The proposed technique is capable of segmenting the lung's air and its soft tissues and then estimating
the lung's air volume and its variations throughout the image sequence. The work presents a methodology
that consists of three steps. In the first step, a CT image sequence is given as input; the histograms
of all images within the sequence are calculated and overlaid to form the sequence's combined
histogram, from which the lung air area is extracted. In the second step, the lung air area is segmented
using an optimum-threshold-based method. In the third step, the voxels in the rendered volume are counted
and the percentage of air consumption is estimated. The results indicate a very good ability of the
proposed method to estimate the lung's air volume and its variations in a respiratory image sequence.
KEYWORDS
Air volume, Computed Tomography (CT) image, lung, segmentation, Volume Rendering Approach
I. INTRODUCTION
Statistics show that the lung disease death rate is still on the rise. According to the latest report by
the American Lung Association, death rates due to lung disease are currently increasing, while
death rates due to other leading causes, such as heart disease, cancer, and stroke, are declining.
The high mortality rates of lung diseases have encouraged many researchers to focus their efforts on
improving diagnosis and treatment methods. Many lung disease diagnosis and treatment
methods involve a procedure in which respiratory image sequences are analyzed. The image
sequence may consist of several static breath-hold images or a respiratory-gated free-breathing
image sequence. Estimation of lung air volume and its variations throughout an image sequence has
been proposed by several groups for several applications. The disadvantage is the lack of a reliable
approach to obtain accurate upper and lower threshold values. The air volume variations in the
sequence are then estimated by calculating the whole-lung volume differences within the image
sequence, and the lung's air volume in each image needs to be estimated from the whole lung
volume, which usually results in higher errors in estimating both the lung's air volume and its
variations throughout the sequence. A CT image of the deflated lung would be very useful in tumor ablative
procedures that are usually performed after the target lung is completely deflated before starting the
surgery, such as brachytherapy for treating lung cancer. Radiation pneumonitis is also one of
the conditions that can be assessed based on measuring the lung's air volume. This measurement
can be done noninvasively using computed tomography (CT) image-sequence segmentation in
order to determine the extent of this disease and treat it properly to prevent radiation fibrosis. The
proposed method analyzes a number of pre-operative breath-hold CT images at different lung
volumes controlled by a ventilator and a volume-meter transducer; two successive CT
images in the sequence are registered with each other to obtain the registration parameters. Each
registration parameter corresponding to the totally deflated lung is then determined using
extrapolation and described as a function of the lung's air volume variation. Finally, the CT
image of the totally deflated lung is reconstructed by registering the pre-operative image of the
least inflated lung using the extrapolated parameters. Because of the highly complex geometry of
the airways and alveoli, segmentation techniques using deformable models or level-set approaches
are not suitable for lung air segmentation. As such, threshold-based segmentation is frequently
the technique of choice for image-based lung air volume estimation. However, finding the
optimum segmentation threshold for a specific application is usually a challenging task. A priori
information, such as physical density, or statistical analysis, such as the image's histogram, is usually
useful for selecting a more appropriate threshold. For example, the intensity value that maximizes
the separation between two peaks of a histogram is typically used as a rough estimate of the
threshold between the corresponding segmentation classes.
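This rough estimate can be made concrete with a minimal pure-Python sketch (not taken from the paper) that locates the valley between the two dominant histogram peaks and uses it as the threshold:

```python
def histogram(pixels, bins=256):
    """Count integer pixel intensities into bins 0..bins-1."""
    h = [0] * bins
    for p in pixels:
        h[p] += 1
    return h

def valley_threshold(hist):
    """Return the bin with the minimum count strictly between the two
    highest peaks of the histogram (peaks assumed well separated)."""
    first = max(range(len(hist)), key=lambda i: hist[i])
    second = max((i for i in range(len(hist)) if abs(i - first) > 1),
                 key=lambda i: hist[i])
    lo, hi = sorted((first, second))
    # the valley: the least-populated intensity between the peaks
    return min(range(lo + 1, hi), key=lambda i: hist[i])
```

For a bimodal distribution with peaks at, say, intensities 20 (background air) and 120 (soft tissue), the returned threshold falls between the two peaks.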
R. B. Reger estimated total lung capacity (TLC) using an accurate and rapid radiographic method.
It uses a step-wise multiple regression model that allows total lung capacity to be derived as
follows: posteroanterior and lateral films are divided into the standard sections described in the
text, and the width, depth, and height of sections 1 and 4 are measured in centimetres.
G. Emmanuel described a method for collecting and measuring the quantity of N2
washed out of the lungs every half minute during oxygen breathing. The method
requires considerably more time, skill in handling the equipment, and calculation than the
conventional method, and it may therefore be unsuitable for routine use.
R. J. Pierce used a radiographic method that measures the displacement volumes of
the lungs as geometric structures in space, using a digitiser and computer. It was developed from
determinations of the cross-sectional shape of the chest and its contained structures in post-mortem
anatomical sections and from computerised tomography (CT) scans in living subjects.
Alan E. Schlesinger followed a method in which the lung margin was traced on the CT scan for each
axial section (using soft-tissue window settings) by one observer using the cursor on the CT
scanner display. The CT computer then calculated the cross-sectional area of the region of
interest, multiplied the area by the slice thickness to calculate the lung volume for each axial
section, and summed the individual section volumes to obtain the total lung volume.
J. Clausen reviewed the techniques available for estimating total lung capacity from standard
chest radiographs in children and infants as well as adults. Computed tomography and
magnetic resonance techniques are used to measure absolute lung volumes and offer the
theoretical advantage that the results in individual subjects are less affected by variations in
thoracic shape than are measurements made using conventional chest radiographs.
H. U. Kauczor designed a method to determine lung volumes using inspiratory and expiratory
helical CT with two-dimensional (2D) and three-dimensional (3D) post-processing and to
compare the accuracy of those measurements with pulmonary function test results. However, the
resulting rough segmentation sometimes requires additional fine-tuning steps to make the
segmentation contours more accurate. For estimating lung air volume and/or its variations,
Gamsu et al. estimated total lung capacity (TLC) and forced expiratory volume using
posteroanterior and lateral X-ray images of the chest. Their estimation method consisted of
manual segmentation of the lung's X-ray images followed by a set of distance measurements and
volume calculations. Kauczor et al. then used a threshold-based technique to segment the whole
lung automatically from a static helical CT image sequence acquired at deep inspiration and deep
expiration in order to estimate different lung volumes, including the tidal volume. The concept
proposed in that study was effective, but its implementation using static breath-hold CT
images may not be practical in clinical settings. In contrast to static breath-hold images, the
free-breathing 4DCT is more suitable in the clinic, as it is straightforward to implement, less
time consuming, and more convenient for patients. In the proposed method, a technique for accurate
image-sequence segmentation is introduced based on a novel image-sequence analysis.

The method is proposed to estimate the lung's air volume and its variations in CT image
sequences using the sequence's combined histogram. Experiments were conducted on porcine left
lungs, using a breath-hold CT image sequence with known lung air volumes, to demonstrate the
validity of the proposed method. Finally, the results indicate a very good ability of the proposed
technique to estimate the lung's air volume and its variations in a CT image sequence.
II. METHODOLOGY
A. Initialization
Image segmentation is defined as the process of assigning each image pixel to its particular
class. In segmentation methods, finding the threshold or initial seed is the first step. There is currently
no segmentation method that yields acceptable results for every medical image, and an inappropriate
choice usually results in significant errors during the segmentation process.
The proposed lung air volume estimation method is based on a novel image-sequence
segmentation technique that determines the threshold values systematically. The concept behind
this technique takes advantage of the fact that the segmentation classes of background air, lung
air, and soft tissue appear in all images in the sequence, though with variable shape and size. The
histogram threshold method is a good candidate for gray-level image segmentation. It is based on
the shape properties of the histogram, such as the peaks, valleys, and curvatures of the smoothed
histogram. A 2D histogram is formed by the Cartesian product of the original 1D gray-level histogram
and the 1D local-average gray-level histogram, the latter generated by applying a local window to each
pixel of the image and then calculating the average of the gray levels within the window. The change
in pixel value in the horizontal or vertical directions appears slow, and the gradation-change
continuity appears strong compared to the change in the diagonal direction.
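The first step, overlaying the per-image histograms into one combined histogram for the sequence, can be sketched as follows (a toy pure-Python version; real CT slices would be large intensity arrays):

```python
def image_histogram(image, bins=256):
    """Histogram of one image given as nested lists of integer gray levels."""
    h = [0] * bins
    for row in image:
        for p in row:
            h[p] += 1
    return h

def combined_histogram(sequence, bins=256):
    """Overlay (bin-wise sum) the histograms of every image in the sequence.
    Because background air, lung air, and soft tissue appear in all images,
    their peaks are reinforced in the combined histogram."""
    combined = [0] * bins
    for image in sequence:
        for i, count in enumerate(image_histogram(image, bins)):
            combined[i] += count
    return combined
```

The reinforced peaks of the combined histogram then serve as the basis for choosing thresholds that are valid across the whole sequence, rather than per image.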
Thresholding is a simple segmentation technique that divides the image into specific classes by
comparing each image pixel value with a number of intensity values called thresholds. The most
important step in the thresholding method is fine-tuning the threshold values, because those values
have an important influence on the accuracy of the segmentation. During the thresholding process,
individual pixels in an image are marked as "object" pixels if their value is greater than some
threshold value and as "background" pixels otherwise. This convention is known as threshold
above. Variants include threshold below, which is the opposite of threshold above; threshold inside,
where a pixel is labeled "object" if its value is between two thresholds; and threshold outside,
which is the opposite of threshold inside.
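The four conventions can be written down directly; the sketch below is illustrative, with the two thresholds chosen arbitrarily rather than taken from the paper:

```python
def label_pixels(image, t_low, t_high, mode="inside"):
    """Label each pixel as object (1) or background (0) under the four
    thresholding conventions: above, below, inside, outside."""
    def is_object(v):
        if mode == "above":
            return v > t_high          # object if brighter than the threshold
        if mode == "below":
            return v < t_low           # opposite: object if darker
        if mode == "inside":
            return t_low <= v <= t_high  # object if between the two thresholds
        if mode == "outside":
            return v < t_low or v > t_high  # opposite of inside
        raise ValueError(mode)
    return [[1 if is_object(v) else 0 for v in row] for row in image]
```

For lung air segmentation, the "inside" convention is the natural fit: lung air intensities lie between the darker background air and the brighter soft tissue.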
In a genetic algorithm (GA), a population of strings that encode candidate solutions to an optimization problem evolves toward better solutions. The fitness function is defined over the genetic representation and measures the quality of the represented solution. Once the genetic representation and the fitness function are defined, a GA initializes a population of solutions and then improves it through repeated application of the mutation, crossover, inversion, and selection operators. A GA is used to perform the optimization in the proposed method. An optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from an allowed set and computing the value of the function. More generally, optimization includes finding the "best available" values of some objective function over a defined domain, covering a variety of objective functions and domains.
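A minimal GA sketch in Python, assuming integer (lower, upper) threshold pairs as chromosomes and a toy fitness function; the population size, rates, and target values here are illustrative choices, not parameters taken from the paper:

```python
import random

def genetic_search(fitness, bounds, pop_size=30, generations=60,
                   mutation_rate=0.2, seed=0):
    """Minimal integer GA: selection, crossover, and mutation over (lo, hi) pairs."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [(rng.randint(lo, hi), rng.randint(lo, hi)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # selection: keep the fittest half
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a[0], b[1])                   # one-point crossover
            if rng.random() < mutation_rate:       # mutation: nudge the first gene
                child = (min(hi, max(lo, child[0] + rng.randint(-3, 3))), child[1])
            children.append(child)
        pop = parents + children                   # elitist replacement
    return max(pop, key=fitness)

# Toy fitness: prefer thresholds near the hypothetical target pair (40, 200).
best = genetic_search(lambda c: -abs(c[0] - 40) - abs(c[1] - 200), (0, 255))
print(best)
```

Because the parents are carried over unchanged, the best fitness in the population never decreases across generations, which is the property that lets such a simple scheme refine integer thresholds reliably.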
For segmentation, active contour models, or "snakes," provide a method of delineating and linking edges in images when the gradient information is degraded by image noise or when the boundary of the object being imaged is naturally vague. A major task in applying an active contour model is correctly setting the contour regularization parameters that control the smoothness of the result. The active contour model is a framework for delineating an object outline from a possibly noisy 2D image. It attempts to minimize an energy associated with the current contour; compared with other techniques, the active contour model can find edges, lines, and subjective contours efficiently. To find the contour, the model minimizes this energy functional and exhibits dynamic behavior.
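The energy functional being minimized can be written out explicitly; this is the standard formulation due to Kass, Witkin, and Terzopoulos, not notation taken from this paper:

```latex
E_{\text{snake}} \;=\; \int_0^1 \Big[
  \underbrace{\tfrac{1}{2}\big(\alpha\,|v'(s)|^2 + \beta\,|v''(s)|^2\big)}_{\text{internal (regularization)}}
  \;+\;
  \underbrace{E_{\text{image}}\big(v(s)\big)}_{\text{external (data)}}
\Big]\, ds
```

Here $v(s)$ is the parameterized contour, $\alpha$ and $\beta$ are the regularization parameters controlling elasticity and stiffness (the smoothness parameters mentioned above), and $E_{\text{image}}$ is commonly chosen as $-|\nabla I(v(s))|^2$ so that the contour is attracted to strong edges.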
Figure 1: Results for Segmentation Using Contours.
B. Framework
Image segmentation plays a major role in many applications of biomedical imaging, such as diagnosis, localization of pathology, treatment planning, computer-aided surgery, quantification of tissue volumes, and the study of anatomical structure. In the proposed method, each image in the sequence consists of three segmentation classes: background, lung's air, and soft tissue. The method takes an image sequence as input. Histograms of all images within the sequence are calculated and overlaid to form the sequence's combined histogram, which is then smoothed to remove noise-like variations. The convergence points are the intensity values at which all the separate histograms converge together; to find them, the standard deviation across all the histograms in the sequence is calculated for each intensity value, and the converging areas are searched for the points of minimum standard deviation, which are selected as the convergence points A and B.
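The pipeline above can be sketched as follows; this is a NumPy illustration with assumed bin counts, smoothing width, and search windows for A and B (real data would set the windows from the histogram shape):

```python
import numpy as np

def convergence_points(images, bins=256, smooth=5):
    """Overlay per-image histograms, smooth them, and locate the two intensity
    values where the curves converge (minimum standard deviation)."""
    hists = np.stack([np.histogram(im.ravel(), bins=bins, range=(0, bins))[0]
                      for im in images]).astype(float)
    kernel = np.ones(smooth) / smooth               # moving-average smoothing
    hists = np.stack([np.convolve(h, kernel, mode="same") for h in hists])
    std = hists.std(axis=0)                         # spread across the sequence
    # Hypothetical search windows for the two converging regions: A before the
    # air peak, B before the tissue peak.
    a = int(np.argmin(std[10:100]) + 10)
    b = int(np.argmin(std[100:250]) + 100)
    return a, b

imgs = [np.random.randint(0, 256, (32, 32)) for _ in range(5)]
a, b = convergence_points(imgs)
print(a, b)
```

On real CT sequences, the minimum-standard-deviation criterion picks out the intensities whose voxel counts barely change as the lung inflates and deflates, which is exactly what makes A and B stable threshold guesses.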
Figure 2: Block Diagram for Estimation of Lung’s Air Volume and Its Variations.
The convergence point A, which corresponds to minimal voxel-count variation with respect to volume variation during respiration, is the best point to represent the interface regions. After point A, intensity values correspond to air mixed with a small amount of alveolar tissue. This continues until point B, where lung tissue starts to dominate with a small amount of air. As such, each histogram curve between points A and B provides a good approximation of the lung's air volume. We argue that point B, the second convergence point in the combined histogram, is a good initial guess for the upper threshold. The convergence points are then passed to the next block as initial guesses for the optimization algorithm, which refines them into the optimized threshold values. Given that the lower and upper segmentation thresholds take integer values in the image space, the optimization can be performed simply using a genetic algorithm. Segmentation is performed with the optimized threshold values, and the voxels segmented as the lung's air are counted for each image individually. Finally, the lung's air volume is calculated for each image by multiplying the number of air voxels by the voxel size, and the air volume variations within the sequence are obtained by subtracting the lung's air volumes of successive images.
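The counting and volume steps can be sketched as follows (a minimal NumPy illustration; the thresholds and voxel size are made-up example values):

```python
import numpy as np

def air_volume_series(images, t_lo, t_hi, voxel_volume_mm3):
    """Count voxels segmented as lung air per image and convert to volume."""
    volumes = []
    for im in images:
        air_voxels = np.count_nonzero((im >= t_lo) & (im <= t_hi))
        volumes.append(air_voxels * voxel_volume_mm3)
    # Air-volume variation between successive images in the sequence.
    variations = np.diff(volumes)
    return volumes, variations

imgs = [np.full((4, 4), 30), np.full((4, 4), 30)]
imgs[1][:2] = 200                      # half the voxels become tissue-like
vols, dv = air_volume_series(imgs, 10, 100, voxel_volume_mm3=1.0)
print(vols, dv)                        # second image loses 8 air voxels
```

The voxel size comes from the scan metadata (in-plane spacing times slice thickness), so the counted voxels convert directly to physical volume.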
C. Result Analysis
The analysis uses static breath-hold CT images from a respiratory sequence in which the lung's air volume was known for each image. The lung was obtained from an 80 kg adult pig, and the air volume inside the lung was controlled by a ventilator.
Figure 3: The Combined Sequence Histogram Obtained for 3D Images.
Figure 4: The Convergence Points of the Combined Histogram.
In the figure above, the convergence points are indicated by arrows near the start and end of the graph and denoted A and B. The histogram regions before point A, between points A and B, and after point B correspond to the background-air volume, the lung's air volume, and the soft-tissue volume, respectively. The convergence points are then analyzed to obtain the optimum points to be used as the upper and lower thresholds for segmenting the lung's air, since those optimum points satisfy all the images' histograms. After the convergence points are extracted, they are used as initial guesses for finding the optimized threshold values, which are in turn used to estimate the lung's air volume and its variations. The simulations were performed in MATLAB, a high-performance language for technical computing that integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation.
Figure 5: CT Images of the Lung with Air Volume.
Table I: Result Analysis of Lung with Known Air Volume
Fig. 5 shows one middle slice of the CT images acquired at different volumes, where the air inside the lung is segmented using the obtained threshold values. The lung's air volumes were calculated from the segmentation performed with the thresholds obtained from the optimization algorithm. Table I includes comparative results from the maximum-separation method, in which the threshold values were calculated from the maximum separation between two histogram peaks; the lung's air volumes were then estimated from those threshold values.
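For reference, here is a simple sketch of a two-peak valley threshold in the spirit of the maximum-separation baseline (this is our simplified reading of it, not necessarily the exact procedure used for Table I):

```python
import numpy as np

def valley_threshold(hist):
    """Place the threshold at the deepest valley between the two largest
    histogram peaks (a comparative baseline sketch)."""
    # Local maxima of the histogram (interior bins only).
    peaks = [i for i in range(1, len(hist) - 1)
             if hist[i] >= hist[i - 1] and hist[i] >= hist[i + 1]]
    # Keep the two tallest peaks, in left-to-right order.
    p1, p2 = sorted(sorted(peaks, key=lambda i: hist[i])[-2:])
    # Threshold at the minimum of the valley between them.
    return p1 + int(np.argmin(hist[p1:p2 + 1]))

hist = np.array([0, 5, 9, 4, 1, 0, 2, 7, 10, 3])
print(valley_threshold(hist))  # valley between the peaks at bins 2 and 8
```

A single valley threshold of this kind separates two classes only, which is one reason the combined-histogram method with two optimized thresholds compares favorably for the three-class lung problem.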
Table II: Results of the Estimated Lung's Air Volume and Their Corresponding Tissue Volumes in the 4D-CT Respiratory Sequence
III. CONCLUSIONS
In this research, a technique was used to estimate the lung's air volume and its variations in respiratory CT image sequences using a combined sequence histogram. In the proposed method, a novel concept of image sequence analysis was introduced to obtain accurate lower and upper threshold values for segmentation. The proposed method was then validated using a breath-hold CT image sequence with known lung air volumes. The obtained results show the very good ability of the method to estimate the lung's air volume and its variations throughout a respiratory image sequence. This technique can be used effectively in clinical applications, such as LDR lung brachytherapy, where the lung's air volume and/or its variations in a respiratory sequence are needed. The concept of finding the optimum segmentation threshold values from an image sequence's combined histogram introduced in this paper can also be used in other biomedical applications where important physiological parameters need to be extracted. An example of such an application is estimation of the ventricle's ejection fraction from sequential cardiac images. Here, the proposed technique can be applied to find the optimum lower and upper thresholds for an effective segmentation of the ventricle's blood volume, followed by a
calculation of the ventricle's blood volume variations throughout the end-diastole/end-systole image sequence. This automatic method may improve upon, or even replace, the existing complex semiautomatic algorithms or empirical threshold-based methods currently used for ejection-fraction estimation.
ACKNOWLEDGEMENT
The project was supported by The Institution of Engineers (India), 8, Gokhale Road, Kolkata-
700020.