Built a CNN-based machine learning model to diagnose pneumonia from chest X-rays.
Methodologies & Tools: KDD, Python, VGG19 model, Convolutional Neural Network.
This document is a research project submission for a Master's in Data Analytics. It proposes using a convolutional neural network to classify multiple types of skin lesions from dermoscopic images. Specifically, it will use a modified VGG19 network, replacing the classification layers with global average pooling, dropout, and two fully connected layers with softmax activation. After preprocessing the images, the model will be evaluated on the ISIC 2018 dataset, which contains over 10,000 images across seven lesion classes. The goal is to assist dermatologists in identifying lesion types.
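The replacement classification head described above (global average pooling feeding fully connected layers with a softmax output) can be sketched as a plain forward pass. This is a minimal numpy illustration, not the project's actual network: the hidden-layer width (256) is an assumption, and dropout is omitted since it is inactive at inference time. The 7x7x512 feature-map shape follows VGG19's final convolutional block for a 224x224 input.

```python
import numpy as np

rng = np.random.default_rng(0)

def global_average_pool(feature_maps):
    # feature_maps: (H, W, C) -> (C,), one averaged value per channel
    return feature_maps.mean(axis=(0, 1))

def dense(x, W, b):
    return x @ W + b

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Randomly initialized stand-ins for the trained head weights
features = rng.standard_normal((7, 7, 512))   # VGG19 conv output
W1, b1 = rng.standard_normal((512, 256)) * 0.01, np.zeros(256)
W2, b2 = rng.standard_normal((256, 7)) * 0.01, np.zeros(7)

pooled = global_average_pool(features)          # (512,)
hidden = np.maximum(dense(pooled, W1, b1), 0)   # ReLU hidden layer
probs = softmax(dense(hidden, W2, b2))          # 7 lesion-class probabilities
print(probs.shape, float(probs.sum()))
```

Global average pooling collapses each channel to one number, which sharply reduces the parameter count of the head compared with flattening the 7x7x512 block.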
This document summarizes Mansi Chowkkar's MSc research project on detecting breast cancer from histopathological images using deep learning and transfer learning. The research aims to classify images as malignant or benign with high accuracy and efficiency. It implements CNN and DenseNet-121 models, with the latter using transfer learning with pre-trained ImageNet weights. The research achieved 90.9% test accuracy with CNN and 88.03% accuracy with transfer learning. Related work discusses deep learning applications in healthcare, image processing techniques, prior research on DenseNet, and the use of transfer learning for medical image classification with limited data.
Early detection of breast cancer using mammography images and software engine... – TELKOMNIKA JOURNAL
Breast cancer affects a wide population of women. Researchers have therefore focused on early detection of the disease so that it can be treated efficiently. In this paper, an early breast cancer detection system based on mammography images is proposed. The system adopts a deep-learning technique, a convolutional neural network (CNN) prepared with training and test datasets, to increase detection accuracy. Notably, a software engineering process model was followed in constructing the system, to increase its reliability, flexibility, and extendibility. The user interface is a website intended for countryside general practice (GP) health centers, enabling early detection of the disease where specialist medical staff are lacking. The results show the efficiency of the proposed system, with accuracy above 90%, reducing the workload of medical staff while helping patients. In conclusion, the system can help patients far from hospitals by detecting breast cancer early and referring them to the nearest specialist center.
This document is a 36-page bachelor's thesis written by Duc Minh Luong Nguyen titled "Detect COVID-19 from Chest X-Ray images using Deep Learning". The thesis was submitted to Metropolia University of Applied Sciences in May 2020. It aims to build a deep convolutional neural network to detect COVID-19 using only chest X-ray images. The model achieves an accuracy of 93% at detecting COVID-19 patients versus healthy patients, despite being trained on a small dataset of 115 images for each class.
MAMMOGRAPHY LESION DETECTION USING FASTER R-CNN DETECTOR – cscpconf
The recent availability of large-scale mammography databases enables researchers to evaluate advanced tumor detection by applying deep convolutional networks (DCNs) to mammography images, one of the most commonly used imaging modalities for early breast cancer screening. With recent advances in deep learning, the performance of tumor detection has improved to a great extent, especially with region-based convolutional neural networks (R-CNNs). This study evaluates the performance of a simple Faster R-CNN detector for mammography lesion detection using the MIAS database.
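Detectors of this kind are conventionally scored by intersection-over-union (IoU) between predicted and ground-truth lesion boxes. A minimal sketch, assuming the common (x1, y1, x2, y2) corner format (the study itself does not specify its evaluation code):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # corners of the intersection rectangle
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A predicted lesion box overlapping half of a ground-truth box:
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.333... (overlap 50 / union 150)
```

A detection is usually counted as a true positive when its IoU with a ground-truth lesion exceeds a threshold such as 0.5.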
BREAST CANCER DIAGNOSIS USING MACHINE LEARNING ALGORITHMS – A SURVEY – ijdpsjournal
Breast cancer has become common nowadays. Despite this, not all general hospitals have the facilities to diagnose breast cancer through mammograms, and waiting a long time for a diagnosis may increase the possibility of the cancer spreading. Computerized breast cancer diagnosis has therefore been developed to reduce the time taken to diagnose breast cancer and to reduce the death rate. This paper surveys breast cancer diagnosis using various machine learning algorithms and methods that improve the accuracy of cancer prediction. The survey also gives an overview of the papers that implement breast cancer diagnosis.
This document describes a project report submitted by three students for their Bachelor of Engineering degree. The project involves developing a system for classifying brain images using machine learning techniques. It discusses challenges in detecting brain tumors and the need for automated classification methods. It also provides an overview of techniques for image segmentation, clustering, and feature extraction that will be used in the project.
Comparison of Image Segmentation Algorithms for Brain Tumor Detection – IJMTST Journal
This paper deals with the implementation of simple algorithms for detecting the size and shape of brain tumors in MRI images. Generally, a CT or MRI scan directed into the intracranial cavity produces a complete image of the brain, which a physician examines visually to detect and diagnose a brain tumor. However, such visual inspection makes accurate determination of the tumor's stage and size difficult. To address this, the project uses a computer-aided method for brain tumor segmentation (detection), applying the Fuzzy C-Means, K-Means, Gaussian-kernel, and Pillar K-Means algorithms. The segmentation process includes a new mechanism for clustering the elements of high-resolution images in order to improve precision and reduce computation time. The system applies FCM, Gaussian-kernel, and K-Means clustering to the image, later optimized by the Pillar algorithm. It designates the initial centroid positions by calculating the Euclidean distance between each data point and all previously chosen centroids, then selects the data point with the maximum distance as the next initial centroid. The algorithm thus distributes all initial centroids according to the maximum accumulated distance metric, and it also reduces analysis time. At the end of the process the tumor is extracted from the MRI image and its exact position and shape are determined. The paper evaluates the proposed approach for brain tumor detection by comparing it against K-Means, Fuzzy C-Means, Gaussian-kernel, and manual segmentation. The experimental results demonstrate the effectiveness of the proposed approach in improving segmentation quality in terms of precision and computation time.
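The centroid-initialization step described above can be sketched roughly as follows. This is a simplified reading of the Pillar algorithm (farthest point first, then accumulated-maximum-distance picks), not the paper's exact implementation; the toy points are illustrative:

```python
import numpy as np

def pillar_init(points, k):
    """Pick k initial centroids by accumulated maximum distance.

    Starts from the point farthest from the grand mean, then repeatedly
    picks the point whose accumulated distance to all previously chosen
    centroids is largest.
    """
    centroids = []
    # distance to the grand mean seeds the first pick
    dist = np.linalg.norm(points - points.mean(axis=0), axis=1)
    acc = np.zeros(len(points))
    for _ in range(k):
        idx = int(np.argmax(dist + acc))
        centroids.append(points[idx])
        # accumulate distance to the newly chosen centroid
        acc += np.linalg.norm(points - points[idx], axis=1)
        dist = np.zeros(len(points))  # only accumulated distance counts afterwards
    return np.array(centroids)

# three well-separated groups of 2-D points (e.g. pixel coordinates)
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0], [0.0, 9.0]])
print(pillar_init(pts, 3))
```

Spreading the initial centroids this way avoids the poor local minima that plain K-Means can fall into when centroids start close together, which is the precision/speed benefit the paper attributes to the Pillar step.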
The document describes a study that used convolutional neural networks (CNNs) to detect brain tumors in MRI images. Three CNN models were developed and their performance was evaluated using metrics like accuracy, precision, recall, F1-score, and confusion matrices. Model 3 achieved the highest test accuracy of 94% for tumor detection. In total, over 2000 MRI images were used in the study after data augmentation. The CNN models incorporated convolution, pooling, and fully connected layers to analyze image features and classify tumors. This research demonstrates that CNNs can accurately detect brain tumors in medical images.
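The evaluation metrics listed above all derive from the binary confusion matrix. A minimal sketch with illustrative counts (the study's actual counts are not given here):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from binary confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

# e.g. 90 true positives, 10 false positives, 5 false negatives, 95 true negatives
print(classification_metrics(90, 10, 5, 95))
```

Reporting all four together matters for medical imaging: with imbalanced classes, a high accuracy can hide a poor recall on the tumor class.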
Brain Image Fusion using DWT and Laplacian Pyramid Approach and Tumor Detecti... – INFOGAIN PUBLICATION
Image fusion is the process of combining important information from two or more images into a single image; the resulting image is more informative than any of the inputs. The idea of combining multiple image modalities to furnish a single, enhanced image is well established, and several fusion methods have been proposed in the literature. This paper is based on image fusion using the Laplacian pyramid and the Discrete Wavelet Transform (DWT). The system uses a simple and effective algorithm for multi-focus image fusion, applying fusion rules to the transform coefficients, and the fused image is obtained via the inverse DWT. After the fused image is obtained, a watershed segmentation algorithm is applied to detect the tumor region in it.
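The pyramid-based fusion idea can be illustrated with a one-level sketch. A crude 2x2 block average stands in for the proper Gaussian/wavelet filters, and the max-absolute-detail fusion rule used here is one common choice, not necessarily the paper's:

```python
import numpy as np

def down(img):
    # 2x2 block average: a crude one-step Gaussian-pyramid reduction
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def up(img):
    # nearest-neighbour upsample back to full size
    return img.repeat(2, axis=0).repeat(2, axis=1)

def fuse(a, b):
    """One-level Laplacian-pyramid-style fusion with a max-absolute-detail rule."""
    la, lb = a - up(down(a)), b - up(down(b))            # detail (Laplacian) layers
    detail = np.where(np.abs(la) >= np.abs(lb), la, lb)  # keep the stronger detail
    base = (down(a) + down(b)) / 2                       # average the coarse layers
    return up(base) + detail

a = np.random.default_rng(1).random((8, 8))
fused = fuse(a, a)            # fusing an image with itself must return it unchanged
print(np.allclose(fused, a))  # True
```

The self-fusion check is a cheap sanity test: any fusion rule that does not reproduce the input when both inputs are identical is discarding information.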
This document describes a study that used deep learning models to diagnose Alzheimer's disease using MRI scans. Specifically, it used convolutional neural networks (CNNs) trained on MRI data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. The study achieved an F1-score of 89% for classifying Alzheimer's vs normal cognition. It also used a CycleGAN for data augmentation, which generated additional synthetic MRI images and improved the CNN classification to a 95% F1-score. The document provides background on Alzheimer's diagnosis, CNNs, GANs including CycleGAN, related work applying machine learning to Alzheimer's diagnosis, and the methodology used in this study for data preprocessing, model training, and evaluating the impact of GAN-based data augmentation.
This document reviews various automated techniques that have been developed for brain tumor detection. It summarizes research done by several researchers on methods like sequential floating forward selection, color coding schemes using brain atlases, neural networks, region growing segmentation combined with area calculation, symmetry analysis of tumor areas in MRI images, and combining clustering and classification algorithms. The paper concludes that image segmentation plays an important role in medical applications like tumor diagnosis and that more robust techniques are needed for high accuracy and reliability.
This document proposes using a DenseNet-II neural network model to classify mammogram images as benign or malignant. It first preprocesses mammogram images through normalization and data augmentation. It then improves the original DenseNet model by replacing the first convolutional layer with an Inception structure, creating a new DenseNet-II model. This model, along with other common models, is tested on mammogram data, and the DenseNet-II model achieves the highest average accuracy of 94.55% for benign-malignant classification.
The document describes a study that aims to detect brain tumors and edema in MRI images using MATLAB. It discusses how MRI is commonly used to identify brain anomalies. The proposed methodology uses basic image processing techniques in MATLAB, including preprocessing, enhancement, segmentation, and morphological operations to detect and segment tumors and edema. The final output highlights the boundaries between tumors and edema superimposed on the original MRI image to aid physicians in diagnosis and surgical planning.
A New Algorithm for Fully Automatic Brain Tumor Segmentation with 3-D Convolu... – Christopher Mehdi Elamri
This document describes a new algorithm for fully automatic brain tumor segmentation using 3D convolutional neural networks. The algorithm uses 3D convolutional filters to preserve spatial information, and a high-bias CNN architecture to increase effective data size and reduce model variance. On a dataset of 274 brain MR images, the algorithm achieved a median Dice score of 89% for whole tumor segmentation, significantly outperforming past methods. This demonstrates the effectiveness of generalizing low-bias high-variance methods like CNNs to learn from medium-sized datasets.
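The Dice score used to evaluate the segmentations is simple to compute from two binary masks; a minimal sketch with a toy prediction and ground truth:

```python
import numpy as np

def dice(pred, truth):
    """Dice score between two binary segmentation masks (any shape)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2 * inter / total if total else 1.0

pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(pred, truth))  # 2*2 / (3+3) = 0.666...
```

Dice weights the overlap against the sizes of both masks, so it stays informative even when the tumor occupies a tiny fraction of the volume, which is why segmentation papers prefer it over plain pixel accuracy.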
A novel framework for efficient identification of brain cancer region from br... – IJECEIAES
Diagnosing brain cancer with existing imaging techniques, e.g., Magnetic Resonance Imaging (MRI), involves challenges of varying degree. At present, few significant research models focus on introducing novel solutions to these detection problems, and existing techniques are found to be less accurate than other detection schemes. This paper therefore presents a framework of simple, computationally cost-effective techniques that raise the accuracy level considerably. The framework takes the input image, subjects it to a non-conventional segmentation mechanism, and then optimizes performance using a directed acyclic graph, a Bayesian network, and a neural network. The study's outcomes show significantly higher detection accuracy than frequently used existing approaches.
Survey of various methods used for integrating machine learning into brain tu... – Drjabez
This document surveys various machine learning methods used for brain tumor detection and classification from MRI images. It discusses preprocessing techniques like median filtering, Gaussian high-pass filtering, and morphological dilation to enhance images. Segmentation techniques covered include thresholding, edge detection, region-based methods, watershed, the Berkeley wavelet transform, K-means clustering, and neural networks. Feature extraction computes statistics such as correlation and skewness. Classification algorithms discussed are the multi-layer perceptron, naive Bayes, and support vector machines. The document provides an overview of the key steps and methods for machine learning-based brain tumor detection and segmentation from MRI images.
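The feature-extraction statistics mentioned above (correlation, skewness) can be sketched directly; the toy pixel values are illustrative, not from the survey:

```python
import numpy as np

def skewness(x):
    """Sample skewness of pixel intensities: E[(x - mu)^3] / sigma^3."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()
    return ((x - mu) ** 3).mean() / sigma ** 3

def correlation(x, y):
    """Pearson correlation between two flattened image regions."""
    x, y = np.asarray(x, float).ravel(), np.asarray(y, float).ravel()
    return float(np.corrcoef(x, y)[0, 1])

pixels = np.array([1, 1, 1, 1, 10])             # a bright outlier skews right
print(skewness(pixels) > 0)                     # True
print(correlation([1, 2, 3, 4], [2, 4, 6, 8]))  # ~1.0
```

Such low-dimensional statistics summarize an image region into a feature vector that classical classifiers like SVMs and naive Bayes can consume.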
3D Segmentation of Brain Tumor Imaging – IJAEMSJORNAL
A brain tumor is a collection of abnormal cells that grow in or around the brain. Brain tumors affect humans badly: they can disrupt proper brain function and be life-threatening. In this project, we propose a system to detect, segment, and classify tumors present in the brain. If a brain tumor is identified at the very beginning, proper treatment can be given and the tumor may be cured.
IRJET- Brain Tumor Detection using Image Processing and MATLAB Application – IRJET Journal
This document presents a proposed method for detecting brain tumors in MRI scans using image processing techniques in MATLAB. It begins with an introduction to brain tumors, MRI, and causes. The proposed method uses anisotropic filtering to reduce noise, SVM classification to segment tumor regions, and morphological operations like dilation and erosion to extract the tumor boundaries. The MATLAB application provides a graphical user interface with tabs for viewing input images, filtered outputs, and detected tumors. Testing on sample images achieved tumor detection times ranging from 152 to 733 milliseconds depending on image properties. Future work could involve extending the method to 3D images, implementing machine learning for dynamic thresholding, and detecting smaller malignant tumors.
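The morphological boundary-extraction step (dilation minus erosion of the segmented mask) can be sketched in a few lines. This is a generic 3x3 version in numpy, not the paper's MATLAB code:

```python
import numpy as np

def shift(mask, dy, dx):
    """Shift a 2-D boolean mask by (dy, dx), padding with False."""
    out = np.zeros_like(mask)
    h, w = mask.shape
    out[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
        mask[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    return out

def dilate(mask):
    # a pixel is set if ANY 3x3 neighbour is set
    return np.any([shift(mask, dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)], axis=0)

def erode(mask):
    # a pixel is set only if ALL 3x3 neighbours are set
    return np.all([shift(mask, dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)], axis=0)

# a 5x5 mask with a 3x3 "tumor" region
mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True
boundary = dilate(mask) & ~erode(mask)   # ring around the segmented region
print(boundary.astype(int))
```

The dilation-minus-erosion ring is what gets superimposed on the original scan to outline the detected tumor for the physician.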
Techniques of Brain Cancer Detection from MRI using Machine Learning – IRJET Journal
The document discusses techniques for detecting brain cancer from MRI scans using machine learning. It first provides background on brain tumors and MRI. It then outlines the cancer detection process, including pre-processing the MRI data, segmenting the images, extracting features, and classifying tumors using techniques like CNNs, SVMs, MLP, and Naive Bayes. The document reviews related work applying these techniques and compares their results, finding accuracy can be improved with larger, higher resolution datasets.
Despite the availability of radiology devices in some health care centers, thorax diseases remain among the most common health problems, especially in rural areas. By exploiting the power of the Internet of Things and platforms that analyze large volumes of medical data, a patient's health could be assessed earlier. In this paper, the proposed model is based on a pre-trained ResNet-50 for diagnosing thorax diseases. Chest X-ray images are cropped to extract the rib cage from the radiographs. ResNet-50 is re-trained on the ChestX-ray14 dataset; a chest radiograph is fed into the model to determine whether the person is healthy. For an unhealthy patient, the model classifies the disease into one of fourteen chest diseases. The results show that ResNet-50 achieves impressive performance in classifying thorax diseases.
Glioblastomas brain tumour segmentation based on convolutional neural network... – IJECEIAES
Brain tumour segmentation can improve diagnostic efficiency, raise the prediction rate, and aid treatment planning, helping doctors and experts in their work. While many types of brain tumour can be classified easily, the glioma tumour is challenging to segment because of diffusion between the tumour and the surrounding edema. Another important challenge with this type of brain tumour is that it may grow anywhere in the brain, with varying shape and size. Brain cancer is one of the most widespread diseases in the world, which encourages researchers to seek a high-throughput system for tumour detection and classification; several approaches have been proposed for designing automatic detection and classification systems. This paper presents an integrated framework that segments the glioma brain tumour automatically, using pixel clustering to separate the foreground and background of the MRI images, and classifies its type with a deep learning mechanism, the convolutional neural network. A novel segmentation and classification system is proposed to detect tumour cells and classify whether a brain image is healthy. After collecting healthy and non-healthy brain images, satisfactory results were obtained and registered using computer vision approaches. This approach can be used as part of a larger system for tumour diagnosis and management.
COVID-19 detection from scarce chest X-Ray image data using few-shot deep lea... – Shruti Jadon
In the current COVID-19 pandemic situation, there is an urgent need to screen infected patients quickly and accurately. Using deep learning models trained on chest X-ray images can become an efficient method for screening COVID-19 patients in these situations. Deep learning approaches are already widely used in the medical community. However, they require a large amount of data to be accurate.
Details about brain tumors
A literature survey of reference papers related to brain tumor detection using various techniques
Our proposed novel methodology for brain tumor detection
Brain Tumor Detection and Classification using Adaptive Boosting – IRJET Journal
1. The document describes a system for detecting and classifying brain tumors using MRI images.
2. The system uses techniques like preprocessing, segmentation using k-means clustering, feature extraction with discrete wavelet transform and principal component analysis for dimension reduction, and classification with decision trees and adaptive boosting.
3. Adaptive boosting combines multiple weak learners or decision trees into a strong classifier and focuses on misclassified examples to improve accuracy, achieving 100% accuracy for tumor detection and classification in the system.
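The boosting scheme in step 3 can be sketched with one-feature threshold stumps as the weak learners. This is a generic AdaBoost sketch, not the paper's implementation; the toy feature vectors and labels are illustrative:

```python
import numpy as np

def train_adaboost(X, y, rounds=10):
    """AdaBoost with one-feature threshold stumps as weak learners.

    X: (n, d) features; y: labels in {-1, +1}.
    Returns a list of (feature, threshold, polarity, alpha) stumps.
    """
    n, d = X.shape
    w = np.full(n, 1 / n)                      # per-example weights
    ensemble = []
    for _ in range(rounds):
        best = None
        for f in range(d):                     # exhaustive stump search
            for t in np.unique(X[:, f]):
                for polarity in (1, -1):
                    pred = np.where(polarity * (X[:, f] - t) >= 0, 1, -1)
                    err = w[pred != y].sum()   # weighted error
                    if best is None or err < best[0]:
                        best = (err, f, t, polarity, pred)
        err, f, t, polarity, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)  # clamp to avoid division by zero
        alpha = 0.5 * np.log((1 - err) / err)  # stump weight
        w = w * np.exp(-alpha * y * pred)      # up-weight misclassified examples
        w /= w.sum()
        ensemble.append((f, t, polarity, alpha))
    return ensemble

def predict(ensemble, X):
    score = sum(alpha * np.where(p * (X[:, f] - t) >= 0, 1, -1)
                for f, t, p, alpha in ensemble)
    return np.where(score >= 0, 1, -1)

# toy feature vectors with labels: tumor (+1) vs healthy (-1)
X = np.array([[0.1], [0.2], [0.8], [0.9], [0.4], [0.6]])
y = np.array([-1, -1, 1, 1, -1, 1])
model = train_adaboost(X, y, rounds=5)
print((predict(model, X) == y).mean())  # 1.0 (the toy set is separable by one stump)
```

Each round re-weights the training examples so the next weak learner concentrates on what the ensemble still gets wrong, which is the mechanism the paper credits for its high accuracy.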
The document describes a student project that aims to classify and detect lung disorders from chest x-rays using generative adversarial networks (GANs). It provides an overview of the problem statement, literature review on existing methodologies, and the proposed system architecture. The problem is that early diagnosis of lung diseases is important but current methods are time-consuming. The project aims to develop a hybrid GAN model to overcome issues in existing models and accurately detect lung disorders from x-rays. It will classify diseases like pneumonia and lung cancer.
Pneumonia Detection Using Convolutional Neural Network Writers – IRJET Journal
This document describes a study that developed a convolutional neural network (CNN) model, trained from scratch, to detect and classify pneumonia from chest X-ray images. The model was able to extract relevant features from chest X-ray images and determine whether a patient has pneumonia. The CNN model achieved high accuracy for pneumonia classification compared to other advanced methods that rely on transfer learning or manual feature engineering. The study used a dataset of 5,500 chest X-ray images and applied image processing techniques like background removal and cropping before extracting features with the CNN and classifying the images.
This document describes a project report submitted by three students for their Bachelor of Engineering degree. The project involves developing a system for classifying brain images using machine learning techniques. It discusses challenges in detecting brain tumors and the need for automated classification methods. It also provides an overview of techniques for image segmentation, clustering, and feature extraction that will be used in the project.
Comparison of Image Segmentation Algorithms for Brain Tumor DetectionIJMTST Journal
This paper deals with the implementation of Simple Algorithms for detection of size and shape of tumor in brain using MRI images. Generally, CT scan or MRI that is directed into intracranial cavity produces a complete image of brain. This image is visually examined by the physician for detection & diagnosis of brain tumor. However this method of detection resists the accurate determination of stage & size of tumor. To avoid that, this project uses computer aided method for segmentation (detection) of brain tumor by applying Fuzzy C-Means, K-Means, Gaussian Kernel and Pillar K-means algorithms. This segmentation process includes a new mechanism for clustering the elements of high-resolution images in order to improve precision and reduce computation time. The system applies FCM, Gaussian kernel and K-means clustering to the image later optimized by Pillar Algorithm. It designates the initial centroids’ positions by calculating the Euclidian distance metric between each data point and all previous centroids. Then it selects data points which have the maximum distance as new initial centroids. This algorithm distributes all initial centroids according to the maximum accumulated distance metric. In addition, it also reduces the time for analysis. At the end of the process the tumor is extracted from the MRI image and its exact position and the shape is also determined. This paper evaluates the proposed approach for Brain tumor detection by comparing with K-means, Fuzzy C means, Gaussian Kernel and manually segmented algorithms. The experimental results clarify the effectiveness of proposed approach to improve the segmentation quality in aspects of precision and computational time.
The document describes a study that used convolutional neural networks (CNNs) to detect brain tumors in MRI images. Three CNN models were developed and their performance was evaluated using metrics like accuracy, precision, recall, F1-score, and confusion matrices. Model 3 achieved the highest test accuracy of 94% for tumor detection. In total, over 2000 MRI images were used in the study after data augmentation. The CNN models incorporated convolution, pooling, and fully connected layers to analyze image features and classify tumors. This research demonstrates that CNNs can accurately detect brain tumors in medical images.
Brain Image Fusion using DWT and Laplacian Pyramid Approach and Tumor Detecti...INFOGAIN PUBLICATION
Image fusion is the process of combining important information from two or more images into a single image. The resulting image will be more enhanced than any of the input pictures. The idea of combining multiple image modalities to furnish a single, more enhanced image is well established, special fusion methods have been proposed in literature. This paper is based on image fusion using laplacian pyramid and Discreet Wavelet Transform (DWT) methods. This system uses an easy and effective algorithm for multi-focus image fusion which uses fusion rules to create fused image. Subsequently, the fused image is obtained by applying inverse discreet wavelet transform. After fused image is obtained, watershed segmentation algorithm is applied to detect the tumor part in fused image.
This document describes a study that used deep learning models to diagnose Alzheimer's disease using MRI scans. Specifically, it used convolutional neural networks (CNNs) trained on MRI data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. The study achieved an F1-score of 89% for classifying Alzheimer's vs normal cognition. It also used a CycleGAN for data augmentation, which generated additional synthetic MRI images and improved the CNN classification accuracy to 95% F1-score. The document provides background on Alzheimer's diagnosis, CNNs, GANs including CycleGAN, related work applying machine learning to Alzheimer's diagnosis, and the methodology used in this study for data preprocessing, model training, and evaluating the impact of GAN
This document reviews various automated techniques that have been developed for brain tumor detection. It summarizes research done by several researchers on methods like sequential floating forward selection, color coding schemes using brain atlases, neural networks, region growing segmentation combined with area calculation, symmetry analysis of tumor areas in MRI images, and combining clustering and classification algorithms. The paper concludes that image segmentation plays an important role in medical applications like tumor diagnosis and that more robust techniques are needed for high accuracy and reliability.
This document proposes using a DenseNet-II neural network model to classify mammogram images as benign or malignant. It first preprocesses mammogram images through normalization and data augmentation. It then improves the original DenseNet model by replacing the first convolutional layer with an Inception structure, creating a new DenseNet-II model. This model, along with other common models, are tested on mammogram data and the DenseNet-II model achieves the highest average accuracy of 94.55% for benign-malignant classification.
The document describes a study that aims to detect brain tumors and edema in MRI images using MATLAB. It discusses how MRI is commonly used to identify brain anomalies. The proposed methodology uses basic image processing techniques in MATLAB, including preprocessing, enhancement, segmentation, and morphological operations to detect and segment tumors and edema. The final output highlights the boundaries between tumors and edema superimposed on the original MRI image to aid physicians in diagnosis and surgical planning.
A New Algorithm for Fully Automatic Brain Tumor Segmentation with 3-D Convolu...Christopher Mehdi Elamri
This document describes a new algorithm for fully automatic brain tumor segmentation using 3D convolutional neural networks. The algorithm uses 3D convolutional filters to preserve spatial information, and a high-bias CNN architecture to increase effective data size and reduce model variance. On a dataset of 274 brain MR images, the algorithm achieved a median Dice score of 89% for whole tumor segmentation, significantly outperforming past methods. This demonstrates the effectiveness of generalizing low-bias high-variance methods like CNNs to learn from medium-sized datasets.
A novel framework for efficient identification of brain cancer region from br...IJECEIAES
Diagnosing brain cancer with existing imaging techniques such as magnetic resonance imaging (MRI) presents challenges of varying degrees. At present, very few significant research models focus on novel solutions to these detection problems, and existing techniques tend to be less accurate than other detection schemes. This paper therefore presents a framework of simple, computationally cost-effective techniques that raise detection accuracy considerably. The framework takes an input image, applies a non-conventional segmentation mechanism, and then optimizes performance using a directed acyclic graph, a Bayesian network, and a neural network. The study outcome shows a significantly higher degree of detection accuracy compared with commonly used existing approaches.
Survey of various methods used for integrating machine learning into brain tu... (Drjabez)
This document surveys machine learning methods for brain tumor detection and classification from MRI images. It discusses preprocessing techniques such as median filtering, Gaussian high-pass filtering, and morphological dilation to enhance images. Segmentation techniques covered include thresholding, edge detection, region-based methods, watershed, the Berkeley wavelet transform, K-means clustering, and neural networks. Feature extraction computes statistics such as correlation and skewness. Classification algorithms discussed are the multi-layer perceptron, naive Bayes, and support vector machines. The document provides an overview of the key steps and methods for machine-learning-based brain tumor detection and segmentation from MRI images.
3D Segmentation of Brain Tumor Imaging (IJAEMSJORNAL)
A brain tumor is a collection of anomalous cells that grow in or around the brain. Brain tumors affect humans badly: they can disrupt proper brain function and be life-threatening. In this project, we propose a system to detect, segment, and classify tumors present in the brain. If a brain tumor is identified at an early stage, proper treatment can begin and the tumor may be cured.
IRJET- Brain Tumor Detection using Image Processing and MATLAB Application (IRJET Journal)
This document presents a proposed method for detecting brain tumors in MRI scans using image processing techniques in MATLAB. It begins with an introduction to brain tumors, MRI, and their causes. The proposed method uses anisotropic filtering to reduce noise, SVM classification to segment tumor regions, and morphological operations such as dilation and erosion to extract tumor boundaries. The MATLAB application provides a graphical user interface with tabs for viewing input images, filtered outputs, and detected tumors. Testing on sample images achieved tumor detection times ranging from 152 to 733 milliseconds, depending on image properties. Future work could involve extending the method to 3D images, implementing machine learning for dynamic thresholding, and detecting smaller malignant tumors.
Techniques of Brain Cancer Detection from MRI using Machine Learning (IRJET Journal)
The document discusses techniques for detecting brain cancer from MRI scans using machine learning. It first provides background on brain tumors and MRI. It then outlines the cancer detection process, including pre-processing the MRI data, segmenting the images, extracting features, and classifying tumors using techniques like CNNs, SVMs, MLP, and Naive Bayes. The document reviews related work applying these techniques and compares their results, finding accuracy can be improved with larger, higher resolution datasets.
Despite the availability of radiology devices in some health care centers, thorax diseases remain among the most common health problems, especially in rural areas. By exploiting the power of the Internet of Things and platforms that analyze large volumes of medical data, a patient's health could be assessed earlier. In this paper, the proposed model is based on a pre-trained ResNet-50 for diagnosing thorax diseases. Chest X-ray images are cropped to extract the rib-cage region from the radiographs. ResNet-50 is then re-trained on the ChestX-ray14 dataset: a chest radiograph is fed into the model to determine whether the person is healthy, and for an unhealthy patient the model classifies the disease into one of fourteen chest diseases. The results show that ResNet-50 achieves impressive performance in classifying thorax diseases.
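The rib-cage cropping step described above can be sketched as a simple fixed-fraction crop. The paper does not specify its exact crop box, so the fractions and image size below are illustrative assumptions:

```python
import numpy as np

# Sketch of the preprocessing step: crop a chest radiograph down to the
# rib-cage region before feeding it to the classifier. The crop fractions
# here are illustrative; the paper's exact method may differ.

def crop_rib_cage(img, top=0.1, bottom=0.9, left=0.15, right=0.85):
    h, w = img.shape[:2]
    return img[int(h * top):int(h * bottom), int(w * left):int(w * right)]

xray = np.zeros((1024, 1024))     # placeholder radiograph
cropped = crop_rib_cage(xray)     # trimmed to the central chest region
```

The cropped image would then be resized to the network's input resolution before re-training.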
Glioblastomas brain tumour segmentation based on convolutional neural network... (IJECEIAES)
Brain tumour segmentation can improve diagnostic efficiency, raise the prediction rate, and support treatment planning, helping doctors and experts in their work. While many types of brain tumour can be classified easily, glioma is challenging to segment because of the diffusion between the tumour and the surrounding edema. Another important challenge is that this type of tumour may grow anywhere in the brain, with varying shape and size. Brain cancer is one of the most widespread diseases in the world, which encourages researchers to find high-throughput systems for tumour detection and classification, and several automatic detection and classification systems have been proposed. This paper presents an integrated framework that automatically segments glioma brain tumours using pixel clustering of the MRI image foreground and background, and classifies the tumour type with a deep learning mechanism, the convolutional neural network. In this work, a novel segmentation and classification system detects tumour cells and classifies whether a brain image is healthy. After collecting data for healthy and non-healthy brain images, satisfactory results were obtained and recorded using computer vision approaches. This approach could form part of a larger system for brain tumour diagnosis and management.
COVID-19 detection from scarce chest X-Ray image data using few-shot deep lea... (Shruti Jadon)
In the current COVID-19 pandemic situation, there is an urgent need to screen infected patients quickly and accurately. Deep learning models trained on chest X-ray images can become an efficient method for screening COVID-19 patients in these situations, and deep learning approaches are already widely used in the medical community. However, they require a large amount of data to be accurate; this work therefore explores few-shot deep learning to detect COVID-19 from scarce chest X-ray data.
This project report covers details about brain tumors, a literature survey of reference papers on brain tumor detection using various techniques, and the authors' proposed novel methodology for brain tumor detection.
Brain Tumor Detection and Classification using Adaptive Boosting (IRJET Journal)
1. The document describes a system for detecting and classifying brain tumors using MRI images.
2. The system uses techniques like preprocessing, segmentation using k-means clustering, feature extraction with discrete wavelet transform and principal component analysis for dimension reduction, and classification with decision trees and adaptive boosting.
3. Adaptive boosting combines multiple weak learners or decision trees into a strong classifier and focuses on misclassified examples to improve accuracy, achieving 100% accuracy for tumor detection and classification in the system.
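The boosting scheme described above, combining weak decision-tree learners and focusing on misclassified examples, can be sketched in plain NumPy with decision stumps on synthetic data. The features, labels, and round count below are illustrative, not the paper's MRI pipeline:

```python
import numpy as np

def fit_stump(X, y, w):
    """Find the lowest weighted-error threshold stump (weak learner)."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                err = np.sum(w[pred != y])
                if best is None or err < best[0]:
                    best = (err, j, thr, pol)
    return best

def adaboost(X, y, rounds=20):
    n = len(y)
    w = np.full(n, 1.0 / n)                     # uniform sample weights
    ensemble = []
    for _ in range(rounds):
        err, j, thr, pol = fit_stump(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # weight of this weak learner
        pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)          # up-weight misclassified examples
        w /= w.sum()
        ensemble.append((alpha, j, thr, pol))
    return ensemble

def predict(ensemble, X):
    score = np.zeros(len(X))
    for alpha, j, thr, pol in ensemble:
        score += alpha * np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
    return np.where(score >= 0, 1, -1)          # weighted majority vote

# Synthetic two-feature data: class +1 when the feature sum is positive.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
model = adaboost(X, y, rounds=20)
acc = np.mean(predict(model, X) == y)
```

Each round re-weights the training samples so the next stump concentrates on the examples the current ensemble gets wrong, which is how the combined classifier becomes strong.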
The document describes a student project that aims to classify and detect lung disorders from chest x-rays using generative adversarial networks (GANs). It provides an overview of the problem statement, literature review on existing methodologies, and the proposed system architecture. The problem is that early diagnosis of lung diseases is important but current methods are time-consuming. The project aims to develop a hybrid GAN model to overcome issues in existing models and accurately detect lung disorders from x-rays. It will classify diseases like pneumonia and lung cancer.
Pneumonia Detection Using Convolutional Neural Network Writers (IRJET Journal)
This document describes a study that developed a convolutional neural network (CNN) model to detect and classify pneumonia from chest X-ray images, trained from scratch rather than relying on pre-trained networks. The model extracts relevant features from chest X-ray images and determines whether a patient has pneumonia. It achieved high classification accuracy compared with other advanced methods that rely on transfer learning or manual feature engineering. The study used a dataset of 5,500 chest X-ray images and applied image processing techniques such as background removal and cropping before extracting features with the CNN and classifying the images.
The document describes using a convolutional neural network with the VGG16 architecture to classify lung cancer CT scan images into 4 classes: large cell carcinoma, squamous cell carcinoma, adenocarcinoma, and normal lungs. The model is trained on a dataset of 1000 CT scan images from Kaggle and achieves an AUC of 0.94, indicating high accuracy in identifying different types of lung cancer. This CNN model with pre-trained VGG16 weights provides an effective approach for classifying lung cancer images and could help enable early diagnosis and treatment of lung cancer.
Class imbalance is a pervasive issue in disease classification from medical images, and it is necessary to balance the class distribution while training a model. However, for rare medical diseases, images from affected patients are much harder to come by than images from non-affected patients, resulting in unwanted class imbalance. Various approaches to class imbalance have been explored so far, each with its share of drawbacks. In this research, we propose an outlier-detection-based image classification technique that can handle even the most extreme cases of class imbalance. We utilize a dataset of malaria-parasitized and uninfected cells. An autoencoder model titled AnoMalNet is first trained with only the uninfected cell images and is then used to classify both affected and non-affected cell images by thresholding a loss value. We achieve an accuracy, precision, recall, and F1 score of 98.49%, 97.07%, 100%, and 98.52% respectively, performing better than large deep learning models and other published works. As the proposed approach can provide competitive results without needing disease-positive samples during training, it should prove useful for binary disease classification on imbalanced datasets.
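The core idea, training only on the majority class and classifying by thresholding a reconstruction loss, can be sketched with a linear autoencoder (truncated PCA) in NumPy. AnoMalNet itself is a deep autoencoder; the data, dimensions, and threshold percentile below are synthetic assumptions used only to illustrate the mechanism:

```python
import numpy as np

# "Normal" (uninfected) samples lie near a low-dimensional subspace;
# anomalous (parasitized) samples do not, so their reconstruction error is large.
rng = np.random.default_rng(1)
basis = rng.normal(size=(2, 10))  # 2-D latent structure in 10-D space
train_normal = rng.normal(size=(300, 2)) @ basis + 0.05 * rng.normal(size=(300, 10))

# Fit the linear "autoencoder": top-2 principal components of normal data only.
mean = train_normal.mean(axis=0)
_, _, vt = np.linalg.svd(train_normal - mean, full_matrices=False)
components = vt[:2]                       # shared encoder/decoder weights

def reconstruction_error(x):
    z = (x - mean) @ components.T         # encode
    x_hat = z @ components + mean         # decode
    return np.mean((x - x_hat) ** 2, axis=1)

# Threshold taken from the normal training distribution (99th percentile here).
threshold = np.percentile(reconstruction_error(train_normal), 99)

# Evaluate on held-out normal samples and off-subspace anomalies.
test_normal = rng.normal(size=(100, 2)) @ basis + 0.05 * rng.normal(size=(100, 10))
test_anomal = 2.0 * rng.normal(size=(100, 10))
flag_normal = reconstruction_error(test_normal) > threshold  # mostly False
flag_anomal = reconstruction_error(test_anomal) > threshold  # mostly True
```

A deep autoencoder replaces the linear projection with learned nonlinear encode/decode networks, but the thresholding step is the same.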
Performance Comparison Analysis for Medical Images Using Deep Learning Approa... (IRJET Journal)
This document discusses and compares several deep learning approaches for analyzing medical images, specifically chest X-rays. It opens with an abstract outlining a comparison of existing deep-learning technologies for chest X-ray analysis, then reviews literature on models such as convolutional neural networks (CNN), fully convolutional networks (FCN), lookup-based convolutional neural networks (LCNN), and deep cascades of convolutional neural networks (DCCNN) applied to tasks like image segmentation, classification, and quality assessment. Finally, it compares the performance of these models on different medical image datasets using accuracy metrics.
Pneumonia Detection Using Deep Learning and Transfer Learning (IRJET Journal)
This document presents research on using deep learning and transfer learning techniques to detect pneumonia from chest x-ray images. The researchers trained several models, including CNNs, DenseNet, VGG-16, ResNet, and InceptionNet on a dataset of chest x-rays labeled as normal or pneumonia. The models achieved accuracy in detecting pneumonia of 89.6-97%, depending on the specific model. The researchers found that deep learning approaches like these have significant potential to improve the accuracy and efficiency of pneumonia diagnosis compared to traditional methods. Overall, the study demonstrated promising results for using machine and deep learning to classify medical images and detect health conditions like pneumonia.
APPLICATION OF CNN MODEL ON MEDICAL IMAGE (IRJET Journal)
The document discusses using convolutional neural network (CNN) models to detect diseases from medical images such as chest X-rays. It describes how CNN models can be trained on large labeled datasets of chest X-rays to learn patterns and features that indicate diseases. The document then evaluates several CNN architectures - including VGG-16, ResNet, DenseNet, and InceptionNet - for classifying chest X-rays as normal or infected. It finds these models achieve high accuracy, with metrics like accuracy over 89% and AUC over 0.94. In conclusion, deep learning models show promising results for automated disease detection from medical images.
Cervical Cancer Detection: An Enhanced Approach through Transfer Learning and... (IRJET Journal)
This document presents research on using the DenseNet169 deep learning model for cervical cancer detection. The researcher trained and tested the model on a large cervical cell image dataset from Kaggle. Through data preprocessing like augmentation and normalization, and transfer learning by fine-tuning a DenseNet pre-trained on ImageNet, the model achieved 95.27% accuracy in classifying five cervical cell types. Evaluation of the model showed high average precision, recall, and F1-score, demonstrating its ability to correctly classify different cervical cell images. The research highlights the potential of deep learning models for automating cervical cancer screening and improving early detection.
Enhancing Pneumonia Detection: A Comparative Study of CNN, DenseNet201, and V... (IRJET Journal)
This document presents a comparative study evaluating three deep learning models - a custom CNN, DenseNet201, and VGG16 - for classifying chest X-ray images to detect pneumonia. A chest X-ray dataset was used to train and evaluate the models. The custom CNN achieved 80% accuracy, comparable to human radiologists, while VGG16 consistently outperformed the other models; all three showed potential for improving pneumonia diagnosis through rapid, accurate analysis of medical images with deep learning techniques.
Pneumonia Classification using Transfer Learning (Tushar Dalvi)
Pneumonia can be life-threatening for people with weak immune systems: the alveoli fill with fluid, which makes it hard for oxygen to pass into the bloodstream. Detecting pneumonia from a chest X-ray is not only expensive but also time-consuming. This research introduces a machine learning technique to classify pneumonia from chest X-ray images. Most medical datasets suffer from class imbalance, so data augmentation (horizontal flip, width shift, and height shift) was used to reduce the imbalance. VGG19 was used as the base architecture with ImageNet weights for the transfer learning approach; removing the original classification layers and adding more dense layers opened new possibilities. On the test data, the proposed model achieved 98% recall and 82% precision. Compared with state-of-the-art techniques, the proposed method achieves high recall at the cost of some precision.
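A minimal sketch of this transfer-learning setup, assuming TensorFlow/Keras: VGG19 as the frozen base, the original classifier removed, new dense layers added, and the augmentations named above. The layer sizes, dropout rate, and augmentation ranges are illustrative assumptions, not the paper's exact configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(224, 224, 3)):
    # weights="imagenet" would load the pre-trained filters; None avoids a download here.
    base = tf.keras.applications.VGG19(
        include_top=False, weights=None, input_shape=input_shape)
    base.trainable = False                     # freeze convolutional features
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),  # newly added dense layers
        layers.Dropout(0.5),
        layers.Dense(2, activation="softmax"), # normal vs. pneumonia
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.Recall()])
    return model

# Augmentation to soften class imbalance, matching the abstract's transforms.
augment = tf.keras.preprocessing.image.ImageDataGenerator(
    horizontal_flip=True, width_shift_range=0.1, height_shift_range=0.1)

model = build_model()
```

Freezing the base keeps the ImageNet features intact while only the new dense head is trained, which is what makes the approach workable on a small medical dataset.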
This document presents a model to detect and classify brain tumors using watershed algorithm for image segmentation and convolutional neural networks (CNN). The model takes MRI images as input, pre-processes the images by converting them to grayscale and removing noise, then uses watershed algorithm for image segmentation and CNN for tumor classification. The CNN architecture achieves classification of three tumor types. Previous related works that also used deep learning methods for brain tumor detection and classification are discussed. The proposed system methodology involves inputting MRI images, pre-processing, segmentation using watershed algorithm, and classification of tumorous vs non-tumorous cells using CNN.
The current big challenge facing radiologists in healthcare is the automatic detection and classification of masses in breast mammogram images. In the last few years, many researchers have proposed various solutions to this problem, but these solutions depend on annotated breast image data and fail when applied to unlabeled, non-annotated data. This paper therefore offers a solution based on a neural network that can work with any kind of unlabeled data. The algorithm automatically extracts tumors from images using a segmentation approach, after which the tumor's features are extracted for further processing. A double-thresholding-based segmentation technique is used to locate the tumor region precisely, which was not possible with existing techniques in the literature. The experimental results show that the proposed algorithm provides better accuracy than existing algorithms in the literature.
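The double-thresholding idea, keeping only pixels whose intensity falls between a low and a high threshold and then localizing the surviving region, can be sketched on a synthetic image. The thresholds and the toy image below are illustrative assumptions, not the paper's values:

```python
import numpy as np

# Synthetic "mammogram": a bright rectangular region on a dark noisy background.
img = np.zeros((64, 64))
img[20:30, 25:40] = 0.8                                  # bright "tumor" region
img += 0.05 * np.random.default_rng(2).random((64, 64))  # background noise

# Double threshold: keep pixels strictly between the low and high bounds.
low, high = 0.5, 0.95
mask = (img > low) & (img < high)

# Localize the surviving region with its bounding box.
ys, xs = np.nonzero(mask)
bbox = (ys.min(), ys.max(), xs.min(), xs.max())
```

The upper threshold is what distinguishes this from plain binarization: it also rejects saturated artifacts brighter than plausible tissue.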
Breast cancer classification with histopathological image based on machine le... (IJECEIAES)
This document summarizes a study that used pre-trained convolutional neural network (CNN) models to classify breast cancer histopathology images as benign or malignant. Five CNN architectures - ResNet-50, VGG-19, Inception-V3, AlexNet, and ResNet-50 as a feature extractor combined with random forest and k-nearest neighbors classifiers - were evaluated on the publicly available BreakHis dataset. The ResNet-50 network achieved the highest test accuracy of 97% for classifying the breast cancer images.
This document summarizes a study that developed an attention-based deep learning model to detect diabetic retinopathy from fundus images. The proposed model used an InceptionV3 architecture with additional convolutional layers and an attention mechanism. On a dataset of 35,000 fundus images, the model achieved 94.3% validation accuracy, an improvement over previous models. Activation heatmaps showed the model learned important retinal features. The study demonstrated that deep learning can effectively detect diabetic retinopathy from images and may help with early diagnosis.
IJCER (www.ijceronline.com) International Journal of computational Engineerin... (ijceronline)
The document describes a proposed method for designing a classifier to detect diabetes using neural networks and the fuzzy k-nearest neighbor algorithm. The method would train a neural network using the fuzzy k-NN algorithm on a server and use it to classify diabetes on a mobile device for convenience. Analysis in WEKA showed the method achieved around 72-74% accuracy on 10-fold cross validation of a diabetes dataset with attributes removed. The proposed method is expected to perform comparably to support vector machines with less complexity.
This document describes a proposed method for classifying chest x-ray images to diagnose lung infections using convolutional neural networks (CNNs). The objectives are to examine if transfer learning from different source domains can improve performance for classifying healthy, pneumonia and COVID-19 cases using a small dataset. The proposed methodology includes collecting datasets, training a CNN model using transfer learning, evaluating performance using a confusion matrix, and identifying opportunities for future enhancement like exploring different network architectures and domains.
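The confusion-matrix evaluation step mentioned above can be sketched for the three classes in question (healthy, pneumonia, COVID-19). The labels below are made up purely for illustration:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes=3):
    """Rows are true classes, columns are predicted classes."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Toy labels: 0 = healthy, 1 = pneumonia, 2 = COVID-19.
y_true = [0, 0, 1, 1, 1, 2, 2, 2, 2, 0]
y_pred = [0, 1, 1, 1, 2, 2, 2, 2, 0, 0]

cm = confusion_matrix(y_true, y_pred)
accuracy = np.trace(cm) / cm.sum()                 # diagonal = correct predictions
recall_per_class = np.diag(cm) / cm.sum(axis=1)    # sensitivity per class
precision_per_class = np.diag(cm) / cm.sum(axis=0)
```

Per-class recall matters more than overall accuracy here: missing a COVID-19 case (a false negative) is costlier than a false alarm.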
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing xml documents, and Binutil's readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format). Our preliminary results show that AFL+DIAR does not only discover new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU und die Lizenzen nach dem CCB- und CCX-Modell sind für viele in der HCL-Community seit letztem Jahr ein heißes Thema. Als Notes- oder Domino-Kunde haben Sie vielleicht mit unerwartet hohen Benutzerzahlen und Lizenzgebühren zu kämpfen. Sie fragen sich vielleicht, wie diese neue Art der Lizenzierung funktioniert und welchen Nutzen sie Ihnen bringt. Vor allem wollen Sie sicherlich Ihr Budget einhalten und Kosten sparen, wo immer möglich. Das verstehen wir und wir möchten Ihnen dabei helfen!
Wir erklären Ihnen, wie Sie häufige Konfigurationsprobleme lösen können, die dazu führen können, dass mehr Benutzer gezählt werden als nötig, und wie Sie überflüssige oder ungenutzte Konten identifizieren und entfernen können, um Geld zu sparen. Es gibt auch einige Ansätze, die zu unnötigen Ausgaben führen können, z. B. wenn ein Personendokument anstelle eines Mail-Ins für geteilte Mailboxen verwendet wird. Wir zeigen Ihnen solche Fälle und deren Lösungen. Und natürlich erklären wir Ihnen das neue Lizenzmodell.
Nehmen Sie an diesem Webinar teil, bei dem HCL-Ambassador Marc Thomas und Gastredner Franz Walder Ihnen diese neue Welt näherbringen. Es vermittelt Ihnen die Tools und das Know-how, um den Überblick zu bewahren. Sie werden in der Lage sein, Ihre Kosten durch eine optimierte Domino-Konfiguration zu reduzieren und auch in Zukunft gering zu halten.
Diese Themen werden behandelt
- Reduzierung der Lizenzkosten durch Auffinden und Beheben von Fehlkonfigurationen und überflüssigen Konten
- Wie funktionieren CCB- und CCX-Lizenzen wirklich?
- Verstehen des DLAU-Tools und wie man es am besten nutzt
- Tipps für häufige Problembereiche, wie z. B. Team-Postfächer, Funktions-/Testbenutzer usw.
- Praxisbeispiele und Best Practices zum sofortigen Umsetzen
HCL Notes und Domino Lizenzkostenreduzierung in der Welt von DLAU
Pneumonia detection using CNN
Diagnosing Pneumonia from Chest X-Rays Using Neural Networks
Tushar Dalvi Shantanu Deshpande Yash Iyangar Ashish Soni
x18134301 x18125514 x18124739 x18136664
Abstract—Disease diagnosis with radiology is a common practice in the medical domain but requires doctors to correctly interpret the results from the images. Over the years, the growing number of patients and the low availability of doctors created a need for new methods to identify and detect disease in a patient. In recent years, machine learning techniques have proved very effective in image-based disease diagnosis. In this paper, a method is proposed to diagnose Pneumonia using transfer learning with the VGG19 model. For pre-processing, online image augmentation has been used to reduce the class imbalance in the training data. The VGG19 model is initialized with ImageNet weights; the top three layers are stripped off the model and two dense layers are added, with ReLU as the activation function in one layer and softmax in the other. With this setting, we were able to achieve a recall of 0.96 and a precision of 0.87.
Index Terms—Pneumonia detection, CNN, Image Classification
I. INTRODUCTION
Radiology is one of the many branches of medicine that focuses on the use of medical images for the detection, diagnosis and characterization of disease. One of the main responsibilities of a radiologist is to view a medical image and, based on it, produce a written report of the observed findings [1]. Medical experts have used this technique for several decades to visualize and explore abnormalities and fractures in the body's organs. In recent years, advances in technology have upgraded healthcare systems, helping doctors achieve better diagnostic accuracy and reducing fatality rates. The chest is a vital part of the body, as it contains the respiratory organs responsible for sustaining essential life functions. Millions of people worldwide are diagnosed with chest diseases. Some of the diseases diagnosed using chest X-ray images are lung cancer, heart disease, Pneumonia, bronchitis, and fractures.
Pneumonia is one such chest disease, in which there is inflammation or infection of the lungs, most commonly caused by a virus or bacteria. A person can also acquire Pneumonia by inhaling vomit or other foreign substances. In a patient with this disease, the lungs' air sacs fill with mucus, pus and other fluids, which prevents the lungs from functioning properly and obstructs oxygen from reaching the blood and the body's cells effectively. According to data from the World Health Organization (WHO), 2.4 million people die every year due to Pneumonia [2].
One of the most tedious tasks for radiologists is the classification of X-ray abnormalities to detect and diagnose chest diseases. As a result, several algorithms have been proposed by researchers to perform this task accurately. Over the last couple of decades, computer-aided diagnosis (CAD) has been developed through various studies to interpret useful information and assist doctors in getting meaningful insight from an X-ray. Several research works have been carried out in the past using artificial intelligence methodologies. Building on these, an image processing system can be developed that uses X-ray images to detect diseases at an early stage with improved accuracy. Some of the neural network methodologies that researchers have used effectively to replace traditional pattern recognition methods for the diagnosis of diseases are the probabilistic neural network (PNN), multi-layer neural network (MLNN), generalized regression neural network (GRNN), learning vector quantization (LVQ), and radial basis function networks. These neural networks have shown increased accuracy in image classification and have thereby motivated researchers to explore more artificial intelligence techniques.
The Convolutional Neural Network is another frequently used deep learning architecture, owing to its power to extract features at various levels from images [3]. Based on the relevant research studies, we propose a deep convolutional neural network (CNN) to improve the accuracy of the diagnosis of chest disease, particularly Pneumonia.
II. RESEARCH QUESTION
Can recall be improved for chest X-ray Pneumonia detection using a CNN-based VGG19 model as compared to the state-of-the-art technique?
III. LITERATURE REVIEW
Owing to the difficulty of manually classifying X-ray images for a disease, various studies have evolved around the use of computer-aided diagnostics for obtaining meaningful X-ray information and thereby assisting physicians in gaining quantitative insight into the disease. In this section, we review the literature that supports the current research work.
In [4], the author examined three methods: backpropagation NN (BPNN), competitive NN (CpNN), and convolutional NN (CNN). In BPNN, the errors at the output layer are propagated back through the network, whereas CpNN is a two-layer NN based on a supervised algorithm. CNN is the strongest of the three, as it can include many hidden layers and performs convolutions and subsampling to obtain low- to high-level features from the input data. It has three kinds of layers: the convolutional layer, the subsampling/pooling layer, and the fully connected layer. Results showed that CNN took more time and more learning iterations, but achieved the highest recognition rate and better generalization power than BPNN and CpNN.
In the above study, the output in terms of precision and computation time was not as efficient because no prior image pre-processing was performed. In [5], histogram equalization is performed for image pre-processing, the images are divided and normalized, and cross-validation is conducted on the test data. The proposed model achieved an accuracy of 95.3% on test data.
Further to the outstanding performance of CNN in [4], another study [6] evaluated the performance of three CNN architectures: sequential CNN, residual CNN and Inception CNN. In the residual architecture, loss of information is prevented by passing information from earlier network layers downstream, thus solving the problem of representational bottlenecks. Six residual blocks were used, and performance was measured with the following metrics: accuracy, precision, AUC, specificity, and recall. Non-linearity is introduced using a Rectified Linear Unit (ReLU) layer. The VGG16 model outperformed all the other models. In [7], as in [5], the data was normalized by dividing all pixels by 255, transforming them to floating-point values. The CNN model was compared with an MLP, in which each node except the input nodes is a neuron that uses a non-linear activation function. The technique involves multiplying inputs by weights and adding a bias; the dropout technique was used to avoid overfitting and reduce training time. CNN outperformed MLP based on evaluation via cross-validation and the confusion matrix.
A different approach can be seen in [8], where the network utilizes a convolutional implementation of the final fully connected layer, which allows a 40-fold speedup. For dealing with imbalanced label distributions, a two-phase training procedure was proposed. The performance of three different optimizers was compared in [9]: the Adam optimizer, the momentum optimizer, and stochastic gradient descent. Filters are applied to the input image, and the resulting matrices are known as feature maps; filters of size 5x5 were used to extract features. To prevent overfitting, half of the neurons were disabled using the dropout technique. The Adam optimizer outperformed the other two. ResNet-50 was used for embedding the features in [10], and the resulting CNN model achieved an overall AUC of 0.816.
A different and novel approach is visible in [11], where the author used transfer learning (TL) to train the model. The advantage of TL is that a model can be trained even with limited data. A dataset of 5,232 images was used, and the model achieved an accuracy of 92.8%, with sensitivity and specificity of 93.2% and 90.1% respectively. In [12], the pre-processing steps involved increasing the number of images and rotating them by 40 degrees. Following a similar model-building process to [11], the author obtained an accuracy of 95.31%, better than [11].
Two different CNN architectures, GoogLeNet and AlexNet, were used in [13] and [14]. During the pre-processing stage, multiple rotations at 90, 180 and 270 degrees were performed, and an AUC of 0.99 was obtained. A rather different approach from CNN can be seen in [15], where the author used a technique called multiple-instance learning (MIL). The research overcomes drawbacks such as poor estimation of positive instances by reducing the number of iterations; the slow configuration is avoided by avoiding reclassification, and the AUC achieved is 0.86. In [16], the author tackled the issue of class imbalance using image augmentation methods such as rotating and flipping the images. In the last dense layer, the sigmoid function is used to avoid the problem of overfitting. As in [9], the Adam optimizer was used to reduce losses, with the batch size set to 64 and the number of epochs set to 100. The recall obtained was 96.7%, better than state-of-the-art techniques. A 121-layer convolutional neural network was trained on a large dataset of 100,000 X-ray images covering 14 diseases in [17]. Weights were initialized using pre-trained ImageNet weights, and the model was trained using Adam. An extension of the algorithm detects multiple diseases, and this model was found to outperform the state of the art.
A deep neural network algorithm, Mask R-CNN, is used in [18]. This method uses lung opacity as a feature for binary classification and Pneumonia detection. The final output is an ensemble of two models. The base network is trained with COCO weights based on ResNet50 and ResNet101 models to extract features from the images. The improved result due to the ensemble approach is 0.21. In [19], CNNs were explored for extracting representative and discriminative features from chest X-ray images for effective classification into various body parts; the capability of CNNs to capture image structure from feature maps was demonstrated, and hand-engineered features were easily outperformed by CNNs.
The usefulness of transfer learning [11] is further demonstrated in [20], where the authors used a pre-trained CheXNet model with 121 dense layers and applied it in 4 blocks. The output of the pre-trained model is transferred to the new model, and the Adam optimizer is used. The model was trained with multiple blocks and multiple layers, and the best validation accuracy obtained was 90.38% for the model with 6 layers. Demographic features such as age, weight and height were used in addition to image features in [21], and a balanced dataset was used for training. Five different algorithms were used: InceptionResNetV2, DenseNet121, InceptionV3, ResNet50 and VGG19. The batch size was 16, and stochastic gradient descent was chosen as the optimizer. The VGG19 model outperformed the other models, with an AUC of 0.9714 on training data and 0.9213 on testing data.
As in [20] and [11], transfer learning is used in [22]; however, in that study, owing to the high resolution of CXR images, an extra convolutional layer was added, and the proposed system showed enhanced screening performance. A multi-CNN model is used in [23], consisting of three CNN components. The output of each CNN component is a probability that classifies the image as normal or abnormal. The outcome of all three components is combined using an association fusion rule consisting of eight cases, which leads to the final classification of the image. Their model achieved an accuracy of 96%.
IV. METHODOLOGY
From the above literature review, we can say that in the medical domain it is hard for a layperson to identify a disease just by looking at images of the body part. To identify disease from X-rays or MRI scans, well-trained doctors or properly trained computer-based systems are required, because a layperson does not possess a doctor's knowledge of the disease. Even though we are building a model to detect Pneumonia, we would want our model to detect different diseases given an adequate amount of training examples and model weights. We therefore use the CRISP-DM methodology [24], which stands for Cross-Industry Standard Process for Data Mining. It has six steps, of which we use Business Understanding, Data Understanding, Data Preparation, Modeling and Evaluation. We will not be deploying our model for real-world use, as it was developed for this project and further methodologies need to be tested.
1) Setup: In this study, all the data is in image format, so we need to process the images to identify patterns and, based on those patterns, classify the different classes. To perform these tasks, we need a hardware system that can handle image processing without failure; a proper GPU is also a must to train the model in minimum time. The Python programming language has been used for this research, but due to a lack of hardware processing power, normal laptops or systems might not be able to handle the model-building process smoothly. To overcome this issue, this study uses a cloud-based platform named Google Colaboratory, provided by Google. It is a free Jupyter notebook environment that requires no prior setup and runs entirely in the cloud [25]. The environment initially provides 12 GB of RAM and 50 GB of storage as a freemium service, and Google also provides a free Tesla K80 graphics processor with about 12 GB of memory for fast computing. For this study, the data was uploaded to Google Drive for maximum availability, and all the tasks mentioned in the next section were performed in a Google Colab notebook.
V. DATA UNDERSTANDING AND PRE-PROCESSING
While training a neural network, a common problem is that not enough data is available to train the model to the full capability of the network [26]. Datasets in the medical domain also tend to have more class imbalance than datasets from other domains. The data gathered for our study was collected from Kaggle and is already divided into training, validation and testing sets, so we did not have to partition the data ourselves. As mentioned above, the dataset is highly imbalanced: the training data contains a total of 5,216 images, of which 1,341 are images of normal lungs and 3,875 are images of Pneumonia-infected lungs, which means that for every image of normal lungs there are roughly 3 images of Pneumonia-infected lungs.
Fig. 1. Normal Vs Pneumonia Cases.
In such cases, evaluation cannot rely only on metrics like accuracy. Many techniques are available to reduce the recurring problem of class imbalance in data, including random undersampling, random oversampling and the Synthetic Minority Over-sampling Technique (SMOTE). However, these methods carry a high probability of information loss: since we have to detect Pneumonia, dropping any Pneumonia case from the dataset would lose information for training. Hence, in data pre-processing, data augmentation is used to deal with the class imbalance without the need to collect new data. In this method, the images of the minority class are duplicated by flipping, rotating, and padding the available images of that class. There are two types of augmentation: offline and online. Offline augmentation creates a new dataset in a given directory and is time-consuming, whereas with online augmentation, with the help of Keras, batches of images are created and duplicated on the fly as input; online augmentation also needs much less space and consumes less computational power. For code reproducibility, we set the initial seed to 100 with the function random.seed(100). In augmentation, we performed rescaling, rotation, width shift, height shift, and horizontal flip using the ImageDataGenerator function from the Keras package [27]. In rescaling, each pixel value is divided by 255 and converted to a floating-point value. Images are rotated by 20 degrees, width and height are shifted by a fraction of 0.2 of the total width and height, and with the horizontal flip parameter set to true, the images are flipped horizontally. The images in the dataset varied in height and width, which was not suitable as input to the model, so all images were downsized to 128 x 128 pixels using the flow_from_directory function; the shuffle flag was also set to true so that during the training phase the model picks random images from the data. Providing all images at the same time would take a lot of computational power, so we set the batch size to 16 images per batch and the class mode to categorical. With the help of the above steps, we were able to reduce the class imbalance in the dataset. For the testing and validation datasets, only rescaling was performed during augmentation, also with a batch size of 16 images.
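As a concrete illustration of the online-augmentation transforms described above (rescaling by 1/255, random horizontal flips, and rotation), the following minimal NumPy sketch applies them to one batch. It is a conceptual stand-in for Keras's ImageDataGenerator, not the code used in this study, and it approximates small-angle rotation with quarter-turns for simplicity.

```python
import numpy as np

rng = np.random.default_rng(100)  # fixed seed for reproducibility, as in the study

def augment_batch(batch):
    """Apply online-style augmentation to a batch of images.

    batch: uint8 array of shape (n, height, width, channels).
    Returns float32 images rescaled to [0, 1], randomly flipped
    horizontally, and sometimes rotated by a quarter-turn (a crude
    stand-in for the 20-degree rotation used in the study).
    """
    out = batch.astype(np.float32) / 255.0           # rescale: pixel / 255
    for i in range(out.shape[0]):
        if rng.random() < 0.5:                       # random horizontal flip
            out[i] = out[i, :, ::-1, :].copy()
        if rng.random() < 0.5:                       # occasional rotation
            out[i] = np.rot90(out[i], k=1, axes=(0, 1)).copy()
    return out

# A fake batch of 16 random 128 x 128 RGB "images", as in the pipeline above.
batch = rng.integers(0, 256, size=(16, 128, 128, 3), dtype=np.uint8)
aug = augment_batch(batch)
```

Because the transforms only flip, rotate, and rescale, the batch shape is preserved and all pixel values end up in the [0, 1] range expected by the model.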
VI. MODELING
As this study aims to diagnose Pneumonia from chest X-ray images, a supervised learning technique is suitable. Previous studies such as [16] and [6] used convolutional neural networks and obtained promising results. The study by [21] is used as our base paper. As mentioned earlier in section IV-1, we use the Python programming language for this study; the main reasons are its large number of libraries, such as NumPy, Matplotlib, and TensorFlow, used to carry out various tasks, while cloud environments such as Google Colaboratory and Google Drive keep our personal systems available for other tasks. Building a model requires suitable computational power; with reference to the study in [21], we use the VGG19 model. VGG stands for the Visual Geometry Group at the University of Oxford, which developed the VGG16 model for an image recognition competition and won it. The numbers '16' and '19' stand for the number of layers in the CNN model. This CNN model was developed and trained on multiple images for image classification. It consists of 3x3 convolutional layers stacked on each other with the Rectified Linear Unit (ReLU) as the activation function; max-pooling layers are used to reduce the volumetric size, followed by fully connected layers, again with ReLU as their activation function. It can classify images from almost one thousand different categories and takes an input image of 224x224 pixels. The model's application in this project is explained in the next section.
1) Model Building: Building or selecting a model is one of the crucial tasks in data mining. Choosing the right model not only enhances the result but also reduces the time required to train the model. In this study, we use transfer learning to build our CNN model: in transfer learning, a pre-built model stores the knowledge gained while solving one problem and applies that knowledge to another related problem [28]. We have created a custom model with the help of VGG19 which, as mentioned above, is a renowned convolutional neural network architecture for object recognition developed and trained by Oxford's Visual Geometry Group [29]. In this study, the VGG19() function from the Keras package imports the VGG19 model. We exclude the top 3 layers of the model, add two dense layers, and provide input images with dimensions of 128 x 128 x 3. Pre-trained ImageNet weights are used by assigning the 'imagenet' attribute to the weights. Lastly, we set the trainable attribute to False so that no weights of the base model are updated while training. After importing the VGG19 model, we call the GlobalAveragePooling2D() function, which performs global average pooling, computing the average of each feature map from the previous CNN layer to reduce the effect of overfitting. Two dense layers are then added to the model, with ReLU and softmax as their respective activation functions. The reason for using the ReLU activation function is that it is non-linear and does not suffer from the vanishing-gradient problem in backpropagation the way the sigmoid function does; it also speeds up the model-building process for large neural networks. The softmax activation function produces output in the 0-to-1 range, so its output forms a probability distribution and helps in classifying multiple classes, as it outputs a probability between 0 and 1 for each class. The first dense layer uses 512 units for its output space [30]. The dropout rate is set to 0.7 to reduce the effect of overfitting.
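To make the custom classification head concrete, the following NumPy sketch shows what global average pooling followed by a 512-unit ReLU dense layer and a softmax dense layer computes. The shapes match the description above, but the feature maps and weights here are random placeholders, not the study's trained values.

```python
import numpy as np

rng = np.random.default_rng(0)

def global_average_pooling(feature_maps):
    # Average each feature map over its spatial dimensions:
    # (batch, h, w, channels) -> (batch, channels)
    return feature_maps.mean(axis=(1, 2))

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))  # subtract row max for stability
    return e / e.sum(axis=1, keepdims=True)

# Simulated VGG19 output for a batch of 4 images: 4 x 4 spatial, 512 channels.
features = rng.standard_normal((4, 4, 4, 512))

pooled = global_average_pooling(features)        # (4, 512)
w1 = rng.standard_normal((512, 512)) * 0.01      # dense layer: 512 units, ReLU
hidden = relu(pooled @ w1)
w2 = rng.standard_normal((512, 2)) * 0.01        # dense layer: 2 classes, softmax
probs = softmax(hidden @ w2)                     # each row is a probability distribution
```

Each row of `probs` sums to 1, which is what allows the two outputs to be read as the probabilities of the Normal and Pneumonia classes.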
2) Model Training: In the previous section, we built a custom model by adding 2 dense layers to the imported VGG19 model. The model is then configured to improve performance and minimize the loss function using the compile function, for which we need an optimizer to reduce the training loss. In this study, the Adam optimizer is used because it computes individual learning rates for different parameters. Secondly, categorical cross-entropy is used as the loss function. Categorical cross-entropy is used for single-label categorization, meaning each example in the dataset should belong to exactly one class; hence our model will not assign both the Normal and Pneumonia classes to a single image. For the metrics, we use accuracy to evaluate the model on the training data.
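For a single-label problem like ours, categorical cross-entropy compares the one-hot true label with the predicted softmax probabilities. A minimal NumPy version, shown as an illustrative sketch rather than the Keras internals, is:

```python
import numpy as np

def categorical_crossentropy(y_true, y_pred, eps=1e-7):
    """Mean categorical cross-entropy over a batch.

    y_true: one-hot labels, shape (batch, classes).
    y_pred: softmax probabilities, shape (batch, classes).
    """
    y_pred = np.clip(y_pred, eps, 1.0 - eps)        # avoid log(0)
    return float(-np.sum(y_true * np.log(y_pred), axis=1).mean())

# Two examples with [Normal, Pneumonia] one-hot labels vs. predicted probabilities.
y_true = np.array([[0.0, 1.0], [1.0, 0.0]])
y_pred = np.array([[0.1, 0.9], [0.8, 0.2]])
loss = categorical_crossentropy(y_true, y_pred)     # low: predictions match labels
```

Because each label is one-hot, only the predicted probability of the true class contributes to the loss, which is exactly the single-label behaviour described above.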
The model training phase is the most time-consuming phase of the entire data mining process; its duration depends on the size of the dataset and the number of iterations. To run this process, the fit_generator() function from Keras is used to fit the model on the generated training batches, with epochs set to 20 and verbose set to 1. The fit_generator function discards each batch of data once it has been used for training and generates a new one for the next step, repeating this for the given number of epochs. Model training carries a high computing cost and is a heavy process when it comes to image-based data classification, so we ran it on Google Colab with 12 GB of RAM and a Tesla K80 graphics processor with 12 GB of graphics memory [31]. Even with this hardware configuration, it took around 9 hours to complete 20 epochs. During training, we stored the log of each epoch to evaluate the performance of the model, which is explained in the next section.
VII. EVALUATION
In the previous section, we created our model and trained it on the training dataset; during training, logs were recorded to check the model's performance. In this section, we examine the model's performance from those logs. Evaluation helps compare the generated result against the expected result. Initially, with the help of the logs, we evaluate the accuracy and loss during training.
As can be seen in figure 2, the loss on the validation data varies across the epochs and gradually increases: at the very first epoch, the loss drops from approximately 0.5 to 0.4, then rises with many fluctuations to 0.8 by the 19th epoch, and in the last epoch falls to 0.64. The loss on the training data decreases steadily. This contrast in the loss plot, obtained by running the model for 20 epochs on both training and validation data, suggests that the model is not overfitting the training data.
Fig. 2. Loss Plot
The accuracy plot tells us how many images of both classes the model predicts correctly. Training accuracy increases gradually over the 20 epochs to approximately 0.92; for the validation dataset, accuracy sharply increases from 0.81 to 0.875, sharply decreases to 0.81 again, remains constant until epoch 12, and then varies sharply until the last epoch. Here too, we can observe that the training accuracy and validation accuracy do not move in the same direction, i.e. they do not increase steadily together, which indicates that the model is not overfitting the training data.
The model's performance is evaluated on the test data using the evaluate_generator() function from the Keras package. This function checks accuracy on the testing dataset, with the maximum generator queue set to 10 and use_multiprocessing set to true. In the result, we can see that the loss on the test dataset is around 33% and the accuracy is approximately 89%. Note that the evaluate_generator function first predicts outputs with the trained model and then evaluates performance by comparing them against the test dataset, which we call the accuracy measure.
Fig. 3. Accuracy Plot
As mentioned in Section V, our dataset is from the healthcare domain, so recall and precision are more appropriate measures than raw accuracy. To calculate recall and precision we need data that has not yet been shown to the model, so predictions are made on the testing dataset. Before that, the true classes of the test images must be identified so that the predictions can be compared against them. The saved model is loaded with the load_model() function from the Keras package. Using the cv2 (OpenCV) library, all images in the testing dataset are resized to 128 x 128 pixels to reduce computational cost, and each image is labelled with its class: 1 for a Pneumonia-infected lung image and 0 for a normal lung image. This step allows each image's actual class to be compared with its predicted class. Prediction is initiated with the predict() function using a batch size of 16, and the argmax function (with axis set to 1) returns the index of the maximum value along each row, i.e. the predicted class. The result is plotted as a confusion matrix using the matplotlib library, and precision and recall are calculated from it, as discussed in the Results section.
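As a minimal numpy sketch of the prediction-to-confusion-matrix step (the probability array below is a made-up stand-in for the model's predict() output):

```python
import numpy as np

# Hypothetical softmax outputs for 6 test images: columns are
# class 0 (normal) and class 1 (pneumonia) probabilities.
probs = np.array([
    [0.9, 0.1],
    [0.2, 0.8],
    [0.6, 0.4],
    [0.1, 0.9],
    [0.7, 0.3],
    [0.4, 0.6],
])
y_true = np.array([0, 1, 1, 1, 0, 0])

# argmax along axis=1 picks the highest-probability class per row.
y_pred = np.argmax(probs, axis=1)

# Build a 2x2 confusion matrix: rows = true class, cols = predicted class.
cm = np.zeros((2, 2), dtype=int)
for t, p in zip(y_true, y_pred):
    cm[t, p] += 1

print(y_pred)  # [0 1 0 1 0 1]
print(cm)      # [[2 1]
               #  [1 2]]
```

In practice the same matrix can be produced with sklearn.metrics.confusion_matrix and rendered with matplotlib, as the paper does.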
VIII. RESULTS
The performance of our classification model is described
through a table known as the confusion matrix, as given below:
The basic terms to be interpreted from our confusion matrix are:
True Positives (TP): cases where the model predicted yes and the person does have Pneumonia, i.e. 376.
True Negatives (TN): cases where the model predicted no and the person does not have Pneumonia, i.e. 177.
False Positives (FP): the model predicted yes, but the person does not have Pneumonia, i.e. 57.
False Negatives (FN): the model predicted no, but the person does have Pneumonia, i.e. 34.
Fig. 4. Confusion Matrix
How often our model is correct when it predicts yes is given by precision, calculated as:
Precision = TP / (TP + FP), i.e. 0.87
Also, for our study, we will be focusing on Recall as a
metric and expect it to be high. Recall would be a good
measure of fit for our case as our goal is to correctly classify
all the Pneumonia cases to save the patient’s life. How many
images the model has classified correctly out of all the positive
classes is given by Recall and it should be as high as possible.
Recall = TP / (TP + FN)
In our case, the achieved recall is 0.96. A high recall is desirable, but whenever recall is high, precision tends to fall; that is, the number of false positives increases, meaning the model wrongly classifies some normal lung images as Pneumonia-infected. There is therefore a trade-off between recall and precision that must be managed, and the right balance depends on the domain and industry standards.
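The two formulas can be checked with a few lines of Python; the counts below are illustrative stand-ins chosen for the example, not the paper's exact confusion-matrix figures:

```python
def precision(tp, fp):
    """Precision: how often a positive (yes) prediction is correct."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Recall: fraction of actual positives the model catches."""
    return tp / (tp + fn)

# Illustrative counts (hypothetical, for demonstrating the formulas).
tp, fp, fn = 90, 13, 4
print(round(precision(tp, fp), 2))  # 0.87
print(round(recall(tp, fn), 2))     # 0.96
```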
IX. CONCLUSION AND FUTURE WORK
In this study, a methodology has been proposed to diagnose Pneumonia from chest X-ray images. The selected dataset had a high number of Pneumonia images and comparatively few normal images, with an imbalance ratio of roughly 3:1, so an image augmentation procedure was used to reduce the class imbalance. Due to time constraints and limited resources, other pre-processing methods such as pixel brightness transformation and image segmentation could not be performed; we suggest implementing these techniques in future work. The VGG19 model was used, with three of its layers removed and two dense layers added. Building a CNN model from scratch requires high computational capacity, and as time was short, we suggest building a CNN from scratch and fine-tuning its parameters as future work. We were able to achieve a decent result with a recall of 0.96; with a different approach, the result could be improved further.
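The imbalance-reducing augmentation described above can be sketched with a simple horizontal flip in numpy. The array shapes and the flip-only strategy are illustrative assumptions; the paper's actual augmentation pipeline may combine several transformations:

```python
import numpy as np

def augment_minority(minority_imgs, target_count):
    """Grow the minority class toward target_count by appending horizontally
    flipped copies of its images (a common, label-preserving augmentation)."""
    augmented = list(minority_imgs)
    i = 0
    while len(augmented) < target_count:
        # np.fliplr mirrors an image left-to-right.
        augmented.append(np.fliplr(minority_imgs[i % len(minority_imgs)]))
        i += 1
    return np.array(augmented)

# Toy 3:1 imbalance: 3 "normal" 4x4 images vs an implied 9 "pneumonia" ones.
rng = np.random.default_rng(1)
normal = rng.uniform(size=(3, 4, 4))
balanced_normal = augment_minority(normal, target_count=9)
print(balanced_normal.shape)  # (9, 4, 4)
```

Flipping is label-preserving for chest X-rays only up to anatomical asymmetry, which is one reason richer augmentation (rotation, zoom, shifts) is often preferred in practice.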