A presentation of my MSc in Mathematical Sciences thesis at the African Institute of Mathematical Sciences (AIMS), Rwanda. The presentation explores the application of Deep Transfer Learning to the diagnosis and classification of traditional pneumonia and pneumonia induced by COVID-19 using chest X-ray images.
The recent COVID-19 pandemic is a major threat to the global population. Healthcare sectors are struggling to cope with rising daily cases due to limited medical supplies and a lack of facilities. Therefore, there is a need for an alternative, efficient diagnosis method for detecting COVID-19. Patients with COVID-19 usually show a characteristic abnormality in chest radiography. The use of chest X-rays not only identifies these abnormalities but also results in a faster diagnosis. In this project, with the aid of transfer learning methods, a convolutional neural network (CNN) based on the VGG16 architecture, pre-trained on the ImageNet dataset, is used to diagnose patients with COVID-19. The proposed model is trained and tested using a publicly available chest X-ray database. A Python-based graphical user interface (GUI) is developed to classify a given chest X-ray as either COVID positive or COVID negative. With proper hyper-parameter tuning, the model achieves a training accuracy of 98.72%.
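As a rough, hedged sketch of the transfer-learning recipe described above (a VGG16 backbone pre-trained on ImageNet with a new binary head), where the input size, layer widths and training settings are placeholder assumptions rather than the project's actual configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers

# VGG16 pre-trained on ImageNet, reused as a fixed feature extractor.
base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                   input_shape=(224, 224, 3), pooling="avg")
base.trainable = False

# New classification head: COVID positive vs. COVID negative.
model = tf.keras.Sequential([
    base,
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```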
CXR-ACGAN: Auxiliary Classifier GAN for Conditional Generation of Chest X-Ray... - Giorgio Carbone
CXR-ACGAN: Auxiliary Classifier GAN (AC-GAN) for Chest X-Ray (CXR) Images Generation (Pneumonia, COVID-19 and healthy patients) for the purpose of data augmentation. Implemented in TensorFlow, trained on COVIDx CXR-3 dataset.
Covid 19 diagnosis using x-ray images and deep learning - Shamik Tiwari
Researchers developed a convolutional neural network (CNN) model to classify chest X-ray images into three classes: positive for COVID-19, normal, or viral pneumonia. The model was trained on these image sets and achieved 94% accuracy on the training data and 96% on the validation data. When tested, the model achieved 94% accuracy in classifying chest X-ray images into the three classes. The goal was to create a faster and less complex model than previous approaches for detecting COVID-19 in chest images using artificial intelligence.
Developed a project with 3 more colleagues for pneumonia detection from chest X-ray images using a Convolutional Neural Network. Used a confusion matrix, recall, and precision to check the model performance on the testing data.
Detect COVID-19 with Deep Learning - A survey on Deep Learning for Pulmonary M... - JumanaNadir
Who knew Deep Learning could come in so handy during this period of global crisis? There is as yet no vaccine or effective treatment for the 2019 novel Coronavirus (COVID-19), but generative deep learning is helping to detect and monitor coronavirus patients through chest CT screening.
A new classification model for covid 19 based on convolutional neural networks - Aboul Ella Hassanien
This document proposes a new convolutional neural network model based on AlexNet to classify CT chest scans into five categories: normal lung, COVID-19, viral pneumonia, bacterial pneumonia, and mycoplasma pneumonia. The model was trained on a dataset of 5000 CT images across the five categories. Experimental results showed the model achieved over 99% accuracy in classifying the different pneumonia types based on CT scans after 9 epochs of training. The authors conclude the proposed model is effective at distinguishing between the five chest CT image types but further optimization may improve performance.
Chest X-ray Pneumonia Classification with Deep Learning - BaoTramDuong2
This document discusses using deep learning models to classify chest x-ray images as either normal or pneumonia. It obtained a dataset of over 5,800 pediatric chest x-rays from a Chinese hospital. Various deep learning models were explored, including multilayer perceptrons, convolutional neural networks, and transfer learning with VGG16, which achieved 92% validation accuracy. The document recommends future work such as distinguishing between viral and bacterial pneumonia and combining models with SVM. It also discusses recommendations to reduce childhood pneumonia prevalence.
Covid-19 Detection using Chest X-Ray Images - IRJET Journal
1) The document discusses using deep learning and machine learning models to detect Covid-19 from chest x-ray images with a high accuracy.
2) Specifically, it evaluates using convolutional neural networks (CNNs) which are well-suited for medical image classification tasks since they can learn spatial relationships within images.
3) Previous studies that developed CNN and other models for Covid detection from chest x-rays are reviewed, finding classification accuracies from 87-99% depending on the dataset and model used.
Application of image segmentation in brain tumor detection - Myat Myint Zu Thin
This document discusses applications of image segmentation in brain tumor detection. It begins by defining brain tumors and different types. It then discusses various image segmentation methods that can be used for brain tumor segmentation, including k-means clustering, region-based watershed algorithm, region growing, and active contour methods. It demonstrates how these methods can be implemented in Python for segmenting tumors from MRI images. The document also discusses computer-aided diagnosis systems and the roles of artificial intelligence and machine learning in medical image analysis and cancer diagnosis using image processing.
Image classification with Deep Neural Networks - Yogendra Tamang
This document discusses image classification using deep neural networks. It provides background on image classification and convolutional neural networks. The document outlines techniques like activation functions, pooling, dropout and data augmentation to prevent overfitting. It summarizes a paper on ImageNet classification using CNNs with multiple convolutional and fully connected layers. The paper achieved state-of-the-art results on ImageNet in 2010 and 2012 by training CNNs on a large dataset using multiple GPUs.
Lung Cancer Detection Using Convolutional Neural Network - IRJET Journal
This document describes a study that uses a convolutional neural network (CNN) to classify lung cancer in CT scans. The CNN model is trained on a dataset of 1018 patient CT scans containing annotations of lung nodules as benign or malignant. The CNN architecture includes convolution layers to extract features, max pooling layers to reduce computations, dropout layers to prevent overfitting, and fully connected layers to classify scans. The model achieves a 65% accuracy on the training set at detecting cancer in new CT scans. The CNN is integrated into a web application to allow doctors to efficiently analyze scans for lung cancer.
Brain Tumor Detection Using Image Processing - Sinbad Konick
The process of brain tumor detection using various filters, and finding the best possible approach: processing the image with different filters and comparing the results.
details about brain tumor
literature survey on many reference papers related to brain tumor detection using various techniques
our proposed novel methodology for brain tumor detection
Color fundamentals and color models - Digital Image Processing - Amna
This presentation is based on Color fundamentals and Color models.
~ Introduction to Colors
~ Color in Image Processing
~ Color Fundamentals
~ Color Models
~ RGB Model
~ CMY Model
~ CMYK Model
~ HSI Model
~ HSI and RGB
~ RGB To HSI
~ HSI To RGB
Scene recognition using Convolutional Neural Network - DhirajGidde
The document discusses scene recognition using convolutional neural networks. It begins with an abstract stating that scene recognition allows context for object recognition. While object recognition has improved due to large datasets and CNNs, scene recognition performance has not reached the same level of success. The document then discusses using a new scene-centric database called Places with over 7 million images to train CNNs for scene recognition. It establishes new state-of-the-art results on several scene datasets and allows visualization of network responses to show differences between object-centric and scene-centric representations.
Classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. We'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded.
This document provides an overview of image processing. It defines image processing as extracting useful information from images through modification and analysis. Some key applications mentioned include astronomy, medicine, biometrics, remote sensing, and personal photos. Essential aspects of image processing include signal processing, matrix theory, and probability theory. The main purposes of image processing are visualization, image enhancement, retrieval, measurement, and recognition. Future developments may integrate optical computing to match or exceed human capabilities in image analysis.
This document provides an overview of digital image processing. It defines what an image is, noting that an image is a spatial representation of a scene represented as an array of pixels. Digital image processing refers to processing digital images on a computer. The key steps in digital image processing are image acquisition, enhancement, restoration, compression, morphological processing, segmentation, representation, and recognition. Digital image processing has many applications including medical imaging, traffic monitoring, biometrics, and computer vision.
Digital image processing involves representing images as arrays of pixels and then processing those pixels to improve or analyze the image. It has applications in fields like medicine, mapping, law enforcement, and human-computer interfaces. The key stages of digital image processing include image acquisition, enhancement, restoration, morphological processing, segmentation, object recognition, representation and description, compression, and color image processing.
The document discusses sources of distortion in underwater images such as light scattering and color change. It proposes a method called Wavelength Compensation and Dehazing (WCID) to enhance underwater image visibility and color fidelity. WCID uses a hazy image formation model and dark channel prior to estimate depth maps and remove haze. It can also detect and remove effects of artificial light sources. The method is shown to outperform other dehazing techniques in experiments by achieving higher signal-to-noise ratios and more robust performance at different water depths.
Lung Cancer Detection using transfer learning.pptx.pdf - jagan477830
Lung cancer is one of the deadliest cancers worldwide. However, the early detection of lung cancer significantly improves the survival rate. Cancerous (malignant) and noncancerous (benign) pulmonary nodules are small growths of cells inside the lung. Detection of malignant lung nodules at an early stage is crucial for the prognosis.
BRAIN TUMOR MRI IMAGE SEGMENTATION AND DETECTION IN IMAGE PROCESSING - Dharshika Shreeganesh
Image processing is an active research area in which medical image processing is a highly challenging field. Medical imaging techniques are used to image the inner portions of the human body for medical diagnosis. Brain tumor is a serious, life-altering disease condition. Image segmentation plays a significant role in image processing as it helps in the extraction of suspicious regions from the medical images. In this paper we have proposed segmentation of brain MRI images using the K-means clustering algorithm, followed by morphological filtering which avoids the mis-clustered regions that can inevitably be formed after segmentation of the brain MRI image, for detection of the tumor location.
Ray tracing is a technique for generating realistic images by tracing the path of light through pixels and simulating interactions with virtual objects. It works by casting rays from the eye through each pixel to find the closest object intersected, then calculating the color returned to the eye along the ray. This produces effects like shadows, reflections, and refractions. While computationally expensive, ray tracing can create highly realistic images and simulate optical effects like cameras. It involves defining objects and light sources, tracing rays through pixels, finding intersections with objects, and computing returned colors.
The document discusses digital image processing and provides an overview of key concepts. It defines digital and analog images and explains how digital images are represented by pixels. It outlines fundamental steps in digital image processing like image acquisition, enhancement, restoration, morphological processing, segmentation, representation, compression and object recognition. It also discusses applications in areas like remote sensing, medical imaging, film and video effects.
This document summarizes several methods for real-time object detection and tracking in video sequences. Traditional methods like absolute differences and census transforms are compared to modern methods like KLT (Lucas-Kanade Technique) and Meanshift. Hardware requirements for real-time tracking like memory, frame rate, and processors are also discussed. The document provides examples of applications for object detection and tracking in traffic monitoring, surveillance, and mobile robotics.
When Classifier Selection meets Information Theory: A Unifying View - Mohamed Farouk
Classifier selection aims to reduce the size of an ensemble of classifiers in order to improve its efficiency and classification accuracy. Recently an information-theoretic view was presented for feature selection. It derives a space of possible selection criteria and shows that several feature selection criteria in the literature are points within this continuous space. The contribution of this paper is to export this information-theoretic view to solve an open issue in ensemble learning, which is classifier selection. We investigated a couple of information-theoretic selection criteria that are used to rank classifiers.
This document contains lecture notes on machine learning and deep learning. It discusses regression, classification, and neural networks. For regression and classification, it presents the optimal functions that minimize error and relates them to conditional expectations. It also provides bounds on the generalization error of functions learned through empirical risk minimization. For neural networks, it discusses their ability to approximate functions and bounds the VC-dimension of neural networks with multiple hidden layers.
Similar to Transfer Learning for the Detection and Classification of traditional pneumonia and pneumonia induced by the COVID-19 from Chest X-ray Images
On New Root Finding Algorithms for Solving Nonlinear Transcendental Equations - AI Publications
In this paper, we present new iterative algorithms to find a root of given nonlinear transcendental equations. In the proposed algorithms, we use nonlinear Taylor's polynomial interpolation and a modified error correction term with a fixed-point concept. We also investigate possible extensions of the higher-order iterative algorithms from a single variable to higher dimensions. Several numerical examples are presented to illustrate the proposed algorithms.
This document proposes a method for weakly supervised regression on uncertain datasets. It combines graph Laplacian regularization and cluster ensemble methodology. The method solves an auxiliary minimization problem to determine the optimal solution for predicting uncertain parameters. It is tested on artificial data to predict target values using a mixture of normal distributions with labeled, inaccurately labeled, and unlabeled samples. The method is shown to outperform a simplified version by reducing mean Wasserstein distance between predicted and true values.
Surrogate models emulate expensive computer simulations. The objective is to approximate a function, $f$, of $d$ variables to a given tolerance, $\varepsilon$, using as few function values as possible, preferably $O(d)$. We explain how tractability theory provides lower bounds on the number of function values required for any possible method. We also propose a method for sampling $f$ and approximating $f$ that achieves this objective, and describe the kind of underlying structure that $f$ must have for success.
The document discusses using unusual data sources in insurance. It provides examples of using pictures, text, social media data, telematics, and satellite imagery in insurance. It also discusses challenges in analyzing complex and high-dimensional data from these sources and introduces machine learning tools like PCA, generalized linear models, and evaluating models using loss, risk, and cross-validation.
The numerical solution of the Huxley equation by two finite difference methods is presented. The first is the explicit scheme and the second is the Crank-Nicholson scheme. The comparison between the two methods showed that the explicit scheme is easier and has faster convergence, while the Crank-Nicholson scheme is more accurate. In addition, the stability analysis of the two schemes using the Fourier (von Neumann) method is investigated. The resulting analysis showed that the first scheme is conditionally stable if r ≤ 2 − aβ∆t, ∆t ≤ 2(∆x)², and the second scheme is unconditionally stable.
Accelerating Metropolis Hastings with Lightweight Inference Compilation - Feynman Liang
This document summarizes research on accelerating Metropolis-Hastings sampling with lightweight inference compilation. It discusses background on probabilistic programming languages and Bayesian inference techniques like variational inference and sequential importance sampling. It introduces the concept of inference compilation, where a neural network is trained to construct proposals for MCMC that better match the posterior. The paper proposes a lightweight approach to inference compilation for imperative probabilistic programs that trains proposals conditioned on execution prefixes to address issues with sequential importance sampling.
AIMS Block Presentation: Deep Transfer Learning for Magnetic Resonance Image ... - Yusuf Brima
This paper adopted a Deep Residual Convolutional Neural Network (ResNet50) architecture for the experiments, amongst other discriminative learning techniques, to train the model. Using the novel dataset and two publicly available MRI brain datasets, the proposed approach attained a classification accuracy of 86.40% on the proposed dataset, 93.80% on the Harvard Whole Brain Atlas, and 97.05% on the School of Biomedical Engineering dataset. Our experimental results demonstrate that the proposed Transfer Learning framework is a potentially effective approach for brain tumour multi-classification tasks.
Fosdem 2013 petra selmer flexible querying of graph data - Petra Selmer
These are the slides from a talk I presented at the Graph Processing room at FOSDEM 2013, in which I discussed my PhD topic: a query language allowing for the flexible querying of complex paths within graph structured data
Statistics (1): estimation, Chapter 2: Empirical distribution and bootstrap - Christian Robert
The document discusses the bootstrap method and its applications in statistical inference. It introduces the bootstrap as a technique for estimating properties of estimators like variance and distribution when the true sampling distribution is unknown. This is done by treating the observed sample as if it were the population and resampling with replacement to create new simulated samples. The bootstrap then approximates characteristics of the sampling distribution, allowing inferences like confidence intervals to be constructed.
Bayesian inference for mixed-effects models driven by SDEs and other stochast... - Umberto Picchini
An important, and well studied, class of stochastic models is given by stochastic differential equations (SDEs). In this talk, we consider Bayesian inference based on measurements from several individuals, to provide inference at the "population level" using mixed-effects modelling. We consider the case where dynamics are expressed via SDEs or other stochastic (Markovian) models. Stochastic differential equation mixed-effects models (SDEMEMs) are flexible hierarchical models that account for (i) the intrinsic random variability in the latent states dynamics, as well as (ii) the variability between individuals, and also (iii) account for measurement error. This flexibility gives rise to methodological and computational difficulties.
Fully Bayesian inference for nonlinear SDEMEMs is complicated by the typical intractability of the observed data likelihood, which motivates the use of sampling-based approaches such as Markov chain Monte Carlo. A Gibbs sampler is proposed to target the marginal posterior of all parameters of interest. The algorithm is made computationally efficient through careful use of blocking strategies, particle filters (sequential Monte Carlo) and correlated pseudo-marginal approaches. The resulting methodology is flexible, general and able to deal with a large class of nonlinear SDEMEMs [1]. In a more recent work [2], we also explored ways to make inference even more scalable to an increasing number of individuals, while also dealing with state-space models driven by stochastic dynamic models other than SDEs, e.g. Markov jump processes and nonlinear solvers typically used in systems biology.
[1] S. Wiqvist, A. Golightly, AT McLean, U. Picchini (2020). Efficient inference for stochastic differential mixed-effects models using correlated particle pseudo-marginal algorithms, CSDA, https://doi.org/10.1016/j.csda.2020.107151
[2] S. Persson, N. Welkenhuysen, S. Shashkova, S. Wiqvist, P. Reith, G. W. Schmidt, U. Picchini, M. Cvijovic (2021). PEPSDI: Scalable and flexible inference framework for stochastic dynamic single-cell models, bioRxiv doi:10.1101/2021.07.01.450748.
This document provides practice exercises for an introduction to research in information studies course. It includes questions on defining statistical terms, computing descriptive statistics like mean, median and mode for sample data, generating frequency distributions and histograms, hypothesis testing, and constructing confidence intervals. The exercises cover topics like measures of central tendency and dispersion, probability distributions, sampling distributions, and both descriptive and inferential statistics.
This document discusses various methods for estimating normalizing constants that arise when evaluating integrals numerically. It begins by noting there are many computational methods for approximating normalizing constants across different communities. It then lists the topics that will be covered in the upcoming workshop, including discussions on estimating constants using Monte Carlo methods and Bayesian versus frequentist approaches. The document provides examples of estimating normalizing constants using Monte Carlo integration, reverse logistic regression, and Xiao-Li Meng's maximum likelihood estimation approach. It concludes by discussing some of the challenges in bringing a statistical framework to constant estimation problems.
This document discusses nested sampling, a technique for Bayesian computation and evidence evaluation. It begins by introducing Bayesian inference and the evidence integral. It then shows that nested sampling transforms the multidimensional evidence integral into a one-dimensional integral over the prior mass constrained to have likelihood above a given value. The document outlines the nested sampling algorithm and shows that it provides samples from the posterior distribution. It also discusses termination criteria and choices of sample size for the algorithm. Finally, it provides a numerical example of nested sampling applied to a Gaussian model.
A Statistical Perspective on Retrieval-Based Models.pdf - Po-Chuan Chen
This paper presents a statistical perspective on retrieval-based models for classification. It analyzes such models using two different frameworks: local empirical risk minimization and classification in an extended feature space. For local empirical risk minimization, the paper provides assumptions and derives an excess risk bound that decomposes the error of the local model into different terms related to the local vs global optimal risk, sample vs retrieved set risk, generalization error of the local model, and central absolute moment of the local model. It also shows how to tighten the bound by leveraging the local structure of the data distribution.
Research internship on optimal stochastic theory with financial application u... - Asma Ben Slimene
This is a presentation of my second-year internship on optimal stochastic theory, how we can apply it to some financial applications, and then how we can solve such problems using finite difference methods!
Enjoy it!
Presentation on stochastic control problem with financial applications (Merto... - Asma Ben Slimene
This is an introduction to optimal stochastic control theory with two applications in finance: the Merton portfolio problem and an investment/consumption problem, with numerical results using a finite differences approach.
Remote-sensing data offer unprecedented opportunities to address Earth-system-science challenges, such as understanding the relationship between the atmosphere and Earth's surface using physics, chemistry, biology, mathematics, and computing. Statistical methods have often been seen as a hybrid of the latter two, so that a lot of attention has been given to computing estimates but far less to quantifying the uncertainty of the estimates. In my "bird's-eye view," I shall give a way to look at the problem using conditional probability models and three states of knowledge. Examples will be given of analyzing remotely sensed data of a leading greenhouse gas, carbon dioxide.
Hierarchical matrices for approximating large covariance matrices and computin... - Alexander Litvinenko
I give an overview of methods for computing the KL expansion. For a tensor grid one can use high-dimensional Fourier methods; for non-tensor grids, hierarchical matrices.
This document outlines the history and future of intelligence and artificial intelligence. It discusses intelligence from early human myths and legends involving intelligent creations to modern advances in computer science and AI technologies. The document is divided into four parts that cover the primal desire for intelligence, the cognitive revolution through human history, the development of AI from early automatons to modern deep learning systems, and the road ahead in ensuring AI is developed safely and for the benefit of humanity.
Guides to Securing Scholarships Overseas - Yusuf Brima
This short talk distills the roadmap toward getting a scholarship to study abroad. It touches on the key ideas to make you stand out and the pitfalls to avoid in your hunt for financial aid to study abroad.
African Accents International Institute (AAII-SL): Work overview - Yusuf Brima
African Accents International Institute (AAII-SL) is a grassroots NGO with a vision of creating a literate (knowledge) society that contributes to the economic, social, political and cultural growth of Sierra Leone.
The art gallery problem is formulated in geometry as the minimum number of guards that need to be placed in an n-vertex simple polygon such that all points of the interior are visible. Visibility is defined such that two points u and v are mutually visible if the line segment joining them lies inside the polygon
This document provides an overview of an extension training programme in information technology at the University of Makeni. It outlines the course topics which include HTML, CSS, JavaScript, PHP, SQL, and security best practices. It discusses classroom codes, communication methods, prerequisites, and required tools. The course will use theoretical frameworks, hands-on labs, assignments, and projects. It will also provide a brief history of the internet and cover topics like HTTP, URLs, web browsers, search engines, domain registration, and web hosting options.
Big data for healthcare analytics final -v0.3 miz - Yusuf Brima
This document provides an overview of sources of big data in healthcare and their applications. It discusses traditional sources like medical claims, electronic health records, and medical imaging. It also examines emerging sources like internet of things sensor data, social media data, mobile network data, and satellite imagery. The document outlines how these diverse data sources can be used for applications like personalized healthcare, disease surveillance, disaster management, and climate change adaptation. It concludes that big data opens new opportunities to improve healthcare through right interventions for patients. However, issues around data representativeness and bias must be addressed.
Detecting malaria using a deep convolutional neural network - Yusuf Brima
Experiment with a Deep Residual Convolutional Neural Network to classify microscopic blood cell images (Uninfected, Parasitized), utilizing the ResNet architecture (Deep Residual Learning for Image Recognition, He et al., 2015). Uses Keras with a TensorFlow backend.
Transfer Learning for the Detection and Classification of traditional pneumonia and pneumonia induced by the COVID-19 from Chest X-ray Images
1. Transfer Learning for the Detection and Classification of traditional pneumonia and pneumonia induced by the COVID-19 from Chest X-ray Images
Yusuf Brima
Supervised by
Dr. Marcellin Atemkeng
Dr. Stive Roussel Tankio Djiokap
August 9, 2021
3. Research Problem
Figure 1: Map of Coronavirus-related incidence rates across the globe, as reported to Johns Hopkins University on June 17, 2021 (source: Johns Hopkins University).
4. Research Problem
- There are various molecular and serologic assays to test for Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2).
- Reverse Transcriptase-Polymerase Chain Reaction (RT-PCR) is the laboratory standard for SARS-CoV-2 testing.
- RT-PCR has a very high false-negative rate.
- RT-PCR testing is very time-consuming and presents a slew of laboratory logistical challenges.
8. Research Problem
Figure 2: Map of Coronavirus-related confirmed deaths per 100,000 population across the globe, with a total of 3,861,121 deaths reported to Johns Hopkins University on June 17, 2021 (source: Wikipedia).
9. Research Objectives
- To detect and classify traditional pneumonia and pneumonia induced by the SARS-CoV-2 virus using chest X-ray scans.
- For safe, accurate, less cumbersome and timely diagnosis.
- Using Deep Transfer Learning.
12. Learning Background
Figure 3: Hierarchy of Learning in Intelligent Machines.
13. Learning Background
Figure 4: A framework for supervised learning [1], relating the unknown target function, the training samples, the learning algorithm, the hypothesis set, and the final hypothesis.
14. Learning Background
General Learning Setting
- A function estimation model: a generator produces random vectors x from an unknown probability distribution P(x).
- A process maps each vector x to an output vector y according to an unknown conditional probability distribution P(y|x), so that
  P(y, x) = P(y|x)P(x).   (1)
- A learning setting T is given by
  T := {H, P(Z), L},   (2)
  where H ⊂ Y^X is the hypothesis space of learnable models and P(Z) is the probability measure of the examples, that is,
  Z := {(x_1, y_1), (x_2, y_2), . . . , (x_m, y_m)}.
15. Learning Background
General Learning Setting
- L is the loss function, L : H × Z → R, for example the cross-entropy loss
  L_CE = − Σ_{i=1}^{n} y_i log(f_θ(x_i)),
  or the mean-squared-error loss
  L_mse = (1/2) Σ_{i=1}^{n} (y_i − f_θ(x_i))².
- The risk functional R is
  R(θ) = ∫ L(y, f_θ(x)) dP(x, y),   ∀ θ ∈ Θ.   (3)
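To make the two example losses concrete, here is a minimal NumPy sketch (purely illustrative; the toy labels and predicted probabilities are assumptions, not data from the thesis) that evaluates the cross-entropy and mean-squared-error losses for a small batch:

```python
import numpy as np

# Toy batch: one-hot labels y_i and model outputs f_theta(x_i) for n = 3 samples.
y_true = np.array([[1, 0], [0, 1], [1, 0]], dtype=float)   # one-hot targets
y_prob = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])    # predicted class probabilities

# Cross-entropy loss: L_CE = -sum_i y_i * log(f_theta(x_i))
L_ce = -np.sum(y_true * np.log(y_prob))

# Mean-squared-error loss: L_mse = 1/2 * sum_i (y_i - f_theta(x_i))^2
L_mse = 0.5 * np.sum((y_true - y_prob) ** 2)

print(f"cross-entropy: {L_ce:.4f}, mse: {L_mse:.4f}")
```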
16. Learning Background
General Learning Setting
T is a learnable setting if the corresponding hypothesis space H is learnable; if H has VC dimension n, we say T has VC dimension n.
  f* = min_{f_θ ∈ H} E[L(y, f(x))]
The empirical risk is
  R̂_m(θ) = (1/m) Σ_{i=1}^{m} L(y_i, f_θ(x_i)).   (4)
The Empirical Risk Minimization (ERM) induction principle posits that, as the number of training samples m gets larger,
  R̂_m(θ) → R(θ) as m → ∞.
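A toy Monte Carlo illustration of the ERM statement (a hedged sketch with an assumed data-generating process, not an experiment from the thesis): for a fixed predictor, the empirical risk computed on increasingly large samples approaches the true risk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: y = x + Gaussian noise, fixed predictor f(x) = x.
# The true risk of f under squared loss is E[(y - f(x))^2] = noise variance = 0.25.
def empirical_risk(m):
    x = rng.uniform(-1, 1, size=m)
    y = x + rng.normal(0, 0.5, size=m)
    return np.mean((y - x) ** 2)

for m in (10, 100, 10_000, 1_000_000):
    print(f"m = {m:>9}: empirical risk = {empirical_risk(m):.4f} (true risk = 0.2500)")
```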
17. Learning Background
Generalization Error bound
The non-convexity of the loss objective makes deep learning a Hadamard ill-posed problem. From a statistical learning theory standpoint, these networks have a generalization error bound GE(θ) defined as
  GE(θ) = |R(θ) − R̂_m(θ)|.
18. Learning Background
Learning Representations and Task
Given K layers, an input vector x ∈ R^d, a layer index k = 1, . . . , K, and a non-linear activation function φ(·), the transformation at layer k is
  x^k = φ_k(x^{k−1} W^k),
applied recursively from the input x^0 = x. Generally, a deep neural network is the composition
  Φ(x; W^1, . . . , W^K) = φ_K(φ_{K−1}(· · · φ_2(φ_1(x W^1) W^2) · · · W^{K−1}) W^K).
The activation φ(·) can be, for example,
  φ(x) = tanh(x),
  φ(x) = max{0, x},
  φ(x) = 1 / (1 + e^{−x}),
and many more.
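The composition above can be written as a loop over layers. A minimal NumPy sketch, where the layer sizes, weights and choice of activations are arbitrary placeholders rather than the thesis architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, weights, activations):
    """Compute Phi(x; W^1, ..., W^K) = phi_K(... phi_1(x W^1) ... W^K)."""
    out = x
    for W, phi in zip(weights, activations):
        out = phi(out @ W)          # x^k = phi_k(x^{k-1} W^k)
    return out

# Example: d = 8 input features, two hidden layers, 4 outputs.
weights = [rng.normal(size=(8, 16)), rng.normal(size=(16, 16)), rng.normal(size=(16, 4))]
activations = [np.tanh, relu, sigmoid]

x = rng.normal(size=(1, 8))                       # one input vector x in R^d
print(forward(x, weights, activations).shape)     # -> (1, 4)
```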
19. Learning Background
Learning Representations and Task
Given {(x_i, y_i)}_{i=1}^{N}, with x_i ∈ R^d and y_i ∈ {0, 1} for classification or y_i ∈ R for regression, the weights are learned by solving
  Φ*(W) = argmin_{{W^k}_{k=1}^{K}} L(Y, Φ(x; W^1, . . . , W^K)) + λ Θ(W^1, . . . , W^K),
where λ > 0 and
  Θ(W) = Σ_{k=1}^{K} ||W^k||².
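In a framework such as Keras, the penalty λ Θ(W) = λ Σ_k ||W^k||² corresponds to an L2 kernel regularizer applied to each layer. A hedged sketch with placeholder layer sizes and λ, not the thesis configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

lam = 1e-4   # lambda > 0, placeholder value

# Small fully connected network with the L2 penalty Theta(W) = sum_k ||W^k||^2.
model = tf.keras.Sequential([
    layers.Input(shape=(8,)),
    layers.Dense(16, activation="relu", kernel_regularizer=regularizers.l2(lam)),
    layers.Dense(16, activation="tanh", kernel_regularizer=regularizers.l2(lam)),
    layers.Dense(4, activation="softmax", kernel_regularizer=regularizers.l2(lam)),
])

# The compiled loss is L(Y, Phi(x; W)) + lam * sum_k ||W^k||^2,
# minimized over the weights {W^k} by the optimizer.
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
```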
21. Transfer Learning
Formal Definition (Domain)
D := {X, P(X)}, where X = {x_1, x_2, x_3, . . . , x_n}, ∀ x_i ∈ X.
Formal Definition (Task)
For a domain D, a task is defined as T := {Y, P(Y|X)}, where Y = {y_1, y_2, y_3, . . . , y_n}, ∀ y_i ∈ Y.
23. Transfer Learning
The goal of Transfer Learning
Given f : X → Y, where f ∈ H, the knowledge extracted from the source domain is
  X̃ = argmin_{f_θ ∈ H} {L(f(X_{S_i}) ≠ Y_{S_i})}.
With the target-domain risk
  R_{D_T} := P(η(X_T) ≠ y_T | X̃),
the goal is to find
  f* = argmin_{f ∈ H} {R_{D_T}(f_θ(X_T), Y_T, X̃)}.
24. Transfer Learning
Figure 5: Standard Convolutional Neural Network architecture: an input image passes through a stack of convolutional layers (Conv-1, Conv-2, . . . , Conv-n) followed by fully connected layers (input layer ∈ R¹², hidden layer ∈ R¹², output layer ∈ R⁴) that produce the output.
Convolution Operation for 2D
$$s(i, j) = (K * I)(i, j) = \sum_{m}\sum_{n} I(i + m, j + n)\, K(m, n),$$
Convolution Dimension
$$O = \frac{W - K + 2P}{S} + 1,$$
where $W$ is the input size, $K$ the kernel size, $P$ the padding and $S$ the stride.
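A quick check of the output-dimension formula; the input sizes, kernel sizes, padding and strides below are examples only.

```python
def conv_output_size(W, K, P, S):
    """O = (W - K + 2P) / S + 1 along one spatial dimension."""
    return (W - K + 2 * P) // S + 1

# 224x224 input, 3x3 kernel, padding 1, stride 1 -> spatial size preserved
print(conv_output_size(224, 3, 1, 1))   # 224
# 224x224 input, 7x7 kernel, padding 3, stride 2 -> halved resolution
print(conv_output_size(224, 7, 3, 2))   # 112
```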
Figure 7: Schematic of the proposed system model: input chest X-rays undergo data augmentation (zooming, flipping, rotation, etc.), pass through the CNN feature-extraction layers of ResNet50, and the newly trained dense layers output one of four classes: COVID-19, Lung Opacity, Normal (Healthy) or Viral Pneumonia.
Figure 8: Deep transfer learning stages (same pipeline as Figure 7: augmentation, ResNet50 feature extraction, and training of the dense classification layers).
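The pipeline of Figures 7 and 8 can be sketched in Keras roughly as follows, assuming TensorFlow 2.x: augmented inputs feed the frozen, ImageNet-pre-trained ResNet50 feature-extraction layers, followed by newly trained dense layers for the four classes. The augmentation ranges, the 256-unit dense layer, the dropout rate and the optimizer are illustrative choices, not the exact configuration used in the thesis.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # COVID-19, Lung Opacity, Normal, Viral Pneumonia

# Data augmentation: zooming / flipping / rotation, as in Figure 7
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.05),
    layers.RandomZoom(0.1),
])

# Frozen ImageNet-pre-trained ResNet50 used as a feature extractor
base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

inputs = layers.Input(shape=(224, 224, 3))
x = augment(inputs)
x = tf.keras.applications.resnet50.preprocess_input(x)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dense(256, activation="relu")(x)   # newly trained dense layers
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Fine-tuning, the later stage suggested by Figure 8 and by the comparison in Figures 21 and 22, would then unfreeze some of the top ResNet50 blocks and continue training with a smaller learning rate.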
Dataset
Figure 9: Distribution of the X-ray images per class: Normal 48.2%, Lung Opacity 28.4%, COVID 17.1% and Viral Pneumonia 6.4%, out of 21,165 images in total.
Figure 10: Density plots of the minimum (a), mean (b) and maximum (c) RGB colour intensity per image, shown separately for the four X-ray image classes (Lung Opacity, Viral Pneumonia, Normal, COVID).
Figure 11: X-ray image format where the upper right zoomed illustration
indicates the RGB color channels.
Formal Definition
The per-image colour mean and standard deviation are
$$\bar{x} = \frac{1}{I_c I_h I_w} \sum_{i}^{I_c} \sum_{j}^{I_h} \sum_{k}^{I_w} x_{ijk}, \quad (8)$$
where $I_c$ is the number of colour channels, $I_h$ is the height of the image and $I_w$ is the width of the image, and
$$\sigma = \sqrt{\frac{1}{I_c I_h I_w} \sum_{i}^{I_c} \sum_{j}^{I_h} \sum_{k}^{I_w} \big(x_{ijk} - \bar{x}\big)^2}. \quad (9)$$
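Equations (8) and (9) amount to pooling all channels and pixels of an image; a short NumPy sketch, with a randomly generated stand-in image, is:

```python
import numpy as np

def image_mean_std(img):
    """Per-image colour statistics over all channels and pixels.

    img has shape (I_c, I_h, I_w); eq. (8) gives the mean,
    eq. (9) the standard deviation around that mean.
    """
    mean = img.sum() / img.size                            # eq. (8)
    std = np.sqrt(((img - mean) ** 2).sum() / img.size)    # eq. (9)
    return mean, std

rng = np.random.default_rng(0)
fake_xray = rng.integers(0, 256, size=(3, 299, 299)).astype(float)
print(image_mean_std(fake_xray))   # same as (fake_xray.mean(), fake_xray.std())
```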
Figure 12: A comparison of sample images from each class (COVID, Lung Opacity, Normal, Viral Pneumonia) plotted with the three colour channels (left) and a single channel (right).
Figure 13: A side-by-side comparison of the dataset clusters using the image channel colour mean and standard deviation of each sample: (a) the full dataset, (b) a 10% subset of the data, coloured by class (Lung Opacity, Viral Pneumonia, Normal, COVID).
Figure 14: Per-class scatter of image colour mean against standard deviation (panels, left to right: Lung Opacity, Viral Pneumonia, Normal, COVID). Normal (healthy) and Lung Opacity images show a similar cluster formation and pixel intensity distribution.
Simulation Environment
• NVIDIA K80, T4, P4 and P100 Graphics Processing Units (GPUs)
• Keras API (TensorFlow backend)
• Google Colaboratory (Colab), Python 3.8.x kernel
Results
Metrics
$$\text{Accuracy} = \frac{TP + TN}{TP + FP + TN + FN}$$
$$\text{Sensitivity (recall, } r/sn\text{)} = \frac{TP}{TP + FN}$$
$$\text{Specificity (} sp\text{)} = \frac{TN}{TN + FP}$$
$$\text{Precision (} p\text{)} = \frac{TP}{TP + FP}$$
$$\text{F1 Score} = \frac{2}{\frac{1}{r} + \frac{1}{p}} = \frac{2rp}{r + p}$$
$$\text{FPR} = \frac{FP}{FP + TN} = 1 - sp$$
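A small helper that evaluates all of the metrics above from binary confusion-matrix counts; the counts used in the example call are made up, not results from the thesis.

```python
def classification_metrics(tp, fp, tn, fn):
    """Metrics defined above, computed from binary confusion-matrix counts."""
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)              # recall r / sn
    specificity = tn / (tn + fp)              # sp
    precision   = tp / (tp + fp)              # p
    f1          = 2 * sensitivity * precision / (sensitivity + precision)
    fpr         = fp / (fp + tn)              # = 1 - specificity
    return dict(accuracy=accuracy, sensitivity=sensitivity,
                specificity=specificity, precision=precision,
                f1=f1, fpr=fpr)

# Illustrative counts only
print(classification_metrics(tp=480, fp=20, tn=1490, fn=25))
```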
Figure 15: Training and validation loss (a) and accuracy (b) of the VGG-19 model trained for 100 epochs.
Model | Correct classification | Incorrect classification
VGG-19 | 1988 | 127
DenseNet-121 | 1972 | 143
ResNet-50 | 1985 | 130
Table 1: A summary of the total images classified correctly and incorrectly by VGG-19, DenseNet-121 and ResNet-50 on a test set of 2,115 images. Amongst the three models, VGG-19 achieved the highest accuracy of CXR image classification, with only 127 misclassifications.
Reference | Dataset description | Method | Accuracy
[2] | 5,184 chest X-ray images (184 COVID-19 and 5,000 normal cases) | ResNet18 + ResNet50 + SqueezeNet + DenseNet-121 | 98%
[3] | 18,567 X-ray images (COVID-19 = 140, normal = 8,851, pneumonia = 9,576) | ResNet-101 + ResNet-152 | 96.1%
[4] | 320 images (COVID-19 = 160, normal = 160) | Transfer learning with CNN networks (InceptionV3 and ResNet50) | 99.01%
[5] | 6,926 images (COVID-19 = 2,589, normal = 4,337) | CNN | 94.43%
[6] | 5,090 chest X-ray images (COVID-19 = 1,979, normal = 3,111) | Fusion features (CNN + HOG) + pre-trained VGG19 | 99.43%
Proposed | COVID-19 = 3,616, Normal = 10,192, Lung Opacity = 6,012, Viral Pneumonia = 1,345 images | ResNet-50V2 / DenseNet-121 / VGG-19 | 93.80% / 93.24% / 94.0%
Table 2: Comparative survey of literature results.
Figure 21: Activation map of ResNet-50 layer 48 before fine-tuning.
Figure 22: Activation map of ResNet-50 layer 48 after fine-tuning.
Conclusion
• RT-PCR testing is error-prone and comparatively less accurate.
• COVID-19 detection from chest X-ray images is a promising diagnostic method.
• It is a fast, accurate and feasible solution, especially for asymptomatic carriers.
References
[1] Y. S. Abu-Mostafa, M. Magdon-Ismail, and H.-T. Lin, Learning from data. New York, NY, USA: AMLBook, 2012, vol. 4.
[2] S. Minaee, R. Kafieh, M. Sonka, S. Yazdani, and G. J. Soufi,
“Deep-covid: Predicting covid-19 from chest x-ray images using deep
transfer learning,” Medical image analysis, vol. 65, p. 101 794, 2020.
[3] N. Wang, H. Liu, and C. Xu, “Deep learning for the detection of
covid-19 using transfer learning and model integration,” in 2020
IEEE 10th International Conference on Electronics Information and
Emergency Communication (ICEIEC), IEEE, 2020, pp. 281–284.
[4] H. Benbrahim, H. Hachimi, and A. Amine, “Deep transfer learning
with apache spark to detect covid-19 in chest x-ray images,”
Romanian Journal of Information Science and Technology, vol. 23,
S117–S129, 2020.
[5] L. Duran-Lopez, J. P. Dominguez-Morales, J. Corral-Jaime,
S. Vicente-Diaz, and A. Linares-Barranco, “Covid-xnet: A custom
deep learning system to diagnose and locate covid-19 in chest x-ray
images,” Applied Sciences, vol. 10, no. 16, p. 5683, 2020.
[6] M. Ahsan, M. Based, J. Haider, M. Kowalski, et al., “Covid-19
detection from chest x-ray images using feature fusion and deep
learning,” Sensors, vol. 21, no. 4, p. 1480, 2021.