In this project, we tackle two tasks on medical data related to COVID-19. In the first part, we train a neural network to recognize COVID-positive individuals from recordings of coughs. In the second part, we implement a neural network trained to recognize COVID-19 from chest X-rays. Various techniques are applied to improve the classifiers' performance, in particular GANs that generate synthetic X-ray images.
The document discusses efficient codebook design for image compression using vector quantization. It introduces data compression techniques, including lossless compression methods like dictionary coders and entropy coding, as well as lossy compression methods like scalar and vector quantization. Vector quantization maps vectors to codewords in a codebook to compress data. The LBG algorithm is described for generating an optimal codebook by iteratively clustering vectors and updating codebook centroids.
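The LBG loop described above (assign each training vector to its nearest codeword, then move each codeword to its cluster centroid) can be sketched in a few lines of Python. This is an illustrative toy, not the slides' implementation, and the function names are my own:

```python
import random

def lbg_codebook(vectors, size, iters=20, seed=0):
    """Generate a codebook of `size` codewords from training vectors:
    repeatedly cluster vectors around codewords, then update each
    codeword to the centroid of its cluster (LBG iteration)."""
    rng = random.Random(seed)
    codebook = rng.sample(vectors, size)
    for _ in range(iters):
        clusters = [[] for _ in codebook]
        for v in vectors:
            # Nearest codeword by squared Euclidean distance.
            i = min(range(len(codebook)),
                    key=lambda k: sum((a - b) ** 2 for a, b in zip(v, codebook[k])))
            clusters[i].append(v)
        # Move each codeword to its cluster centroid (skip empty clusters).
        for i, c in enumerate(clusters):
            if c:
                codebook[i] = tuple(sum(col) / len(c) for col in zip(*c))
    return codebook

def quantize(v, codebook):
    """Compression step: map a vector to the index of its nearest codeword."""
    return min(range(len(codebook)),
               key=lambda k: sum((a - b) ** 2 for a, b in zip(v, codebook[k])))
```

Vectors from two well-separated clusters should end up mapped to different codeword indices.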
This document discusses data types and formats used in Hadoop MapReduce. It covers basic data types like IntWritable and Text that support serialization and comparability. It also describes common file formats like XML, JSON, SequenceFiles, Avro, Parquet, and how to implement custom formats like CSV. Input/output classes are discussed along with how different formats can be used in MapReduce jobs.
Finite state automata (deterministic and nondeterministic finite automata) provide decisions regarding the acceptance and rejection of a string while transducers provide some output for a given input. Thus, the two machines are quite useful in language processing tasks.
Digital watermarking allows users to embed special patterns or data into digital content like images, audio, and video without changing the perceptual quality. Watermarking helps protect copyright ownership by embedding information directly into the media itself through small changes to the content data. Watermarks can be invisible, inseparable from the content after processing, and do not change the file size. Watermarks are classified based on human perception (visible or invisible), robustness (fragile, semi-fragile, or robust), and the type of document (text, image, audio, or video). Frequency domain techniques like discrete cosine transformation are commonly used to embed watermarks in images and videos.
Dynamic programming is used to solve optimization problems by combining solutions to overlapping subproblems. It works by breaking down problems into subproblems, solving each subproblem only once, and storing the solutions in a table to avoid recomputing them. There are two key properties for applying dynamic programming: overlapping subproblems and optimal substructure. Some applications of dynamic programming include finding shortest paths, matrix chain multiplication, the traveling salesperson problem, and knapsack problems.
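The table-filling idea can be demonstrated on the 0/1 knapsack problem mentioned above; a minimal Python sketch (not from the document):

```python
def knapsack(weights, values, capacity):
    """0/1 knapsack via dynamic programming: best[c] holds the best
    total value achievable with capacity c using the items seen so far."""
    best = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]
```

Each subproblem (best value at a given capacity) is solved once and reused, which is exactly the overlapping-subproblem property.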
The document summarizes Junho Cho's presentation on image translation using generative adversarial networks (GANs). It discusses several papers on this topic, including pix2pix, which uses conditional GANs to perform supervised image-to-image translation on paired datasets; Domain Transfer Network (DTN), which uses an unsupervised method to perform cross-domain image generation; and CycleGAN and DiscoGAN, which can perform unpaired image-to-image translation using cycle-consistent adversarial networks. The presentation provides an overview of each method and shows examples of their applications to tasks such as semantic segmentation, style transfer, and domain adaptation.
An image histogram represents the distribution of pixel intensities in a digital image. It plots the number of pixels for each tonal value. Histograms can reveal if an image is under-exposed or over-exposed based on where most pixel values are concentrated. Histogram equalization improves contrast by spreading out pixel values across intensity levels. Local histogram equalization applies this within neighborhoods to enhance detail while preserving edges.
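The equalization mapping (spread the cumulative distribution of pixel intensities over the full range) can be sketched in pure Python on a flat list of grayscale values; an illustrative toy, not a library implementation:

```python
def equalize(pixels, levels=256):
    """Histogram-equalize a flat list of integer grayscale values
    using the standard CDF-based remapping."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function of pixel intensities.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    # Remap so the occupied intensity range spreads over [0, levels-1].
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1)) for c in cdf]
    return [lut[p] for p in pixels]
```

A narrow band of input values gets stretched to cover the whole intensity range, which is what raises the contrast.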
This document discusses morphological image processing using mathematical morphology. It begins with an introduction to morphology in biology and its application to image analysis using set theory. The key concepts of dilation, erosion, opening and closing are explained. Dilation expands object boundaries while erosion shrinks them. Opening performs erosion followed by dilation to smooth contours, and closing performs dilation followed by erosion to fill small holes. Structuring elements determine the shape and size of operations. Morphological operations are useful for tasks like boundary extraction, noise removal, and feature detection.
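On binary images represented as sets of foreground pixel coordinates, the four operations reduce to a few set comprehensions. An illustrative sketch (the coordinate-set representation is my own choice, not the slides'):

```python
def dilate(img, se):
    """Binary dilation: img and se are sets of (row, col) foreground pixels;
    se offsets are relative to its origin (0, 0). Expands boundaries."""
    return {(r + dr, c + dc) for (r, c) in img for (dr, dc) in se}

def erode(img, se):
    """Binary erosion: keep a pixel only if the structuring element
    fits entirely inside the foreground when centred there. Shrinks boundaries."""
    return {(r, c) for (r, c) in img
            if all((r + dr, c + dc) in img for (dr, dc) in se)}

def opening(img, se):
    """Erosion then dilation: smooths contours, removes small specks."""
    return dilate(erode(img, se), se)

def closing(img, se):
    """Dilation then erosion: fills small holes and gaps."""
    return erode(dilate(img, se), se)
```

With a cross-shaped structuring element, eroding a 3x3 block leaves only its centre pixel, and closing the block gives the block back unchanged.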
The document describes a simple code generator that generates target code for a sequence of three-address statements. It tracks register availability using register descriptors and variable locations using address descriptors. For each statement, it determines the locations of operands, copies them to a register if needed, performs the operation, updates the register and address descriptors, and stores values before procedure calls or basic block boundaries. It uses a getreg function to determine register allocation. Conditional statements are handled using compare and jump instructions and condition codes.
This document discusses spatial filtering methods for image processing. It defines spatial filtering as applying an operation within a neighborhood of pixels. Filters are classified as low-pass, high-pass, band-pass or band-reject depending on which frequencies they preserve or reject. Common linear spatial filtering methods are correlation and convolution. Smoothing filters like averaging and Gaussian blur reduce noise, while sharpening filters like unsharp masking and derivatives emphasize edges to enhance details.
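Correlation with a small kernel, the workhorse of linear spatial filtering, can be sketched over plain nested lists (valid region only, no border padding; an illustrative toy):

```python
def correlate(img, kernel):
    """Apply a spatial filter by correlation: slide the kernel over the
    image and take the weighted sum in each neighbourhood."""
    kh, kw = len(kernel), len(kernel[0])
    h, w = len(img), len(img[0])
    out = []
    for r in range(h - kh + 1):
        row = []
        for c in range(w - kw + 1):
            acc = sum(kernel[i][j] * img[r + i][c + j]
                      for i in range(kh) for j in range(kw))
            row.append(acc)
        out.append(row)
    return out

# A 3x3 averaging (low-pass) kernel: smooths by replacing each pixel
# with the mean of its neighbourhood.
box3 = [[1 / 9] * 3 for _ in range(3)]
```

A sharpening filter would be obtained the same way, just with a kernel that emphasises differences (e.g. a Laplacian) instead of averages.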
Pushdown automata are computational models that extend finite automata with a stack, allowing them to recognize context-free languages. They consist of a finite state control unit, an input tape, and an infinite stack that supports push and pop operations, making them more powerful than finite state machines. Pushdown automata provide a way to implement context-free grammars similarly to how finite automata are used for regular grammars.
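A one-state pushdown automaton for the classic context-free language of balanced parentheses shows the role of the stack; an illustrative sketch:

```python
def pda_accepts(s):
    """Pushdown automaton for balanced parentheses: push on '(',
    pop on ')', accept iff the input ends with an empty stack."""
    stack = []
    for ch in s:
        if ch == '(':
            stack.append(ch)          # push operation
        elif ch == ')':
            if not stack:
                return False          # pop from empty stack: reject
            stack.pop()               # pop operation
        else:
            return False              # symbol outside the input alphabet
    return not stack                  # accept iff stack is empty
```

No finite automaton can recognise this language, since matching unboundedly deep nesting requires the unbounded stack memory.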
The document discusses the Laplacian of Gaussian (LoG) filter and how it can be used for edge detection and blob detection in images. The LoG filter applies a Gaussian blur to smooth the image, then takes the Laplacian to find zero-crossings, which indicate edges. It can also detect blobs by finding local extrema (maxima and minima) in the LoG filtered image. The scale of blobs detected depends on the sigma value used for the Gaussian blur. So the LoG filter acts as a band-pass filter, suppressing high and low frequencies to detect objects of a particular scale in the image.
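The zero-crossing behaviour is easiest to see in one dimension: sample a LoG kernel, filter a step edge, and look for sign changes in the response. A hedged sketch (the sampling radius and zero-sum normalisation choices below are mine):

```python
import math

def log_kernel_1d(sigma, radius=None):
    """Sampled 1-D Laplacian-of-Gaussian (second derivative of a Gaussian),
    normalised to sum to zero so flat regions give exactly zero response."""
    radius = radius if radius is not None else int(3 * sigma)
    xs = range(-radius, radius + 1)
    k = [(x * x / sigma ** 4 - 1 / sigma ** 2)
         * math.exp(-x * x / (2 * sigma ** 2)) for x in xs]
    mean = sum(k) / len(k)
    return [v - mean for v in k]

def zero_crossings(signal, kernel):
    """Filter the signal (valid region only) and report indices where the
    response changes sign: the candidate edge locations."""
    k = len(kernel)
    resp = [sum(kernel[j] * signal[i + j] for j in range(k))
            for i in range(len(signal) - k + 1)]
    return [i for i in range(1, len(resp)) if resp[i - 1] * resp[i] < 0]
```

On a step edge the response crosses zero once, right at the step; increasing sigma would smooth over finer transitions, which is the scale-selection behaviour described above.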
The document discusses the structure of a Java program. A Java program contains classes, with one class containing a main method that acts as the starting point. Classes contain data members and methods that operate on the data. Methods contain declarations and executable statements. The structure also includes sections for documentation, package statements, import statements, interface statements, and class definitions, with the main method class being essential.
End-to-End Semi-Supervised Object Detection with Soft Teacher ver. 1.0, by taeseon ryu
[2021 ICCV SOTA Semi Supervised]
Slides: https://www.slideshare.net/taeseonryu/explaining-in-style-training-a-gan-to-explain-a-classifier-in-style-space
Papers presented so far: https://github.com/Lilcob/-DL_PaperReadingMeeting
Hello, this is the Deep Learning Paper Reading Group. Today's review video covers the paper 'End-to-End Semi-Supervised Object Detection with Soft Teacher', presented at ICCV 2021.
This paper achieves the current state of the art for the semi-supervised learning approach.
Semi-supervised learning is a methodology that trains on a mix of labeled and unlabeled images. As anyone who has worked on an object detection project will agree, labeling is enormously costly, and semi-supervised methods are widely adopted to address this problem. The review covers everything from the concepts of semi- and weak supervision to the details of the paper.
Byunghyun Kim of the image processing team kindly provided the detailed review.
Lempel-Ziv-Welch (LZW) is a universal lossless data compression algorithm that replaces strings of characters with single codes, achieving smaller file sizes and faster transmission. LZW is commonly used to compress files like TIFF, GIF, PDF, and in file compression formats like Unix Compress and gzip. It works by building a table of strings and assigning a code whenever it encounters a new string, allowing for efficient encoding of repeated patterns in data.
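The string-table mechanism can be sketched compactly: emit the code for the longest known prefix, then add that prefix plus one more character to the table. This toy version works over an initial 256-symbol table and is an illustration, not any particular tool's implementation:

```python
def lzw_compress(data):
    """LZW encoding: grow a dictionary of seen strings, emitting one
    integer code per longest-known prefix of the remaining input."""
    table = {chr(i): i for i in range(256)}
    current, out = "", []
    for ch in data:
        if current + ch in table:
            current += ch             # extend the current match
        else:
            out.append(table[current])
            table[current + ch] = len(table)  # new dictionary entry
            current = ch
    if current:
        out.append(table[current])
    return out

def lzw_decompress(codes):
    """LZW decoding: rebuild the same dictionary on the fly."""
    table = {i: chr(i) for i in range(256)}
    prev = table[codes[0]]
    out = [prev]
    for code in codes[1:]:
        # Special case: the code may refer to the entry being built.
        entry = table[code] if code in table else prev + prev[0]
        out.append(entry)
        table[len(table)] = prev + entry[0]
        prev = entry
    return "".join(out)
```

On input with repeated patterns the code stream is shorter than the input, and decompression recovers the original exactly since the decoder reconstructs the identical table.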
Edge detection aims to identify points where image brightness changes sharply. It is a fundamental step in image processing and computer vision. Edges define boundaries between regions and help with segmentation and object recognition. The Laplacian of Gaussian (LoG) operator is commonly used for edge detection, involving filtering an image with a Gaussian kernel followed by applying the Laplacian operator to find zero crossings. The standard deviation of the Gaussian determines which scales of detail are detected, with higher values detecting only stronger edges.
An intro to selective search for object proposals, with deep dives into state-of-the-art object detection models (the R-CNN family and RetinaNet), the mAP (mean average precision) concept for evaluating models, and how anchor boxes help the model learn where to draw bounding boxes.
This slide deck is used as an introduction to Relational Algebra and its relation to the MapReduce programming model, as part of the Distributed Systems and Cloud Computing course I hold at Eurecom.
Course website:
http://michiard.github.io/DISC-CLOUD-COURSE/
Sources available here:
https://github.com/michiard/DISC-CLOUD-COURSE
Realistic image synthesis of COVID-19 chest X-rays using depthwise boundary ..., by IJECEIAES
Researchers in various related fields are studying how to prevent and control the spread of the coronavirus disease (COVID-19). The spread of COVID-19 is increasing exponentially and infecting humans massively. Preliminary detection can be performed by observing abnormal conditions in the airways through which the virus enters the patient's respiratory tract, which can be represented using computed tomography (CT) scans and chest X-ray (CXR) imaging. Particular deep learning approaches, such as the convolutional neural network (CNN) and the deep convolutional neural network (DCNN), have been developed to classify COVID-19 CT or CXR images. However, openly accessible COVID-19 CXR datasets are scarce, and the performance of deep learning methods can be improved by augmenting the amount of data. Therefore, the COVID-19 CXR dataset can be augmented by generating synthetic images. This study discusses a fast and realistic image synthesis approach, the depthwise boundary equilibrium generative adversarial network (DepthwiseBEGAN). DepthwiseBEGAN reduced memory load during training by 70.11% compared to the conventional BEGAN. The synthetic images were inspected by measuring the Fréchet inception distance (FID), with a real-to-real score of 4.3866 and a real-to-fake score of 4.4674. Moreover, the generated DepthwiseBEGAN synthetic images improve the accuracy of conventional CNN models by 22.59%.
CXR-ACGAN: Auxiliary Classifier GAN for Conditional Generation of Chest X-Ray..., by Giorgio Carbone
CXR-ACGAN: Auxiliary Classifier GAN (AC-GAN) for Chest X-Ray (CXR) Images Generation (Pneumonia, COVID-19 and healthy patients) for the purpose of data augmentation. Implemented in TensorFlow, trained on COVIDx CXR-3 dataset.
Covid 19 detection using Transfer learning.pptx, by priyaghate2
1) The document proposes an automatic system to classify chest X-ray images as coming from COVID-19 patients or healthy patients using transfer learning with convolutional neural networks.
2) The methodology involves using two datasets of chest X-ray images to extract features using CNNs like MobileNet and DenseNet. These features are then classified using models like SVM, MLP, etc.
3) The performance is evaluated based on metrics like accuracy, precision, sensitivity, and F1 score. Preliminary results show promising performance in classifying images as COVID-19 or healthy.
This document discusses detecting pneumonia in chest X-rays using deep learning techniques. It begins by stating that pneumonia is a major cause of death worldwide, especially in children. The objective is to develop a deep learning framework to automatically diagnose pneumonia from chest X-rays to reduce human error. Various deep learning models like CNN, VGG-16 and MobileNetV2 are implemented and compared on a public dataset. VGG-16 achieved the highest accuracy of 94.3% among the models for detecting pneumonia. The document concludes that pneumonia can be identified and classified using deep learning models with VGG-16 performing best.
Digital radiology involves digitally capturing and processing radiographic images. It has advantages over conventional film radiography like digital images can be processed, transmitted electronically, and archived. There are different types of digital detectors like computed radiography plates and flat panel detectors using indirect or direct conversion. Digital images use a pixel matrix and discrete grey levels rather than continuous analogue values. PACS and RIS systems are also part of the digital radiology department for image storage, retrieval and management of patient information.
An optimized deep learning architecture for the diagnosis of covid 19 disease..., by Aboul Ella Hassanien
This document presents an optimized deep learning architecture for diagnosing COVID-19 based on chest X-ray images. A binary COVID-19 dataset containing 99 positive and 207 negative X-ray images was created. A hybrid CNN architecture using DenseNet121 and optimized by the Gravitational Search Algorithm (GSA) was proposed. The GSA was used to determine optimal hyperparameters, achieving 98% accuracy. This approach provides an effective way to diagnose COVID-19 using X-rays alone or in combination with other tests.
Endobronchial Ultrasound Image Diagnosis Using Convolutional Neural Network (..., by Yan-Wei Lee
(1) The document describes a study that used a convolutional neural network to diagnose endobronchial ultrasound images. (2) The researchers fine-tuned a pre-trained CaffeNet model on 164 ultrasound images and extracted features from the fully connected layer to classify images with an SVM. (3) Their proposed method achieved 85.4% accuracy and outperformed conventional handcrafted feature extraction and classification.
The document discusses using a convolutional neural network (CNN) to classify chest x-ray images as having a diagnosable condition or not. It describes preprocessing the NIH chest x-ray dataset, performing a t-test to determine whether mean grayscale values differ between normal and abnormal images, developing and training a CNN model, and evaluating the model's performance. The CNN model achieved 63.13% validation accuracy, outperforming pre-trained ResNet50 and MobileNet models fine-tuned on the task. Future work involves further tuning and evaluating the CNN for classifying specific disease types.
This document presents a convolutional neural network model to detect pneumonia from chest x-ray images. The model is trained from scratch on a dataset of over 5,800 chest x-ray images categorized into pneumonia and normal images. The model uses preprocessing like resizing and normalization, data augmentation, and a custom sequential CNN model with convolutional and pooling layers to extract features and classify images. Evaluation metrics like precision, recall, accuracy and F1 score are used to analyze the trained model's performance at detecting pneumonia from chest x-rays. The proposed system aims to help diagnose pneumonia early and assist medical professionals, especially in remote areas.
The document describes using a convolutional neural network with the VGG16 architecture to classify lung cancer CT scan images into 4 classes: large cell carcinoma, squamous cell carcinoma, adenocarcinoma, and normal lungs. The model is trained on a dataset of 1000 CT scan images from Kaggle and achieves an AUC of 0.94, indicating high accuracy in identifying different types of lung cancer. This CNN model with pre-trained VGG16 weights provides an effective approach for classifying lung cancer images and could help enable early diagnosis and treatment of lung cancer.
Computer‐Aided Diagnosis of Breast Cancer Using Ensemble Convolutional Neural...Yan-Wei Lee
This document presents research on using convolutional neural networks (CNNs) for computer-aided diagnosis of breast cancer. The researchers trained multiple CNN models - including VGGNet, ResNet and DenseNet - on ultrasound images of breast tumors segmented from the original images. They evaluated the models' performance on three datasets and found that ensembling the predictions from several CNNs using weighted averaging achieved better diagnostic accuracy than using a single CNN model alone. The researchers conclude that tumor shape provides important diagnostic information and that ensemble learning is effective for computer-aided breast cancer diagnosis from ultrasound images.
Employing deep learning for lung sounds classificationIJECEIAES
This document summarizes research on classifying lung sounds using deep learning models. It presents two models: 1) A convolutional neural network (CNN) model developed from scratch to classify lung sound spectrogram images into six classes with an accuracy of 91%. 2) A transfer learning approach using the pre-trained AlexNet network on the same dataset, which achieved a higher accuracy of 94%. A comparison to prior research achieving 80% accuracy shows that the transfer learning approach was more effective for lung sound classification. The document concludes that transfer learning is an effective method for classification when datasets are small.
The document provides an overview of computed tomography (CT) image reconstruction. It explains that after transmission measurements are taken by detectors, the data is sent to a computer for processing using reconstruction algorithms. The most widely used algorithm is filtered backprojection, which builds up the CT image by essentially reversing the data acquisition steps and smearing attenuation values along ray paths in the image. This reinforces areas of similar attenuation, reconstructing the image matrix. The document also discusses CT number scales, windowing, and other techniques for manipulating and displaying the reconstructed CT image.
Enhancing Pneumonia Detection: A Comparative Study of CNN, DenseNet201, and V...IRJET Journal
This document presents a comparative study evaluating the performance of three deep learning models - a custom CNN, DenseNet201, and VGG16 - in classifying chest X-ray images to detect pneumonia. The CNN model achieved the best performance with 80% accuracy, comparable to human radiologists. A chest X-ray dataset was used to train and evaluate the models. VGG16 consistently outperformed the other models, though all models showed potential for improving pneumonia diagnosis through rapid and accurate analysis of medical images using deep learning techniques.
L3-Net Deep Audio Embeddings to Improve COVID-19 Detection from Smartphone DataMattia Campana
Presentation of the paper entitled "L3-Net Deep Audio Embeddings to Improve COVID-19 Detection from SMartphones Data" at SMARTCOMP 2022, Aalto University, Finland.
Lung Cancer Detection Using Convolutional Neural NetworkIRJET Journal
This document describes a study that uses a convolutional neural network (CNN) to classify lung cancer in CT scans. The CNN model is trained on a dataset of 1018 patient CT scans containing annotations of lung nodules as benign or malignant. The CNN architecture includes convolution layers to extract features, max pooling layers to reduce computations, dropout layers to prevent overfitting, and fully connected layers to classify scans. The model achieves a 65% accuracy on the training set at detecting cancer in new CT scans. The CNN is integrated into a web application to allow doctors to efficiently analyze scans for lung cancer.
The document discusses progressive decision trees, which aim to overcome some limitations of classical decision trees. Progressive decision trees break the classification problem into a sequence of simpler sub-problems using small decision trees. Three types of cascading progressive decision trees are described (Type A, B, C) which differ in how information is passed between trees. Experimental results on document layout recognition, hyperspectral imaging, brain tumour classification, and UCI datasets show that progressive decision trees can improve accuracy and reduce costs compared to single decision trees. Further research opportunities in progressive decision trees are also outlined.
Build applications with generative AI on Google CloudMárton Kodok
We will explore Vertex AI - Model Garden powered experiences, we are going to learn more about the integration of these generative AI APIs. We are going to see in action what the Gemini family of generative models are for developers to build and deploy AI-driven applications. Vertex AI includes a suite of foundation models, these are referred to as the PaLM and Gemini family of generative ai models, and they come in different versions. We are going to cover how to use via API to: - execute prompts in text and chat - cover multimodal use cases with image prompts. - finetune and distill to improve knowledge domains - run function calls with foundation models to optimize them for specific tasks. At the end of the session, developers will understand how to innovate with generative AI and develop apps using the generative ai industry trends.
Enhanced data collection methods can help uncover the true extent of child abuse and neglect. This includes Integrated Data Systems from various sources (e.g., schools, healthcare providers, social services) to identify patterns and potential cases of abuse and neglect.
We are pleased to share with you the latest VCOSA statistical report on the cotton and yarn industry for the month of March 2024.
Starting from January 2024, the full weekly and monthly reports will only be available for free to VCOSA members. To access the complete weekly report with figures, charts, and detailed analysis of the cotton fiber market in the past week, interested parties are kindly requested to contact VCOSA to subscribe to the newsletter.
We are pleased to share with you the latest VCOSA statistical report on the cotton and yarn industry for the month of May 2024.
Starting from January 2024, the full weekly and monthly reports will only be available for free to VCOSA members. To access the complete weekly report with figures, charts, and detailed analysis of the cotton fiber market in the past week, interested parties are kindly requested to contact VCOSA to subscribe to the newsletter.
Codeless Generative AI Pipelines
(GenAI with Milvus)
https://ml.dssconf.pl/user.html#!/lecture/DSSML24-041a/rate
Discover the potential of real-time streaming in the context of GenAI as we delve into the intricacies of Apache NiFi and its capabilities. Learn how this tool can significantly simplify the data engineering workflow for GenAI applications, allowing you to focus on the creative aspects rather than the technical complexities. I will guide you through practical examples and use cases, showing the impact of automation on prompt building. From data ingestion to transformation and delivery, witness how Apache NiFi streamlines the entire pipeline, ensuring a smooth and hassle-free experience.
Timothy Spann
https://www.youtube.com/@FLaNK-Stack
https://medium.com/@tspann
https://www.datainmotion.dev/
milvus, unstructured data, vector database, zilliz, cloud, vectors, python, deep learning, generative ai, genai, nifi, kafka, flink, streaming, iot, edge
Medical data management: COVID-19 detection using cough recordings, chest X-rays classification and generation
1. MEDICAL DATA MANAGEMENT: COVID-19 DETECTION USING COUGH RECORDINGS, CHEST X-RAYS CLASSIFICATION AND GENERATION
University of Milano-Bicocca
Master's Degree in Data Science
Digital Signal and Image Management
Academic Year 2022-2023
Authors:
Giorgio Carbone (student ID no. 811974)
Gianluca Cavallaro (student ID no. 826049)
Remo Marconzini (student ID no. 883256)
3. Dataset
Crowdsourced dataset
Recordings collected between April 1st, 2020 and December 1st, 2020
34,434 recordings and their metadata
• One .json file for each recording
• One .csv file containing all metadata
Most relevant attributes:
• uuid → name of the recording
• cough_detected → probability that the recording contains a cough sound
• status → self-reported health condition
Example record:
uuid                   00039425-7f3a-42aa-ac13-834aaa2b6b92
datetime               2020-04-13T21:30:59.801831+00:00
cough_detected         0.9754
age                    [0, …, 99, NaN]
gender                 [Male, Female, NaN]
respiratory_condition  [True, False, NaN]
fever_muscle_pain      [True, False, NaN]
status                 [Healthy, Symptomatic, COVID-19, NaN]
4. Data Cleaning
Removing rows with unknown status
Filtering for recordings with cough_detected > 0.8
• Value recommended by the dataset authors
Number of recordings after cleaning: 12,119
The dataset is imbalanced:
Class        N° recordings
Healthy      9,167
Symptomatic  2,339
COVID-19     613
Total        12,119
5. Preprocessing
Noise reduction
• Spectral gating using the noisereduce library
Silence removal
• To keep only the relevant audio patterns
• Silence longer than 1 s is removed
• 0.5 s of silence is kept at the beginning and at the end of the recording
Length standardization
• The audio features must have fixed dimensions
• Trade-off between information loss and the amount of sparse values
Duration  N° recordings
< 2 s     1,439
< 3 s     3,461
< 4 s     5,826
< 5 s     7,892
< 6 s     9,468
< 7 s     10,680
< 8 s     11,470
< 9 s     11,941
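The silence-removal rule above (cut silent stretches longer than 1 s, keep 0.5 s of padding around voiced regions) can be sketched with a simple frame-energy threshold. This is an illustrative numpy sketch, not the authors' exact implementation; the threshold and frame length are assumptions.

```python
import numpy as np

def remove_long_silence(x, sr, frame_s=0.05, thresh=0.01,
                        max_sil_s=1.0, keep_s=0.5):
    """Drop silent runs longer than max_sil_s, keeping keep_s of
    silence on each side of a voiced region (energy-threshold sketch
    of the slide's rule, not the project's exact code)."""
    n = int(frame_s * sr)
    frames = x[: len(x) // n * n].reshape(-1, n)
    silent = np.sqrt((frames ** 2).mean(axis=1)) < thresh
    max_run = int(round(max_sil_s / frame_s))
    pad = int(round(keep_s / frame_s))
    keep = np.ones(len(frames), dtype=bool)
    start = None
    for i, s in enumerate(np.append(silent, False)):  # sentinel ends last run
        if s and start is None:
            start = i
        elif not s and start is not None:
            if i - start > max_run:                   # silence > 1 s: cut it,
                keep[start + pad : i - pad] = False   # keep 0.5 s on each side
            start = None
    return frames[keep].reshape(-1)
```

For example, 1 s of tone, 3 s of silence, 1 s of tone collapses to 1 s + 1 s of tone with 0.5 s of silence kept on each side of the gap.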
8. Class imbalance problem
Binary classification problem
• COVID-19 Positive vs. COVID-19 Negative
• 613 recordings vs. 11,506 recordings
Data augmentation to deal with class imbalance
• Generation of synthetic audio tracks belonging to the minority class
Data augmentation on the raw signal:
• Time Stretch
• Pitch Shift
• Shift
• Gain
Class        N° recordings  Binary class  N° recordings
Healthy      9,167          Negative      11,506
Symptomatic  2,339
COVID-19     613            Positive      613
Total        12,119
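The four raw-signal transforms listed above can be sketched in plain numpy; libraries such as audiomentations or librosa would normally be used (and a pitch-preserving time stretch needs a phase vocoder, which this sketch does not implement). The parameter ranges are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gain(x, min_db=-6.0, max_db=6.0):
    # Random gain in decibels.
    g = 10 ** (rng.uniform(min_db, max_db) / 20)
    return x * g

def shift(x, max_frac=0.2):
    # Circularly shift the waveform by up to ±20% of its length.
    k = int(max_frac * len(x))
    return np.roll(x, rng.integers(-k, k + 1))

def time_stretch(x, min_rate=0.8, max_rate=1.25):
    # Resample the time axis (changes duration AND pitch; a real
    # pitch-preserving stretch would use librosa.effects.time_stretch).
    rate = rng.uniform(min_rate, max_rate)
    n_out = int(len(x) / rate)
    return np.interp(np.linspace(0, len(x) - 1, n_out),
                     np.arange(len(x)), x)
```

Applying a random chain of these to each minority-class recording yields the synthetic positive tracks used for rebalancing.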
10. Feature extraction
Cough sounds concentrate most of their energy at lower frequencies
MFCCs are therefore a suitable representation for cough recordings
• 15 MFCCs per frame
Audio samples have a duration of 6 seconds
• MFCC matrices of size 15×259
MFCC-∆ and MFCC-∆∆ were also considered
• Feature dimension: 3×15×259
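The 15×259 shape is consistent with 6 s of audio at 22,050 Hz and a hop length of 512 samples (both assumptions, since the slide does not state them): 1 + 132300 // 512 = 259 frames. A sketch of stacking the MFCCs with their deltas into the 3×15×259 tensor; the random matrix stands in for librosa.feature.mfcc, and the finite-difference deltas approximate librosa.feature.delta.

```python
import numpy as np

SR, HOP, N_MFCC = 22050, 512, 15           # assumed analysis parameters
n_frames = 1 + (6 * SR) // HOP             # 259 frames for 6 s of audio

# Stand-in for librosa.feature.mfcc(y=y, sr=SR, n_mfcc=15):
# any (15, 259) coefficient matrix.
mfcc = np.random.default_rng(0).standard_normal((N_MFCC, n_frames))

# First- and second-order deltas via finite differences
# (librosa.feature.delta uses a smoother Savitzky-Golay filter).
delta = np.gradient(mfcc, axis=1)
delta2 = np.gradient(delta, axis=1)

features = np.stack([mfcc, delta, delta2])  # shape (3, 15, 259)
```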
12. Training & Results
Standard procedure, with augmentation applied only to the training set:
• Balanced training set (positive:negative = 1:3)
• Unbalanced validation and test sets
Very poor results on the validation and test sets: the model does not recognize actual positive recordings.
Metric     Val   Test
Loss       3.80  3.81
Accuracy   0.91  0.89
Precision  0.07  0.04
Recall     0.07  0.05
AUC        0.48  0.52
(Confusion matrix on the test set shown in the slides.)
13. Training & Results
Procedure followed in various papers:
• Data augmentation on the full dataset, before splitting
Much better performance, but two questions arise:
• Is the classifier recognizing the positives, or the augmented audio?
• Is this approach reliable for evaluating real audio?
Metric     Val   Test
Loss       0.42  0.41
Accuracy   0.94  0.94
Precision  0.96  0.95
Recall     0.79  0.81
AUC        0.91  0.92
(Confusion matrix on the test set shown in the slides.)
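Augmenting before splitting lets near-copies of the same recording land in both train and test, which inflates the metrics. A leakage-free pipeline splits first and augments only the training partition; `split_then_augment` and its inner `augment` are hypothetical helpers (the gain transform stands in for any raw-signal augmentation).

```python
import numpy as np

rng = np.random.default_rng(0)

def split_then_augment(recordings, labels, test_frac=0.2, n_aug=4):
    """Split first, then augment only the training partition, so no
    augmented copy of a test recording can leak into training."""
    idx = rng.permutation(len(recordings))
    n_test = int(test_frac * len(recordings))
    test_idx, train_idx = idx[:n_test], idx[n_test:]

    def augment(x):                        # placeholder raw-signal transform
        return x * 10 ** (rng.uniform(-6, 6) / 20)

    train_x, train_y = [], []
    for i in train_idx:
        train_x.append(recordings[i]); train_y.append(labels[i])
        if labels[i] == 1:                 # oversample only the positives
            for _ in range(n_aug):
                train_x.append(augment(recordings[i])); train_y.append(1)
    test_x = [recordings[i] for i in test_idx]
    test_y = [labels[i] for i in test_idx]
    return (train_x, train_y), (test_x, test_y)
```

The test set then contains only untouched real recordings, so its metrics answer the slide's second question directly.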
15. Dataset: COVIDx CXR-3
Created by the COVID-Net team
8 different data sources
Last release: 06/02/2022
Two different splits: training set and test set
3 classes: COVID-19, Pneumonia, Normal
Two .txt files (train, test) containing the metadata:
• Patient ID
• File name
• Class
• Data source
Example record:
Patient ID   101
filename     pneumocystis-jirovecii-pneumonia-3-1.jpg
class        pneumonia
Data source  cohen
16. Data exploration
Training set: 29,404 CXR images
• COVID-19: 15,774 images
• Normal (no pathology): 8,085 images
• Pneumonia: 5,545 images
Test set: 400 CXR images
• COVID-19: 200 images
• Normal (no pathology): 100 images
• Pneumonia: 100 images
The dataset is imbalanced.
(Training and test set distributions shown in the slides.)
17. Images Exploration
Sample CXR images for the «Normal», «COVID-19» and «Pneumonia» classes shown in the slides.
Images are 1024×1024 pixels with 3 channels
Only Posterior-Anterior (PA) CXR
Many images contain noise and undesirable parts
Preliminary operations:
• Resized to 112×112×3 to reduce the computational cost
• Data split
• Data normalization
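The resize and normalization step can be sketched in pure numpy with a nearest-neighbour resize; in practice a library routine (e.g. OpenCV's or PIL's resize, with interpolation) would be used.

```python
import numpy as np

def preprocess(img, size=112):
    """Nearest-neighbour resize to size x size and [0, 1] pixel scaling
    (a minimal stand-in for a cv2/PIL resize)."""
    h, w = img.shape[:2]
    rows = np.linspace(0, h - 1, size).round().astype(int)
    cols = np.linspace(0, w - 1, size).round().astype(int)
    resized = img[rows][:, cols]           # sample a size x size grid
    return resized.astype(np.float32) / 255.0
```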
18. Image Pre-Processing
Image enhancement:
• Techniques used to improve the interpretability of the information in images
• Useful both for radiologists and for automated systems
Pre-processing:
• Removal of the textual information commonly embedded in CXR images
(A noisy CXR image and common textual items shown in the slides.)
19. Improved Adaptive Gamma Correction
Adaptive Gamma Correction (AGC):
• A tool for image contrast enhancement
• Relates the gamma parameter to the cumulative distribution function (CDF) of the pixel gray levels
• Works well for most dimmed images, but fails on globally bright images
Improved Adaptive Gamma Correction:
• A new AGC algorithm
• Enhances bright images with the use of negative images
• Enhances dimmed images with gamma correction modulated by a truncated CDF
(Flowchart of the Improved AGC tool shown in the slides.)
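The core AGC step the improved method builds on can be sketched as follows: each gray level l is remapped with a per-level gamma derived from the CDF, gamma(l) = 1 − cdf(l), so frequent dark levels are brightened more. This is a sketch of the plain AGC idea only; the improved variant from the slides adds the negative-image branch and the truncated CDF.

```python
import numpy as np

def adaptive_gamma_correction(img):
    """Basic adaptive gamma correction on a uint8 grayscale image:
    T(l) = 255 * (l / 255) ** (1 - cdf(l)). Sketch of the base AGC
    step, not the Improved AGC pipeline described above."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    cdf = hist.cumsum() / hist.sum()
    levels = np.arange(256) / 255.0
    table = 255.0 * levels ** (1.0 - cdf)   # per-level transfer curve
    return table[img].astype(np.uint8)
```

On an image with a uniform gray-level histogram this lifts the dark midtones while leaving white at 255.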
21. Pre-Processing: Cropping
The chest CXR images were cropped:
• Top 8% of the image removed (commonly embedded textual information)
• Central crop, to centre the cropped image
(Some pre-processing examples shown in the slides.)
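The two-step crop can be sketched with numpy slicing; the square central crop is an assumption about how "central crop" was realized.

```python
import numpy as np

def crop_cxr(img, top_frac=0.08):
    """Remove the top 8% of the image (where burned-in text usually
    sits), then take the largest centred square crop."""
    h, w = img.shape[:2]
    img = img[int(top_frac * h):]          # drop the top band
    h = img.shape[0]
    side = min(h, w)                       # centred square crop
    r0, c0 = (h - side) // 2, (w - side) // 2
    return img[r0:r0 + side, c0:c0 + side]
```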
22. Class imbalance problem
Different techniques explored to handle the unbalanced classes:
• Under-sampling of the dataset: rebalancing with respect to the least populated class
• Class weights: assigning higher weights to samples from underrepresented classes
• Over-sampling of the dataset: data augmentation on the minority classes
  • Positional data augmentation
  • GAN
Class      Nr. images
COVID-19   15,774
Pneumonia  5,545
Normal     8,085
Total      29,404
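The class-weights option above is usually realized with inverse-frequency weights, weight_c = n_total / (n_classes · n_c), the scheme accepted e.g. by Keras' class_weight argument; whether the project used exactly this formula is an assumption.

```python
# Inverse-frequency class weights for the COVIDx training counts:
# the rarest class (Pneumonia) gets the largest weight.
counts = {"COVID-19": 15774, "Pneumonia": 5545, "Normal": 8085}
total = sum(counts.values())                              # 29,404
weights = {c: total / (len(counts) * n) for c, n in counts.items()}
```

These weights multiply each sample's loss, so errors on Pneumonia images cost roughly 2.8x more than errors on COVID-19 images.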
23. Data Augmentation
A data augmentation technique was adopted to balance the classes; in particular it was:
• Implemented after under-sampling (applied to all classes)
• Implemented to increase the minority classes (not applied to the most populated class)
The following types of augmentation were used:
• Translation (±10% in the x and y directions)
• Rotation (±10°)
• Horizontal flip
• Zoom (±15%)
• Intensity shift (±10%)
(Some augmentation examples shown in the slides.)
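A minimal numpy sketch of three of the positional augmentations listed above (flip, translation, intensity shift); in practice a tool such as Keras' ImageDataGenerator would also handle rotation and zoom with proper interpolation, and the fill behaviour here (circular roll) is a simplification.

```python
import numpy as np

rng = np.random.default_rng(0)

def positional_augment(img, shift_frac=0.10, intensity=0.10):
    """Random horizontal flip, ±10% translation and ±10% intensity
    shift on an HxWxC image (illustrative sketch only)."""
    h, w = img.shape[:2]
    out = img.astype(np.float32)
    if rng.random() < 0.5:                          # horizontal flip
        out = out[:, ::-1]
    dy = rng.integers(-int(shift_frac * h), int(shift_frac * h) + 1)
    dx = rng.integers(-int(shift_frac * w), int(shift_frac * w) + 1)
    out = np.roll(out, (dy, dx), axis=(0, 1))       # ±10% translation
    out = out * (1 + rng.uniform(-intensity, intensity))  # intensity shift
    return np.clip(out, 0, 255)
```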
26. Over-Sampling with Positional Augmentation: Results
The solution that produced the best results turned out to be the one:
• without preprocessing
• with over-sampling of the minority classes via positional augmentation
(Confusion matrix on the test set shown in the slides.)
29. Explainable AI: Class Activation Heat-Map
We developed an explainability algorithm based on Gradient-weighted Class Activation Mapping (Grad-CAM).
It provides a visual output of the most interesting areas found by the proposed CNN models.
Grad-CAM uses the gradients of any target concept, flowing into the final convolutional layer, to produce a coarse localization map highlighting the regions of the image that are important for predicting the concept.
(COVID-19 and Pneumonia CXR activation maps shown in the slides.)
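The Grad-CAM combination step just described can be sketched in numpy: global-average-pool the gradients to one weight per channel, take the weighted sum of the feature maps, and apply ReLU. In the project the feature maps and gradients would come from the CNN (e.g. via TensorFlow's GradientTape); here they are mock arrays.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM localization map from the final conv layer's
    activations and the gradients of the target class score w.r.t.
    them, both of shape (H, W, K)."""
    weights = gradients.mean(axis=(0, 1))                       # alpha_k
    cam = np.maximum((feature_maps * weights).sum(axis=-1), 0)  # ReLU
    if cam.max() > 0:
        cam /= cam.max()                                        # scale to [0, 1]
    return cam
```

The resulting H×W map is then upsampled to the input size and overlaid on the CXR as the heat-map shown in the slides.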
31. Conditional Generation of Synthetic Chest X-Ray Images
Objectives:
• Train an AC-GAN to synthesize chest x-ray images
• Conditional generation of x-rays of healthy, COVID-19 and pneumonia patients
• Data augmentation on the class-imbalanced COVIDx dataset to improve classification performance
Setup:
• Dataset → COVIDx
• Simple image pre-processing → 112×112 resizing and [0, 1] pixel scaling
• Data augmentation → shearing and zooming
(Normal, COVID-19 and Pneumonia samples shown in the slides.)
32. Auxiliary Classifier Generative Adversarial Network (AC-GAN)
AC-GAN → an extension of the GAN architecture.
The generator is class-conditional, as in cGANs:
• Input → a randomly sampled 100-dimensional noise vector and a class label
• Output → a conditionally generated 112×112×3 image
• The classes are coded as integers (0, 1, 2)
The discriminator comes with an auxiliary classifier, trained to reconstruct the class label of the input image:
• Input → a 112×112×3 image (real or synthesized)
• Output → predicts both its source (real/fake) and its class (0, 1, 2)
33. Generator
1. Two inputs:
   1. A random 100-dimensional noise vector
   2. An integer class label c (0, 1, 2)
2. Class label → embedding layer → dense layer → 7 × 7 × 1 tensor
3. Noise vector → dense layer → 7 × 7 × 1024 tensor (ReLU)
4. The two tensors are concatenated → 7 × 7 × 1025
5. Four transposed convolutional layers (kernel size = 5, stride = 2): 7 × 7 × 1025 → 14 × 14 × 512 → 28 × 28 × 256 → 56 × 56 × 128 → 112 × 112 × 3
   • The first three are paired with batch normalization and a Rectified Linear Unit (ReLU) activation
   • The last one uses a tanh activation
6. Output: a fake image of size 112 × 112 × 3
Weights initialized from 𝑁(𝜇 = 0, 𝜎 = 0.02).
Parameters: 22,303,108 (trainable: 22,301,316; non-trainable: 1,792)
34. Discriminator
1. Input: a 112 × 112 × 3 image, from the dataset (real) or synthesized (fake)
2. Four blocks, each a sequence of: 3×3 convolutional layer (stride 2), batch normalization layer, LeakyReLU activation (slope = 0.2) and dropout layer (p = 0.5)
   • Tensor size: 112 × 112 × 3 → 56 × 56 × 64 → 28 × 28 × 128 → 14 × 14 × 256 → 7 × 7 × 512
3. The tensor is flattened (25,088 units) and fed into two dense layers
4. First dense layer (1 unit) + sigmoid activation:
   • Binary classifier → outputs the probability that the image is from the original dataset ("real", 1) rather than generated ("fake", 0)
5. Second dense layer (3 units) + softmax activation (auxiliary classifier):
   • Multiclass classifier → outputs a 1D tensor of class probabilities (COVID-19 = 0, Normal = 1, Pneumonia = 2)
Parameters: 1,672,900 (trainable: 1,670,916; non-trainable: 1,984)
35. Training and regularization
Adam optimizer → for both the generator and the discriminator.
Two loss functions, one for each output layer of the discriminator:
• First output layer → binary cross-entropy loss (source loss 𝑳𝒔)
• Second output layer → sparse categorical cross-entropy (auxiliary classifier loss 𝑳𝒄)
The overall loss 𝑳 = 𝑳𝒔 + 𝑳𝒄 is minimized during both the generator training and the discriminator training.
Label flipping (generator training) → all the fake (0) images generated are passed to the discriminator labelled as real (1).
Label smoothing (discriminator training) → applied to the binary vectors describing the origin of the image (fake = 0, real = 1) as a regularization method.
Parameter        Value
Max epochs       388
Optimizer        Adam
Learning rate    0.0002 (fixed)
Adam 𝜷𝟏          0.5 (fixed)
Batch size       64
Steps per epoch  460
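The combined loss 𝑳 = 𝑳𝒔 + 𝑳𝒄 with smoothed source labels can be sketched in numpy; the project would use the equivalent Keras losses, and the 0.9 smoothing value and the example batch are illustrative assumptions.

```python
import numpy as np

def bce(y_true, p):
    """Binary cross-entropy (source loss Ls)."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return float(-(y_true * np.log(p) + (1 - y_true) * np.log(1 - p)).mean())

def sparse_cce(labels, probs):
    """Sparse categorical cross-entropy (auxiliary classifier loss Lc)."""
    picked = probs[np.arange(len(labels)), labels]
    return float(-np.log(np.clip(picked, 1e-7, None)).mean())

# Discriminator step on a real batch: smooth the "real" source labels
# (1 -> 0.9) as regularization; the class labels stay hard.
src_true = np.full(4, 0.9)
src_pred = np.array([0.8, 0.7, 0.9, 0.6])
cls_true = np.array([0, 2, 1, 0])          # COVID-19 / Pneumonia / Normal / COVID-19
cls_pred = np.array([[0.7, 0.2, 0.1],
                     [0.1, 0.2, 0.7],
                     [0.2, 0.6, 0.2],
                     [0.5, 0.3, 0.2]])
total_loss = bce(src_true, src_pred) + sparse_cce(cls_true, cls_pred)  # L = Ls + Lc
```

In the generator step, label flipping simply means calling `bce` with `src_true` set to 1 for images that are in fact fake.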
36. Training curves
(The slides plot the auxiliary loss 𝑳𝒄, the source loss 𝑳𝒔 and the total loss 𝑳 for the generator and the discriminator, in training and testing, together with the discriminator's overall, real-image and fake-image accuracies.)
37. Choosing the best AC-GAN model weights for data augmentation
1. First set of candidate models selected based on:
   • ↑ visual quality (qualitative evaluation of sample images generated during each epoch)
   • ↓ generator losses
   • ↓ discriminator accuracy in correctly classifying fake images as fake
2. Trained a classifier on synthetic images only → evaluated its classification accuracy on real COVIDx images
   • Epoch 288 → best model
3. Generated image quality evaluation:
   • ↓ FID, ↓ Intra-FID and ↑ Inception Score (IS), computed with InceptionV3
4. 2D t-SNE embedding visualization of generated and real images
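Of the metrics in step 3, the Inception Score is the simplest to state: IS = exp(E_x[KL(p(y|x) ‖ p(y))]), computed from InceptionV3's class posteriors on the generated images. A numpy sketch of the score itself (the InceptionV3 forward pass is assumed done elsewhere); FID additionally compares Gaussian fits of real and generated images in feature space.

```python
import numpy as np

def inception_score(probs):
    """Inception Score from class posteriors p(y|x), one row per
    generated image. Higher = confident per-image predictions with a
    diverse marginal over classes."""
    p_y = probs.mean(axis=0)                                   # marginal p(y)
    kl = (probs * (np.log(probs + 1e-12)
                   - np.log(p_y + 1e-12))).sum(axis=1)         # per-image KL
    return float(np.exp(kl.mean()))
```

With C classes the score ranges from 1 (all posteriors equal to the marginal) up to C (confident and perfectly diverse).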
38. Evaluation
Metric                                 Value
Generator loss 𝑳                       0.44
Discriminator accuracy (fake images)   0.13
Qualitative appearance                 Realistic
CNN accuracy (on real images)          0.63
(2D t-SNE embeddings of real and synthetic images shown in the slides.)
Metric       Our AC-GAN       Paper AC-GAN [6]
IS ↑         2.71 (± 1.70)    2.51 (± 0.12)
FID ↓        123.26 (± 0.02)  50.67 (± 8.13)
Intra-FID ↓  136 (± 0.02)     —
39. Real and Synthetic chest x-ray samples
(Real and fake samples for the Normal, Pneumonia and COVID-19 classes shown in the slides.)
40. Bibliography
1. Fakhry, A., Jiang, X., Xiao, J., Chaudhari, G., Han, A., & Khanzada, A. (2021). Virufy: A multi-branch deep learning network for automated detection of COVID-19.
2. Hamdi, S., Oussalah, M., Moussaoui, A., & Saidi, M. (2022). Attention-based hybrid CNN-LSTM and spectral data augmentation for COVID-19 diagnosis from cough sound. Journal of Intelligent Information Systems, 59(2), 367-389.
3. Mahanta, S. K., Kaushik, D., Van Truong, H., Jain, S., & Guha, K. (2021, December). Covid-19 diagnosis from cough acoustics using convnets and data augmentation. In 2021 First International Conference on Advances in Computing and Future Communication Technologies (ICACFCT) (pp. 33-38). IEEE.
4. COUGHVID: A cough based COVID-19 fast screening project. https://c4science.ch/diffusion/10770/
5. Orlandic, L., Teijeiro, T., & Atienza, D. (2021). The COUGHVID crowdsourcing dataset, a corpus for the study of large-scale cough analysis algorithms. Scientific Data, 8(1), 156.
6. Odena, A., Olah, C., & Shlens, J. (2017). Conditional Image Synthesis With Auxiliary Classifier GANs (arXiv:1610.09585). arXiv. https://doi.org/10.48550/arXiv.1610.09585
7. Christi Florence, C. (2021). Detection of Pneumonia in Chest X-Ray Images Using Deep Transfer Learning and Data Augmentation With Auxiliary Classifier Generative Adversarial Network. 14.
41. Bibliography (continued)
8. Karbhari, Y., Basu, A., Geem, Z. W., Han, G.-T., & Sarkar, R. (2021). Generation of Synthetic Chest X-ray Images and Detection of COVID-19: A Deep Learning Based Approach. Diagnostics, 11(5), Article 5. https://doi.org/10.3390/diagnostics11050895
9. DeVries, T., Romero, A., Pineda, L., Taylor, G. W., & Drozdzal, M. (2019). On the Evaluation of Conditional GANs (arXiv:1907.08175). arXiv. https://doi.org/10.48550/arXiv.1907.08175
10. Borji, A. (2018). Pros and Cons of GAN Evaluation Measures (arXiv:1802.03446). arXiv. https://doi.org/10.48550/arXiv.1802.03446
11. Goel, S., Kipp, A., Goel, N., et al. (2022, November 22). COVID-19 vs. Influenza: A Chest X-ray Comparison. Cureus, 14(11): e31794. doi:10.7759/cureus.31794
12. Kim, S.-H.; Wi, Y.M.; Lim, S.; Han, K.-T.; Bae, I.-G. (2021). Differences in Clinical Characteristics and Chest Images between Coronavirus Disease 2019 and Influenza-Associated Pneumonia. Diagnostics, 11, 261. https://doi.org/10.3390/diagnostics11020261
42. Bibliography (continued)
13. Wang, L., Lin, Z. Q., & Wong, A. (2020). COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images. Sci Rep, 10, 19549. https://doi.org/10.1038/s41598-020-76550-z
14. Cao, G., Huang, L., Tian, H., Huang, X., Wang, Y., & Zhi, R. (2018). Contrast enhancement of brightness-distorted images by improved adaptive gamma correction. Computers & Electrical Engineering, 66, 569-582. ISSN 0045-7906. https://doi.org/10.1016/j.compeleceng.2017.09.012
15. Ait Nasser, A.; Akhloufi, M.A. (2023). A Review of Recent Advances in Deep Learning Models for Chest Disease Detection Using Radiography. Diagnostics, 13, 159. https://doi.org/10.3390/diagnostics13010159
16. Huang, W., Song, G., Li, M., Hu, W., & Xie, K. (2013). Adaptive Weight Optimization for Classification of Imbalanced Data. In: Sun, C., Fang, F., Zhou, Z.H., Yang, W., Liu, Z.Y. (eds) Intelligence Science and Big Data Engineering. IScIDE 2013. Lecture Notes in Computer Science, vol 8261. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-42057-3_69
17. Elshennawy, N.M.; Ibrahim, D.M. (2020). Deep-Pneumonia Framework Using Deep Learning Models Based on Chest X-Ray Images. Diagnostics, 10, 649. https://doi.org/10.3390/diagnostics10090649
18. Chetoui, M.; Akhloufi, M.A.; Yousefi, B.; Bouattane, E.M. Explainable COVID-19 Detection on Chest X-rays Using an End-to-End Deep Convolutional Neural Network Architecture. Big Data Cogn.