Researchers in many related fields are studying how to prevent and control the spread of coronavirus disease 2019 (COVID-19). The virus is spreading exponentially and infecting humans on a massive scale. Early signs can be observed as abnormal conditions in the airways once the virus enters a patient's respiratory tract, and these can be captured using computed tomography (CT) scans and chest X-ray (CXR) imaging. Several deep learning approaches have been developed to classify COVID-19 CT or CXR images, such as the convolutional neural network (CNN) and the deep convolutional neural network (DCNN). However, openly accessible COVID-19 CXR datasets remain scarce, and deep learning performance can be improved by enlarging the training set. Therefore, the COVID-19 CXR dataset can be augmented by generating synthetic images. This study discusses a fast, realistic image-synthesis approach, the depthwise boundary equilibrium generative adversarial network (DepthwiseBEGAN). DepthwiseBEGAN reduced memory load during training by 70.11% compared to the conventional BEGAN. Its synthetic images were inspected by measuring the Fréchet inception distance (FID), with a real-to-real score of 4.3866 and a real-to-fake score of 4.4674. Moreover, the generated synthetic images improved the accuracy of conventional CNN models by 22.59%.
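The memory saving behind DepthwiseBEGAN comes from swapping standard convolutions for depthwise-separable ones. A minimal sketch of the parameter-count arithmetic, with illustrative layer sizes (not the paper's actual architecture):

```python
# Illustrative parameter-count comparison between a standard convolution
# and a depthwise-separable convolution; layer sizes are hypothetical.

def standard_conv_params(k, c_in, c_out):
    # A standard k x k convolution learns one k*k*c_in filter per output channel.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise step: one k*k filter per input channel,
    # followed by a 1x1 pointwise convolution that mixes channels.
    return k * k * c_in + c_in * c_out

k, c_in, c_out = 3, 128, 128
std = standard_conv_params(k, c_in, c_out)        # 147456
sep = depthwise_separable_params(k, c_in, c_out)  # 17536
reduction = 100 * (1 - sep / std)
print(f"standard: {std}, separable: {sep}, reduction: {reduction:.2f}%")
```

The actual 70.11% figure depends on the full generator/discriminator layout; this sketch only shows why the substitution shrinks each layer.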
One reliable way of detecting coronavirus disease 2019 (COVID-19) is using chest x-ray images, owing to the disease's complications in the lung parenchyma. This paper proposes a solution for COVID-19 detection in chest x-ray images based on a convolutional neural network (CNN). The CNN-based solution uses a modified InceptionV3 as its backbone architecture. Self-attention layers are inserted into the backbone so that the number of trainable parameters is reduced and meaningful COVID-19 areas of the chest x-ray are focused on during training. The proposed CNN architecture is then trained to classify COVID-19 cases from non-COVID-19 cases. It achieves sensitivity, specificity, and accuracy values of 93%, 96%, and 96%, respectively. The model is further validated on so-called other normal and other abnormal cases, which are non-COVID-19 cases. Other normal cases contain chest x-ray images of elderly patients with minimal fibrosis and spondylosis of the spine, whereas other abnormal cases contain chest x-ray images of tuberculosis, pneumonia, and pulmonary edema. The proposed solution correctly classified them as non-COVID-19 with 92% accuracy. This is a practical scenario in which non-COVID-19 cases cover more than just the normal condition.
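The self-attention layers mentioned above are, at their core, scaled dot-product attention over spatial positions. A minimal numpy sketch with hypothetical shapes and random projections (a stand-in for the layers inserted into the InceptionV3 backbone, not the paper's exact module):

```python
import numpy as np

# Scaled dot-product self-attention over a flattened feature map.
# Shapes and projection sizes are illustrative, not the paper's.

def self_attention(x, wq, wk, wv):
    """x: (positions, channels) feature map flattened over space."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])          # pairwise position affinity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ v                               # re-weighted features

rng = np.random.default_rng(0)
x = rng.normal(size=(49, 32))                        # 7x7 spatial grid, 32 channels
wq = rng.normal(size=(32, 8))
wk = rng.normal(size=(32, 8))
wv = rng.normal(size=(32, 32))
out = self_attention(x, wq, wk, wv)
print(out.shape)  # (49, 32)
```

Each output position becomes a weighted mixture of all positions, which is how attention can emphasize disease-relevant lung regions.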
Classification of pneumonia from X-ray images using siamese convolutional net... - TELKOMNIKA JOURNAL
Pneumonia is one of the leading global causes of death, especially for children under 5 years old, mainly because of the difficulty of identifying its cause. As a result, the treatment given may not suit each pneumonia case. Recent studies have used deep learning approaches to better classify the cause of pneumonia. In this research, we used a siamese convolutional network (SCN) to classify chest x-ray pneumonia images into three classes: normal conditions, bacterial pneumonia, and viral pneumonia. A siamese convolutional network is a neural network architecture that learns similarity between pairs of image inputs based on the differences between their features. One important benefit of classifying data with an SCN is the availability of comparable images that can be used as a reference when determining the class. Using an SCN, our best model achieved 80.03% accuracy, a 79.59% F1-score, and improved result reasoning by providing the comparable images.
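The pairwise-similarity idea behind a siamese network can be sketched with a shared embedding and a contrastive loss: same-class pairs are pulled together, different-class pairs pushed beyond a margin. The embedding here is a random stand-in for the shared CNN branches:

```python
import numpy as np

# Minimal contrastive-loss sketch for a siamese setup.
# The embed() function stands in for the shared CNN; weights are random.

def embed(x, w):
    return np.tanh(x @ w)                    # same weights for both branches

def contrastive_loss(e1, e2, same, margin=1.0):
    d = np.linalg.norm(e1 - e2)
    if same:                                 # same class: minimize distance
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2   # different: push past the margin

rng = np.random.default_rng(1)
w = rng.normal(size=(16, 4))
a, b = rng.normal(size=16), rng.normal(size=16)
ea, eb = embed(a, w), embed(b, w)
print(contrastive_loss(ea, eb, same=True) >= 0)   # True
print(contrastive_loss(ea, ea, same=True))        # identical pair: 0.0
```

At inference, a query x-ray is compared against labeled reference images, which is what makes the "comparable images" explanation possible.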
Performance Comparison Analysis for Medical Images Using Deep Learning Approa... - IRJET Journal
This document discusses and compares several deep learning approaches for analyzing medical images, specifically chest x-rays. It first provides an abstract that outlines comparing existing technologies for analyzing chest x-rays using deep learning. It then reviews literature on models like convolutional neural networks (CNN), fully convolutional networks (FCN), lookup-based convolutional neural networks (LCNN), and deep cascade of convolutional neural networks (DCCNN) that have been applied to tasks like image segmentation, classification, and quality assessment of medical images. The document compares the performance of these models on different medical image datasets based on accuracy metrics.
Classification of COVID-19 from CT chest images using convolutional wavelet ... - IJECEIAES
Analyzing X-rays and computed tomography (CT) scan images using a convolutional neural network (CNN) has become an especially interesting subject since the coronavirus disease 2019 (COVID-19) pandemic. In this paper, a study is made on CT scan images of 423 patients from Al-Kadhimiya (Madenat Al Emammain Al Kadhmain) hospital in Baghdad, Iraq, to diagnose COVID-19 using a CNN. The tested data comprise 15,000 CT-scan images chosen in a specific way to give a correct diagnosis. The activation function used in this research is a wavelet function, which differs from the usual CNN activation functions. The proposed convolutional wavelet neural network (CWNN) model is compared with regular CNNs using other activation functions (exponential linear unit (ELU), rectified linear unit (ReLU), Swish, Leaky ReLU, and Sigmoid), and the CWNN gave better results on all performance metrics (accuracy, sensitivity, specificity, precision, and F1-score). The prediction accuracies of the CWNN were 99.97%, 99.9%, 99.97%, and 99.04% when using the wavelet filters RASP1 and RASP2 (rational functions with quadratic poles), POLYWOG1 (polynomials windowed), and SLOG1 (superposed logistic function) as the activation function, respectively. Using this algorithm can greatly reduce the time a radiologist needs to detect whether a patient has COVID-19, with very high accuracy.
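The exact formulas for the RASP/POLYWOG/SLOG filters are not reproduced in this summary. As an illustrative stand-in, the sketch below uses the real part of the Morlet wavelet, which has the same oscillatory, decaying character that distinguishes wavelet activations from ReLU-style functions:

```python
import numpy as np

# A wavelet used as an activation function (illustrative substitute for the
# paper's RASP1/RASP2/POLYWOG1/SLOG1 filters, whose formulas differ).

def morlet(x):
    # Real Morlet wavelet: a cosine carrier damped by a Gaussian envelope.
    return np.cos(5.0 * x) * np.exp(-0.5 * x ** 2)

x = np.linspace(-3.0, 3.0, 7)
y = morlet(x)
print(y.round(3))  # oscillates near zero, decays toward the tails
```

Unlike ReLU, such an activation is bounded and localized, which is the property the CWNN exploits.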
A deep learning approach for COVID-19 and pneumonia detection from chest X-r... - IJECEIAES
There has been a surge in biomedical imaging technologies with the recent advancement of deep learning, which is being used for diagnosis from X-ray, computed tomography (CT) scan, electrocardiogram (ECG), and electroencephalography (EEG) images. However, most models target only a particular disease. In this research, a computer-aided deep learning model named COVID-CXDNetV2 is presented to detect two separate diseases, coronavirus disease 2019 (COVID-19) and pneumonia, from X-ray images in real time. The proposed model is based on you only look once (YOLOv2) with a residual neural network (ResNet) and is trained on a large X-ray dataset containing 3788 samples of three classes: COVID-19, pneumonia, and normal. The model obtained a maximum overall classification accuracy of 97.9% with a loss of 0.052 for multiclass classification (COVID-19, pneumonia, and normal) and 99.8% accuracy, 99.52% sensitivity, and 100% specificity with a loss of 0.001 for binary classification (COVID-19 and normal), which beats some current state-of-the-art results. The authors believe this method will be applicable for diagnosis in the medical domain and will contribute significantly to real-life use.
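The accuracy, sensitivity, and specificity figures quoted throughout these abstracts follow directly from confusion-matrix counts. A small sketch with made-up counts (not any paper's actual numbers):

```python
# Binary classification metrics from confusion-matrix counts.
# tp/fp/tn/fn values below are illustrative only.

def binary_metrics(tp, fp, tn, fn):
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),  # recall on the positive (COVID-19) class
        "specificity": tn / (tn + fp),  # recall on the negative (normal) class
    }

m = binary_metrics(tp=95, fp=2, tn=98, fn=5)
print(m)  # accuracy 0.965, sensitivity 0.95, specificity 0.98
```

Sensitivity measures how few infected cases are missed; specificity measures how few healthy cases are falsely flagged, which is why papers report both alongside accuracy.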
ARTIFICIAL INTELLIGENCE BASED COVID-19 DETECTION USING COMPUTED TOMOGRAPHY IM... - IRJET Journal
This document summarizes an artificial intelligence system developed to detect COVID-19 in computed tomography (CT) images of the lungs. The system uses convolutional neural networks (CNNs) to extract features from segmented lung images and classify images as normal, COVID-19, or other lung diseases. Previous related work that used CNNs and other deep learning techniques on CT and X-ray images for COVID-19 detection is reviewed. The proposed system applies edge detection algorithms before training the CNN to enhance image contrast and improve COVID-19 detection accuracy. It also uses multi-image augmentation to increase the size and variability of the training dataset.
This document describes a study that developed a CNN model to detect COVID-19 in chest X-ray images. The researchers used a dataset of normal, pneumonia, and COVID-19 chest X-rays to train the CNN model. They extracted features from the images using histogram of oriented gradients (HOG) and CNN methods and combined the features as input to the CNN classifier. With modifications such as data augmentation and noise removal to improve performance, the CNN model was able to accurately classify X-ray images as COVID-19 positive or negative. The study aims to provide an effective method for detecting COVID-19 using readily available X-ray machines, helping address testing limitations and reduce the burden on healthcare systems.
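The HOG side of that feature combination bins gradient orientations into a fixed-length descriptor that can be concatenated with learned CNN features. A bare-bones sketch of the orientation-histogram idea (not a full HOG implementation, which also uses cells, blocks, and overlapping normalization):

```python
import numpy as np

# Minimal magnitude-weighted orientation histogram, the core of HOG.

def orientation_histogram(img, bins=9):
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180       # unsigned orientation
    hist = np.zeros(bins)
    idx = np.minimum((ang / (180 / bins)).astype(int), bins - 1)
    for b in range(bins):
        hist[b] = mag[idx == b].sum()                # magnitude-weighted bins
    return hist / (hist.sum() + 1e-8)                # normalized descriptor

rng = np.random.default_rng(2)
patch = rng.random((8, 8))
h = orientation_histogram(patch)
print(h.shape)  # (9,) descriptor, ready to concatenate with CNN features
```

The resulting fixed-length vector is what gets appended to the learned features before classification.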
Detection Of Covid-19 From Chest X-Rays - IRJET Journal
This document presents a study that aims to detect COVID-19 from chest x-rays using deep learning models. The researchers collected chest x-ray images of COVID-19 patients, people with pneumonia, and healthy individuals. They used a transfer learning approach with the Inception V3 model to extract features from the x-ray images. Additionally, they developed a customized deep neural network (DNN) for classification. The model was trained on 80% of the dataset and evaluated on the remaining 20%. Results showed the Inception V3 model achieved high accuracy, sensitivity, and F1-score for detecting COVID-19, pneumonia, and normal cases from chest x-rays. This deep learning approach holds promise for fast and accurate COVID-19 detection.
covid 19 detection using x ray based on neural network - ArifuzzamanFaisal2
This document presents a comparative study of multiple neural network models for detecting COVID-19 from chest X-rays. It evaluates VGG16, VGG19, DenseNet121, Inception-ResNet-V2, InceptionV3, ResNet50, and Xception on a dataset of 7165 chest X-ray images of COVID-19 (1536) and pneumonia (5629) patients. Results show that DenseNet121 achieved the best performance with 99.48% accuracy, outperforming the other models in detecting and classifying COVID-19 and pneumonia cases from chest X-rays.
REVIEW ON COVID DETECTION USING X-RAY AND SYMPTOMS - IRJET Journal
This document presents a review of detecting COVID-19 using chest X-rays and symptoms. It first provides background on the COVID-19 pandemic and discusses how artificial intelligence and deep learning are being used to classify medical images like chest X-rays to detect various diseases. The paper then reviews several existing studies that have used convolutional neural networks to achieve high accuracy (over 90%) in detecting COVID-19 in chest X-rays. It proposes a model that uses a CNN to analyze chest X-rays and a decision tree model to analyze reported symptoms, then integrates the results to diagnose whether a patient is COVID-19 positive or normal. The model aims to provide a low-cost and rapid method for COVID-19 detection.
Automatic COVID-19 lung images classification system based on convolution ne... - IJECEIAES
Coronavirus disease (COVID-19) still has disastrous effects on human life around the world. To fight the disease, examining patients in a quick and cheap way is necessary, and radiography is the most effective step toward this target: the chest X-ray is a readily obtainable and cheap option. Also, because COVID-19 is a viral disease, distinguishing it from common viral pneumonia is difficult. In this study, X-ray images of 500 patients each were collected for healthy controls, typical viral pneumonia, bacterial pneumonia, and COVID-19. To our knowledge, this was the first quaternary classification study that also included classical viral pneumonia. To efficiently capture nuances in X-ray images, a new convolutional neural network model was created for accurate image classification. Our model achieved an overall accuracy, sensitivity, specificity, F1-score, and area under the curve (AUC) of 0.98, 0.97, 0.98, 0.97, and 0.99, respectively.
Deep learning method for lung cancer identification and classification - IAESIJAI
Lung cancer (LC) is claiming many lives and is becoming a serious cause for concern. Detecting LC at an early stage improves the chances of recovery, and the accuracy of early-stage detection can be improved with a convolutional neural network (CNN) based deep learning approach. In this paper, we present two methodologies for lung cancer detection (LCD) applied to the Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) datasets. Classification of these LC images is carried out using a support vector machine (SVM) and a deep CNN. The CNN is trained with i) multiple batches and ii) a single batch to classify LC images as cancer or non-cancer. All methods are implemented in MATLAB. The classification accuracy obtained by the SVM is 65%, whereas the deep CNN produced detection accuracies of 80% and 100% for multiple- and single-batch training, respectively. The novelty of our experimentation is the near-100% classification accuracy obtained by our deep CNN model when tested on 25 lung computed tomography (CT) test images, each of size 512×512 pixels, in fewer than 20 iterations, compared to the work of other researchers using cropped LC nodule images.
Recognition of Corona virus disease (COVID-19) using deep learning network - IJECEIAES
Coronavirus disease (COVID-19) has had an incredible impact over the last few months, causing thousands of deaths around the world and prompting a rapid research movement to deal with this new virus. In computer science, many technical studies have tackled it using image processing algorithms. In this work, we introduce a method based on deep learning networks to classify COVID-19 from x-ray images. Our results are encouraging enough to rely on for distinguishing infected people from normal ones. We conduct our experiments on a recent dataset, the Kaggle dataset of COVID-19 X-ray images, using the ResNet50 deep learning network with 5-fold and 10-fold cross-validation. The results show that 5 folds give more effective results than 10 folds, with an accuracy rate of 97.28%.
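The 5-fold vs 10-fold comparison rests on standard k-fold cross-validation: the data is split into k folds, and each fold serves once as the held-out test set. A minimal index-splitting sketch:

```python
# Plain-Python k-fold split: returns (train, test) index lists per fold.

def k_fold_indices(n, k):
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    # each fold is the test set exactly once; the rest form the train set
    return [(sum(folds[:i] + folds[i + 1:], []), folds[i]) for i in range(k)]

splits = k_fold_indices(n=10, k=5)
print(len(splits), splits[0][1])  # 5 folds; first test fold is [0, 1]
```

With fewer folds each test set is larger, so per-fold estimates are less noisy, which is one plausible reason 5 folds edged out 10 here.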
Using Deep Learning and Transfer Learning for Pneumonia Detection - IRJET Journal
This document presents research on using deep learning and transfer learning models to detect pneumonia from chest x-ray images. The researchers collected a dataset of over 5,000 chest x-rays and split it into training and test sets. Data augmentation techniques were used to expand the training dataset size. Several deep learning models were trained on the data including CNN, DenseNet121, VGG16, ResNet50, and Inception v3. Transfer learning was also utilized by training models pre-trained on ImageNet. The Inception v3 model achieved the highest testing accuracy of 80.29% for pneumonia detection. The researchers concluded the deep learning models could help radiologists diagnose pneumonia but that more work is needed to localize affected lung regions.
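The augmentation step described above produces several label-preserving variants of each training image. A simple numpy sketch of flip and shift transforms; real pipelines (and possibly this paper's) also rotate, zoom, and adjust contrast, and whether horizontal flips are anatomically appropriate for chest x-rays is a modeling choice:

```python
import numpy as np

# Simple flip/shift augmentations to expand a training set.

def augment(img, rng):
    out = [img, np.fliplr(img)]               # original + horizontal mirror
    shift = int(rng.integers(1, 4))
    out.append(np.roll(img, shift, axis=1))   # small horizontal shift
    out.append(np.roll(img, shift, axis=0))   # small vertical shift
    return out

rng = np.random.default_rng(3)
x = rng.random((32, 32))
variants = augment(x, rng)
print(len(variants))  # 4 variants per source image
```

Each variant keeps the original label, so the effective dataset grows without new annotation effort.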
Twin support vector machine using kernel function for colorectal cancer detec... - journalBEEI
Machine learning technology is increasingly needed in the medical field, and this research applies it to a medical problem: many cases of colorectal cancer are diagnosed late, and by the time the cancer is detected it is usually well developed. Machine learning, an approach within artificial intelligence, can detect colorectal cancer early. This study discusses colorectal cancer detection using the twin support vector machine (SVM) method with several kernel functions: linear, polynomial, RBF, and Gaussian kernels. By comparing accuracy and running time, we determine which kernel better classifies the colorectal cancer dataset obtained from Al-Islam Hospital, Bandung, Indonesia. The results show that the polynomial kernel has better accuracy and running time, with a maximum twin SVM accuracy of 86% and a running time of 0.502 seconds.
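The four kernels being compared can be written out explicitly. A twin SVM solver itself is beyond this sketch; the hyperparameters (degree, gamma, sigma) below are illustrative, not the study's tuned values:

```python
import numpy as np

# The kernel functions compared in the study, as plain numpy expressions.

def linear_kernel(x, y):
    return x @ y

def polynomial_kernel(x, y, degree=3, coef0=1.0):
    return (x @ y + coef0) ** degree

def rbf_kernel(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def gaussian_kernel(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

x, y = np.array([1.0, 2.0]), np.array([0.5, -1.0])
print(rbf_kernel(x, x))  # a point is maximally similar to itself: 1.0
```

Note that the RBF and Gaussian kernels are the same family under the substitution gamma = 1/(2*sigma^2); the study treats them as separate configurations.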
Covid Detection Using Lung X-ray Images - IRJET Journal
This document describes a study that used a deep learning model to detect COVID-19 in lung x-ray images. The researchers trained a VGG-16 convolutional neural network on a dataset of over 5,800 x-ray images of both COVID-19 and normal lungs. Data augmentation techniques were used to increase the size and variation of the training dataset. The model achieved 94% accuracy in distinguishing between COVID-19 and normal x-rays. This accurate and fast COVID-19 detection using deep learning could help reduce costs and diagnostic times compared to traditional testing methods.
This document presents a convolutional neural network model to detect pneumonia from chest x-ray images. The model is trained from scratch on a dataset of over 5,800 chest x-ray images categorized into pneumonia and normal images. The model uses preprocessing like resizing and normalization, data augmentation, and a custom sequential CNN model with convolutional and pooling layers to extract features and classify images. Evaluation metrics like precision, recall, accuracy and F1 score are used to analyze the trained model's performance at detecting pneumonia from chest x-rays. The proposed system aims to help diagnose pneumonia early and assist medical professionals, especially in remote areas.
Study and Analysis of Different CNN Architectures by Detecting Covid-19 and ... - IRJET Journal
This document discusses using various CNN architectures to detect Covid-19 and pneumonia from chest X-ray images. It analyzes eight CNN models - AlexNet, DenseNet121, MobileNet, ResNet50/101, VGG16/19, and Xception - on a dataset of over 9,000 chest X-ray images categorized into normal, Covid-19, and pneumonia classes. The ResNet-50 model achieved the optimal classification accuracy. The images were preprocessed and divided into training, validation, and test sets before inputting into the CNNs. Feature extraction and model training were then performed to classify the images and present results with associated probabilities.
Pneumonia prediction on chest x-ray images using deep learning approach - IAESIJAI
Coronavirus disease 2019 (COVID-19) is an infectious disease whose first symptoms are similar to the flu, and in many cases it causes pneumonia. Since pulmonary infections can be observed in radiography images, this paper investigates deep learning methods for automatically analyzing query chest x-ray images. In deep learning, computers automatically identify useful features for the model directly from the raw data, bypassing the difficult step of manual feature refinement; the main strength of the approach is its focus on automatically learning data representations. Visual geometry group 16 (VGG16) and DenseNet121 are the deep learning architectures used here, applied to a chest x-ray pneumonia dataset divided into training, testing, and validation sets. The best method for this case is VGG16, with 93% training accuracy and 90% testing accuracy. DenseNet121 obtained accuracy below VGG16, with 92% training accuracy and 88% testing accuracy. Parameters significantly influence the accuracy of each model, and with the parameters used, VGG16 is a high-accuracy method that can be used to predict chest x-ray images for checking pneumonia in patients.
A Review Paper on Covid-19 Detection using Deep Learning - IRJET Journal
This document reviews methods for detecting COVID-19 using deep learning techniques applied to chest X-rays and CT scans. It summarizes several research papers that have used convolutional neural networks and techniques like transfer learning to analyze medical images and accurately classify patients as COVID-19 positive or normal. The research shows these deep learning models can detect COVID-19 from images with high accuracy, even outperforming traditional PCR tests. Larger datasets are still needed to improve the models. Overall, the document concludes medical image analysis with deep learning is a promising approach for fast and effective COVID-19 detection.
Deep learning for cancer tumor classification using transfer learning and fe... - IJECEIAES
Deep convolutional neural networks (CNNs) represent one of the state-of-the-art methods for image classification in a variety of fields. Because the number of training images in biomedical image classification is limited, transfer learning with CNNs is frequently applied. Breast cancer is one of the most common types of cancer causing death in women, and early detection and treatment are vital for improving survival rates. In this paper, we propose a deep neural network framework based on transfer learning for detecting and classifying breast cancer histopathology images. In the proposed framework, we extract features from images using three pre-trained CNN architectures, VGG-16, ResNet50, and Inception-v3, concatenate their extracted features, and feed them into a fully connected (FC) layer to classify benign and malignant tumor cells in breast cancer histopathology images. Compared with architectures that use a single CNN and with many conventional classification methods, the proposed framework outperformed all other deep learning architectures, achieving an average accuracy of 98.76%.
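The fusion step can be sketched as concatenating the backbone feature vectors and passing them through a fully connected layer. The three vectors below are random stand-ins for VGG-16, ResNet50, and Inception-v3 outputs; the dimensions and weights are hypothetical:

```python
import numpy as np

# Feature concatenation from multiple (simulated) backbones, then an FC layer.

rng = np.random.default_rng(4)
f_vgg = rng.normal(size=512)        # stand-in for VGG-16 features
f_resnet = rng.normal(size=2048)    # stand-in for ResNet50 features
f_inception = rng.normal(size=2048) # stand-in for Inception-v3 features

fused = np.concatenate([f_vgg, f_resnet, f_inception])  # (4608,)

w = rng.normal(size=(fused.size, 2)) * 0.01             # FC layer: 2 classes
b = np.zeros(2)
logits = fused @ w + b
probs = np.exp(logits) / np.exp(logits).sum()           # softmax: benign vs malignant
print(fused.shape, round(float(probs.sum()), 6))
```

Concatenation lets the classifier weight complementary features from each backbone rather than committing to a single representation.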
Breast cancer detection using ensemble of convolutional neural networks - IJECEIAES
Early detection leading to timely treatment in the initial stages of cancer may decrease the breast cancer death rate. We propose deep learning techniques along with image processing for the detection of tumors. The availability of online datasets and advances in graphical processing units (GPU) have promoted the application of deep learning models for the detection of breast cancer. In this paper, deep learning models using convolutional neural network (CNN) have been built to automatically classify mammograms into benign and malignant. Issues like overfitting and dataset imbalance are overcome. Experimentation has been done on two publicly available datasets, namely mammographic image analysis society (MIAS) database and digital database for screening mammography (DDSM). Robustness of the models is accomplished by merging the datasets. In our experimentation, MatConvNet has achieved an accuracy of 94.2% on the merged dataset, performing the best amongst all the CNN models used individually. Hungarian optimization algorithm is employed for selection of individual CNN models to form an ensemble. Ensemble of CNN models led to an improved performance, resulting in an accuracy of 95.7%.
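Once individual CNN members are chosen, their predictions must be combined. A common combiner is majority voting, sketched below; the paper's Hungarian-optimization member-selection step is omitted, and the labels are illustrative:

```python
from collections import Counter

# Majority-vote combiner over per-model predictions.

def majority_vote(predictions):
    """predictions: list of per-model label lists, one label per sample."""
    n_samples = len(predictions[0])
    voted = []
    for i in range(n_samples):
        votes = Counter(model[i] for model in predictions)
        voted.append(votes.most_common(1)[0][0])
    return voted

model_a = ["benign", "malignant", "benign"]
model_b = ["benign", "benign", "benign"]
model_c = ["malignant", "malignant", "benign"]
print(majority_vote([model_a, model_b, model_c]))  # ['benign', 'malignant', 'benign']
```

Voting tends to help when member models make uncorrelated errors, which is consistent with the ensemble's accuracy gain from 94.2% to 95.7%.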
Employing deep learning for lung sounds classification - IJECEIAES
This document summarizes research on classifying lung sounds using deep learning models. It presents two models: 1) A convolutional neural network (CNN) model developed from scratch to classify lung sound spectrogram images into six classes with an accuracy of 91%. 2) A transfer learning approach using the pre-trained AlexNet network on the same dataset, which achieved a higher accuracy of 94%. A comparison to prior research achieving 80% accuracy shows that the transfer learning approach was more effective for lung sound classification. The document concludes that transfer learning is an effective method for classification when datasets are small.
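The models above operate on spectrogram images of the lung sounds. A minimal short-time Fourier transform sketch: slice the signal into frames, window each frame, and take FFT magnitudes. Frame and hop sizes are illustrative, not the study's settings:

```python
import numpy as np

# Minimal STFT magnitude spectrogram.

def spectrogram(signal, frame=256, hop=128):
    window = np.hanning(frame)                   # taper to reduce spectral leakage
    n_frames = 1 + (len(signal) - frame) // hop
    frames = np.stack([signal[i * hop:i * hop + frame] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))   # (n_frames, frame//2 + 1)

t = np.linspace(0, 1, 4096)
sig = np.sin(2 * np.pi * 440 * t)                # 440 Hz test tone
spec = spectrogram(sig)
print(spec.shape)  # (31, 129)
```

The resulting time-frequency grid is saved as an image, which is what lets image CNNs such as AlexNet be reused for audio classification.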
A Review on Covid Detection using Cross Dataset Analysis (IRJET Journal)
This document provides an overview of deep learning approaches used for COVID-19 detection using cross-dataset analysis of CT scans. It discusses how cross-dataset analysis aims to improve model accuracy by handling limitations like generalization problems, dataset bias, and robustness to variation in image quality. Several studies that have used techniques like transfer learning, data augmentation, and pre-processing on CT scan datasets are summarized. The studies found that models trained on one dataset performed best on similar datasets, and accuracy dropped when testing on datasets with more variation in images. Overall, the document reviews progress in cross-dataset COVID detection using CT scans, but notes there are still opportunities to address limitations and improve model adaptation across diverse datasets.
Despite the availability of radiology devices in some health care centers, thorax diseases remain among the most common health problems, especially in rural areas. By exploiting the power of the Internet of things and platforms that analyze large volumes of medical data, a patient's health could be assessed earlier. In this paper, the proposed model is based on a pre-trained ResNet-50 for diagnosing thorax diseases. Chest x-ray images are cropped to extract the rib cage from the chest radiographs. ResNet-50 was re-trained on the ChestX-ray14 dataset, where chest radiographs are fed into the model to determine whether the person is healthy. In the case of an unhealthy patient, the model classifies the disease into one of fourteen chest diseases. The results show the ability of ResNet-50 to achieve impressive performance in classifying thorax diseases.
This document describes a proposed method for classifying chest x-ray images to diagnose lung infections using convolutional neural networks (CNNs). The objectives are to examine if transfer learning from different source domains can improve performance for classifying healthy, pneumonia and COVID-19 cases using a small dataset. The proposed methodology includes collecting datasets, training a CNN model using transfer learning, evaluating performance using a confusion matrix, and identifying opportunities for future enhancement like exploring different network architectures and domains.
Lung Cancer Detection using Convolutional Neural Network (IRJET Journal)
The document presents a study on detecting lung cancer using convolutional neural networks. Specifically, it uses the YOLO framework to accurately detect lung tumors and their locations in CT images. The proposed system first collects CT images and pre-processes them before training a YOLO object detection model. The trained model is then used to detect and localize tumors in test images and provide classification. Evaluation shows the model can successfully pinpoint tumors attached to blood vessels and distinguish between different types of lung cancer. The authors aim to improve the model through expanding the dataset and exploring updated deep learning techniques.
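Detectors in the YOLO family score a predicted box against ground truth with intersection over union (IoU). A small, self-contained version of that measure (illustrative, not the paper's code):

```python
def box_iou(a, b):
    """Boxes as (x1, y1, x2, y2) corners; returns IoU in [0, 1]."""
    # Corners of the overlap rectangle, if any.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

# Two 10x10 boxes offset by 5 px: overlap 25, union 175.
print(box_iou((0, 0, 10, 10), (5, 5, 15, 15)))
```

A detection is typically counted as correct when its IoU with a ground-truth tumor box exceeds a threshold such as 0.5, which is how "successfully pinpoint tumors" claims are usually quantified.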
Redefining brain tumor segmentation: a cutting-edge convolutional neural netw... (IJECEIAES)
Medical image analysis has witnessed significant advancements with deep learning techniques. In the domain of brain tumor segmentation, the ability to precisely delineate tumor boundaries from magnetic resonance imaging (MRI) scans holds profound implications for diagnosis. This study presents an ensemble convolutional neural network (CNN) with transfer learning, integrating the state-of-the-art Deeplabv3+ architecture with the ResNet18 backbone. The model is rigorously trained and evaluated, exhibiting remarkable performance metrics, including an impressive global accuracy of 99.286%, a class accuracy of 82.191%, a mean intersection over union (IoU) of 79.900%, a weighted IoU of 98.620%, and a Boundary F1 (BF) score of 83.303%. Notably, a detailed comparative analysis with existing methods showcases the superiority of our proposed model. These findings underscore the model's competence in precise brain tumor localization, underscoring its potential to revolutionize medical image analysis and enhance healthcare outcomes. This research paves the way for future exploration and optimization of advanced CNN models in medical imaging, emphasizing addressing false positives and resource efficiency.
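The mean-IoU figure quoted above is the same overlap measure applied to segmentation masks rather than boxes. A toy sketch for a single binary tumor mask (not the study's implementation):

```python
import numpy as np

def mask_iou(pred, truth):
    """IoU between two binary segmentation masks of the same shape."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0  # two empty masks agree perfectly

# Ground truth: a 4x4 tumor region; prediction: the same region shifted by 1 px.
truth = np.zeros((8, 8), bool); truth[2:6, 2:6] = True
pred = np.zeros((8, 8), bool); pred[3:7, 3:7] = True
print(mask_iou(pred, truth))  # intersection 9 px, union 23 px
```

Mean IoU averages this per-class score over all classes (tumor and background here), which is why it can sit well below the pixel-wise global accuracy when tumors are small.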
Embedded machine learning-based road conditions and driving behavior monitoring (IJECEIAES)
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
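The accuracy, precision, and recall figures quoted above come from straightforward counting over predicted versus true event labels. A toy illustration with hypothetical driving-event labels (not the paper's data):

```python
def scores(y_true, y_pred, positive):
    """Accuracy over all classes; precision/recall for one positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return acc, tp / (tp + fp), tp / (tp + fn)

# Hypothetical event labels: one "bump" was missed by the classifier.
y_true = ["bump", "bump", "normal", "sudden_stop", "bump", "normal"]
y_pred = ["bump", "normal", "normal", "sudden_stop", "bump", "normal"]
acc, prec, rec = scores(y_true, y_pred, positive="bump")
print(acc, prec, rec)  # 5/6 accuracy, 2/2 precision, 2/3 recall
```

For a multi-class system like this one, the reported precision and recall would typically be such per-class values averaged over all event classes.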
Similar to Realistic image synthesis of COVID-19 chest X-rays using depthwise boundary equilibrium generative adversarial networks
covid 19 detection using x ray based on neural network (ArifuzzamanFaisal2)
This document presents a comparative study of multiple neural network models for detecting COVID-19 from chest X-rays. It evaluates VGG16, VGG19, DenseNet121, Inception-ResNet-V2, InceptionV3, ResNet50, and Xception on a dataset of 7165 chest X-ray images of COVID-19 (1536) and pneumonia (5629) patients. Results show that DenseNet121 achieved the best performance with 99.48% accuracy, outperforming the other models in detecting and classifying COVID-19 and pneumonia cases from chest X-rays.
REVIEW ON COVID DETECTION USING X-RAY AND SYMPTOMS (IRJET Journal)
This document presents a review of detecting COVID-19 using chest X-rays and symptoms. It first provides background on the COVID-19 pandemic and discusses how artificial intelligence and deep learning are being used to classify medical images like chest X-rays to detect various diseases. The paper then reviews several existing studies that have used convolutional neural networks to achieve high accuracy (over 90%) in detecting COVID-19 in chest X-rays. It proposes a model that uses a CNN to analyze chest X-rays and a decision tree model to analyze reported symptoms, then integrates the results to diagnose whether a patient is COVID-19 positive or normal. The model aims to provide a low-cost and rapid method for COVID-19 detection.
Automatic COVID-19 lung images classification system based on convolution ne... (IJECEIAES)
Coronavirus disease (COVID-19) still has disastrous effects on human life around the world. To fight the disease, quick and inexpensive examination of suspected patients is necessary, and radiography is the most effective step toward this target: chest x-rays are readily obtainable and cheap. Moreover, because COVID-19 is a viral disease, distinguishing it from common viral pneumonia is difficult. In this study, x-ray images of 500 patients each were collected for healthy controls, typical viral pneumonia, bacterial pneumonia, and COVID-19. To our knowledge, this was the first quaternary classification study that also included classical viral pneumonia. To efficiently capture nuances in x-ray images, a new model was created by applying a convolutional neural network for accurate image classification. Our model achieved an overall accuracy, sensitivity, specificity, F1-score, and area under the curve (AUC) of 0.98, 0.97, 0.98, 0.97, and 0.99, respectively.
Deep learning method for lung cancer identification and classification (IAESIJAI)
Lung cancer (LC) is claiming many lives and is becoming a serious cause of concern. Detecting LC at an early stage improves the chances of recovery, and the accuracy of early-stage detection can be improved with a convolutional neural network (CNN)-based deep learning approach. In this paper, we present two methodologies for lung cancer detection (LCD) applied to the lung image database consortium (LIDC) and image database resource initiative (IDRI) datasets. Classification of these LC images is carried out using a support vector machine (SVM) and a deep CNN. The CNN is trained with i) multiple batches and ii) a single batch to classify LC images as non-cancer or cancer. All these methods are implemented in MATLAB. The classification accuracy obtained by the SVM is 65%, whereas the deep CNN produced detection accuracies of 80% and 100% for multiple- and single-batch training, respectively. The novelty of our experimentation is the near-100% classification accuracy obtained by our deep CNN model when tested on 25 lung computed tomography (CT) test images, each of size 512×512 pixels, in fewer than 20 iterations, compared with the research carried out by others using cropped LC nodule images.
Recognition of Corona virus disease (COVID-19) using deep learning network (IJECEIAES)
Corona virus disease (COVID-19) has had an incredible influence over the last few months, causing thousands of deaths around the world and prompting a rapid research movement to deal with this new virus. In computer science, many technical studies have tackled it using image processing algorithms. In this work, we introduce a method based on deep learning networks to classify COVID-19 from x-ray images. Our results are encouraging enough to rely on for distinguishing infected people from healthy ones. We conduct our experiments on a recent dataset, the Kaggle COVID-19 x-ray image dataset, using the ResNet50 deep learning network with 5- and 10-fold cross-validation. The experimental results show that 5 folds give more effective results than 10 folds, with an accuracy rate of 97.28%.
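The 5- and 10-fold comparison rests on a standard k-fold split: each image appears in exactly one test fold and in the training set of all the others. A minimal index-level sketch (no shuffling or stratification, which a real run would likely add):

```python
def kfold_indices(n_samples, k):
    """Return k (train_indices, test_indices) pairs covering all samples."""
    # Distribute any remainder over the first folds so sizes differ by <= 1.
    sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n_samples) if i not in set(test)]
        folds.append((train, test))
        start += size
    return folds

folds = kfold_indices(10, 5)
print([test for _, test in folds])  # [[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]
```

With fewer folds each test fold is larger, so per-fold accuracy estimates are less noisy, which is one plausible reason a 5-fold run can report a steadier figure than a 10-fold run on the same data.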
Using Deep Learning and Transfer Learning for Pneumonia Detection (IRJET Journal)
This document presents research on using deep learning and transfer learning models to detect pneumonia from chest x-ray images. The researchers collected a dataset of over 5,000 chest x-rays and split it into training and test sets. Data augmentation techniques were used to expand the training dataset size. Several deep learning models were trained on the data including CNN, DenseNet121, VGG16, ResNet50, and Inception v3. Transfer learning was also utilized by training models pre-trained on ImageNet. The Inception v3 model achieved the highest testing accuracy of 80.29% for pneumonia detection. The researchers concluded the deep learning models could help radiologists diagnose pneumonia but that more work is needed to localize affected lung regions.
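The augmentation step can be as simple as generating mirrored and shifted copies of each training image. A minimal numpy sketch (the transforms and magnitudes are illustrative, not the paper's):

```python
import numpy as np

def augment(image):
    """Return the original image plus three simple label-preserving variants."""
    return [image,
            np.fliplr(image),            # horizontal mirror
            np.roll(image, 2, axis=0),   # small vertical shift
            np.roll(image, -2, axis=1)]  # small horizontal shift

# A tiny stand-in "image"; each source x-ray yields four training samples.
base = np.arange(16).reshape(4, 4)
print(len(augment(base)))  # 4 images from 1
```

Real pipelines usually add random rotations, zooms, and brightness changes as well; the point is only that each transform must leave the diagnostic label unchanged.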
Twin support vector machine using kernel function for colorectal cancer detec... (journalBEEI)
Nowadays, machine learning technology is needed in the medical field; therefore, this research applies machine learning to solve a medical problem. Many cases of colorectal cancer are diagnosed late: by the time colorectal cancer is detected, it is usually well developed. Machine learning, an approach within artificial intelligence, can detect colorectal cancer early. This study discusses colorectal cancer detection using the twin support vector machine (SVM) method with several kernel functions, i.e., linear, polynomial, RBF, and Gaussian kernels. By comparing accuracy and running time, we determine which method is better at classifying the colorectal cancer dataset obtained from Al-Islam Hospital, Bandung, Indonesia. The results show that the polynomial kernel has the best accuracy and running time, with a maximum twin-SVM accuracy of 86% and a running time of 0.502 seconds.
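The kernels compared in the study have compact closed forms. A sketch with illustrative hyperparameter defaults (note that the RBF and Gaussian kernels are commonly the same family, differing only in how the width parameter is written):

```python
import numpy as np

def linear(x, y):
    return np.dot(x, y)

def polynomial(x, y, degree=3, c=1.0):
    # degree and c are illustrative defaults, not the study's tuned values.
    return (np.dot(x, y) + c) ** degree

def rbf(x, y, gamma=0.5):
    # Gaussian/RBF kernel: similarity decays with squared distance.
    return np.exp(-gamma * np.sum((x - y) ** 2))

x, y = np.array([1.0, 2.0]), np.array([2.0, 0.0])
print(linear(x, y), polynomial(x, y), rbf(x, y))
```

Swapping the kernel changes how the twin SVM measures similarity between patient feature vectors without changing the rest of the training procedure, which is what makes this comparison cheap to run.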
Covid Detection Using Lung X-ray Images (IRJET Journal)
This document describes a study that used a deep learning model to detect COVID-19 in lung x-ray images. The researchers trained a VGG-16 convolutional neural network on a dataset of over 5,800 x-ray images of both COVID-19 and normal lungs. Data augmentation techniques were used to increase the size and variation of the training dataset. The model achieved 94% accuracy in distinguishing between COVID-19 and normal x-rays. This accurate and fast COVID-19 detection using deep learning could help reduce costs and diagnostic times compared to traditional testing methods.
This document presents a convolutional neural network model to detect pneumonia from chest x-ray images. The model is trained from scratch on a dataset of over 5,800 chest x-ray images categorized into pneumonia and normal images. The model uses preprocessing like resizing and normalization, data augmentation, and a custom sequential CNN model with convolutional and pooling layers to extract features and classify images. Evaluation metrics like precision, recall, accuracy and F1 score are used to analyze the trained model's performance at detecting pneumonia from chest x-rays. The proposed system aims to help diagnose pneumonia early and assist medical professionals, especially in remote areas.
Study and Analysis of Different CNN Architectures by Detecting Covid-19 and ... (IRJET Journal)
This document discusses using various CNN architectures to detect Covid-19 and pneumonia from chest X-ray images. It analyzes eight CNN models - AlexNet, DenseNet121, MobileNet, ResNet50/101, VGG16/19, and Xception - on a dataset of over 9,000 chest X-ray images categorized into normal, Covid-19, and pneumonia classes. The ResNet-50 model achieved the optimal classification accuracy. The images were preprocessed and divided into training, validation, and test sets before inputting into the CNNs. Feature extraction and model training were then performed to classify the images and present results with associated probabilities.
Pneumonia prediction on chest x-ray images using deep learning approach (IAESIJAI)
Coronavirus disease 2019 (COVID-19) is an infectious disease whose first symptoms are similar to the flu, and in many cases it causes pneumonia. Since pulmonary infections can be observed in radiography images, this paper investigates deep learning methods for automatically analyzing query chest x-ray images. In deep learning, computers can automatically identify useful features for the model directly from the raw data, bypassing the difficult step of manual feature refinement; the main strength of the deep learning method is its focus on automatically learning data representations. Visual geometry group 16 (VGG16) and DenseNet121 are the deep learning methods used. The data, chest x-rays for pneumonia, are divided into training, testing, and validation sets. The best method for this case is VGG16, with 93% training accuracy and 90% testing accuracy; DenseNet121 obtained accuracy below VGG16, with 92% training accuracy and 88% testing accuracy. Parameters have a significant influence on the accuracy of each model, and with the parameters used, VGG16 is a method with high accuracy that can be used to predict chest x-ray images for checking pneumonia in patients.
A Review Paper on Covid-19 Detection using Deep Learning (IRJET Journal)
This document reviews methods for detecting COVID-19 using deep learning techniques applied to chest X-rays and CT scans. It summarizes several research papers that have used convolutional neural networks and techniques like transfer learning to analyze medical images and accurately classify patients as COVID-19 positive or normal. The research shows these deep learning models can detect COVID-19 from images with high accuracy, even outperforming traditional PCR tests. Larger datasets are still needed to improve the models. Overall, the document concludes medical image analysis with deep learning is a promising approach for fast and effective COVID-19 detection.
Deep learning for cancer tumor classification using transfer learning and fe... (IJECEIAES)
Deep convolutional neural networks (CNNs) represent one of the state-of-the-art methods for image classification in a variety of fields. Because the number of training images in biomedical image classification is limited, transfer learning with CNNs is frequently applied. Breast cancer is one of the most common types of cancer that causes death in women, and early detection and treatment are vital for improving survival rates. In this paper, we propose a deep neural network framework based on the transfer learning concept for detecting and classifying breast cancer histopathology images. In the proposed framework, we extract features from images using three pre-trained CNN architectures, VGG-16, ResNet50, and Inception-v3, concatenate their extracted features, and then feed them into a fully connected (FC) layer to classify benign and malignant tumor cells in the histopathology images. Compared with architectures that use a single CNN and with many conventional classification methods, the proposed framework outperformed all other deep learning architectures, achieving an average accuracy of 98.76%.
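The fusion step reduces to concatenating the per-backbone feature vectors before the FC classifier. A shape-level sketch with placeholder dimensions (not the actual network sizes):

```python
import numpy as np

# Stand-in feature vectors for one image; in the real framework these would
# come from the pooled outputs of the three pre-trained backbones.
rng = np.random.default_rng(0)
f_vgg16 = rng.normal(size=512)       # placeholder for VGG-16 features
f_resnet50 = rng.normal(size=2048)   # placeholder for ResNet50 features
f_inception = rng.normal(size=2048)  # placeholder for Inception-v3 features

# The fused vector is simply the three laid end to end; only this vector
# reaches the trainable FC classification layer.
fused = np.concatenate([f_vgg16, f_resnet50, f_inception])
print(fused.shape)  # (4608,)
```

Because only the small FC layer on top of the frozen backbones is trained, this style of fusion stays feasible even on the limited biomedical datasets the abstract mentions.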
Advanced control scheme of doubly fed induction generator for wind turbine us... (IJECEIAES)
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Neural network optimizer of proportional-integral-differential controller par... (IJECEIAES)
Wide application of the proportional-integral-differential (PID) regulator in industry requires constant improvement of its parameter-tuning methods. The paper addresses the optimization of PID-regulator parameters using neural network methods. A methodology for choosing the architecture (structure) of the neural network optimizer is proposed, which consists of determining the number of layers, the number of neurons in each layer, and the form and type of activation function. Training algorithms based on minimizing the mismatch between the regulated value and the target value are developed, and backpropagation of gradients is used to select the optimal learning rate for the network's neurons. The neural network optimizer, built as a superstructure over the linear PID controller, increases regulation accuracy from 0.23 to 0.09, thus reducing power consumption from 65% to 53%. The results of the experiments suggest that the created neural superstructure may well become a prototype of an automatic voltage regulator (AVR)-type industrial controller for tuning the parameters of the PID controller.
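The controller being tuned is the standard discrete PID loop. A toy simulation on a first-order plant, with illustrative gains and plant constants rather than values from the paper:

```python
def simulate_pid(kp, ki, kd, setpoint=1.0, dt=0.01, steps=2000):
    """Run a discrete PID loop on a toy first-order plant; return final output."""
    y, integral, prev_error = 0.0, 0.0, setpoint
    for _ in range(steps):
        error = setpoint - y
        integral += error * dt
        derivative = (error - prev_error) / dt
        # The three PID terms; these gains are what an optimizer would tune.
        u = kp * error + ki * integral + kd * derivative
        prev_error = error
        # Toy first-order plant: dy/dt = (-y + u) / tau, Euler-integrated.
        y += dt * (-y + u) / 0.5
    return y

print(simulate_pid(kp=2.0, ki=1.0, kd=0.05))  # settles near the setpoint 1.0
```

A neural optimizer in the spirit of the paper would repeatedly run such a loop, measure the mismatch between output and target, and adjust kp, ki, and kd by gradient descent on that mismatch.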
An improved modulation technique suitable for a three level flying capacitor ... (IJECEIAES)
This research paper introduces an innovative modulation technique for controlling a 3-level flying capacitor multilevel inverter (FCMLI), aiming to streamline the modulation process in contrast to conventional methods. The proposed simplified modulation technique paves the way for more straightforward and efficient control of multilevel inverters, enabling their widespread adoption and integration into modern power electronic systems. Through the amalgamation of sinusoidal pulse width modulation (SPWM) with a high-frequency square wave pulse, this controlling technique attains energy equilibrium across the coupling capacitor. The modulation scheme incorporates a simplified switching pattern and a decreased count of voltage references, thereby simplifying the control algorithm.
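The SPWM component the proposed scheme builds on compares a sinusoidal reference against a high-frequency carrier to derive the switching signal. A numpy sketch with illustrative frequencies (a triangular carrier is used here; the paper's square-wave addition is not modeled):

```python
import numpy as np

# One 50 Hz fundamental cycle, sampled finely enough to resolve the carrier.
t = np.linspace(0, 0.02, 2000, endpoint=False)
reference = 0.8 * np.sin(2 * np.pi * 50 * t)   # modulation index 0.8
# Triangle carrier at 1 kHz, amplitude 1, built from arcsin(sin(.)).
carrier = 2 / np.pi * np.arcsin(np.sin(2 * np.pi * 1000 * t))
# The switch is on whenever the reference exceeds the carrier.
gate = (reference > carrier).astype(int)

# Over a full cycle the duty ratio averages about 0.5 for a symmetric reference.
print(round(gate.mean(), 2))
```

The local on-time fraction of `gate` tracks the instantaneous reference value, so low-pass filtering the switched output recovers the sinusoid; multilevel schemes like the FCMLI extend this idea across several voltage levels.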
A review on features and methods of potential fishing zone (IJECEIAES)
This review focuses on the importance of identifying potential fishing zones in seawater for sustainable fishing practices. It explores features such as sea surface temperature (SST) and sea surface height (SSH), along with the classification methods applied to them. The study underscores the importance of examining potential fishing zones using advanced analytical techniques and thoroughly explores the methodologies employed by researchers, covering both past and current approaches. The examination centers on data characteristics and the application of classification algorithms to identify potential fishing zones, whose prediction relies significantly on the effectiveness of those algorithms. Previous research has assessed the performance of models such as support vector machines (SVM), naïve Bayes, and artificial neural networks (ANN); in one reported result, SVM classified fisheries test data with 97.6% accuracy, compared with 94.2% for naïve Bayes. By considering recent work in this area, several recommendations for future work are presented to further improve the performance of potential fishing zone models, which is important to the fisheries community.
Electrical signal interference minimization using appropriate core material f... (IJECEIAES)
As demand for smaller, quicker, and more powerful devices rises, Moore's law is strictly followed. The industry has worked hard to make small devices that boost productivity, with the goal of optimizing device density. Scientists are reducing connection delays to improve circuit performance, which led to three-dimensional integrated circuit (3D IC) concepts that stack active devices and create vertical connections to diminish latency and shorten interconnects. Electrical noise coupling is a major concern in 3D integrated circuits. Researchers have developed and tested through-silicon vias (TSVs) and substrates to decrease this coupling. This study illustrates a novel noise-coupling reduction method using several electrical coupling models. A 22% drop in coupling from the aggressor (wave-carrying) TSVs to the victim TSVs introduces this new paradigm and improves system performance even at higher THz frequencies.
Electric vehicle and photovoltaic advanced roles in enhancing the financial p... (IJECEIAES)
Climate change's impact on the planet has forced the United Nations and governments to promote green energies and electric transportation. Deployments of photovoltaic (PV) and electric vehicle (EV) systems have gained momentum due to their numerous advantages over fossil fuels, advantages that go beyond sustainability to financial support and stability. The work in this paper introduces a hybrid PV-EV system to support industrial and commercial plants. The paper covers the theoretical framework of the proposed hybrid system, including the equations required to complete the cost analysis when PV and EV are present, and presents the proposed design diagram, which sets the priorities and requirements of the system. The proposed approach allows a setup to improve its power stability, especially during power outages. The presented information supports researchers and plant owners in completing the necessary analysis while promoting the deployment of clean energy. A case study representing a dairy milk farmer supports the theoretical work and highlights the approach's benefits to existing plants. The short return on investment supports the paper's novel approach to a sustainable electrical system. In addition, the proposed system allows for an isolated power setup without the need for a transmission line, which enhances the safety of the electrical network.
Bibliometric analysis highlighting the role of women in addressing climate ch...IJECEIAES
Fossil fuel consumption increased quickly, contributing to climate change
that is evident in unusual flooding and draughts, and global warming. Over
the past ten years, women's involvement in society has grown dramatically,
and they succeeded in playing a noticeable role in reducing climate change.
A bibliometric analysis of data from the last ten years has been carried out to
examine the role of women in addressing the climate change. The analysis's
findings discussed the relevant to the sustainable development goals (SDGs),
particularly SDG 7 and SDG 13. The results considered contributions made
by women in the various sectors while taking geographic dispersion into
account. The bibliometric analysis delves into topics including women's
leadership in environmental groups, their involvement in policymaking, their
contributions to sustainable development projects, and the influence of
gender diversity on attempts to mitigate climate change. This study's results
highlight how women have influenced policies and actions related to climate
change, point out areas of research deficiency and recommendations on how
to increase role of the women in addressing the climate change and
achieving sustainability. To achieve more successful results, this initiative
aims to highlight the significance of gender equality and encourage
inclusivity in climate change decision-making processes.
Voltage and frequency control of microgrid in presence of micro-turbine inter...IJECEIAES
The active and reactive load changes have a significant impact on voltage
and frequency. In this paper, in order to stabilize the microgrid (MG) against
load variations in islanding mode, the active and reactive power of all
distributed generators (DGs), including energy storage (battery), diesel
generator, and micro-turbine, are controlled. The micro-turbine generator is
connected to MG through a three-phase to three-phase matrix converter, and
the droop control method is applied for controlling the voltage and
frequency of MG. In addition, a method is introduced for voltage and
frequency control of micro-turbines in the transition state from gridconnected mode to islanding mode. A novel switching strategy of the matrix
converter is used for converting the high-frequency output voltage of the
micro-turbine to the grid-side frequency of the utility system. Moreover,
using the switching strategy, the low-order harmonics in the output current
and voltage are not produced, and consequently, the size of the output filter
would be reduced. In fact, the suggested control strategy is load-independent
and has no frequency conversion restrictions. The proposed approach for
voltage and frequency regulation demonstrates exceptional performance and
favorable response across various load alteration scenarios. The suggested
strategy is examined in several scenarios in the MG test systems, and the
simulation results are addressed.
Enhancing battery system identification: nonlinear autoregressive modeling fo...IJECEIAES
Precisely characterizing Li-ion batteries is essential for optimizing their
performance, enhancing safety, and prolonging their lifespan across various
applications, such as electric vehicles and renewable energy systems. This
article introduces an innovative nonlinear methodology for system
identification of a Li-ion battery, employing a nonlinear autoregressive with
exogenous inputs (NARX) model. The proposed approach integrates the
benefits of nonlinear modeling with the adaptability of the NARX structure,
facilitating a more comprehensive representation of the intricate
electrochemical processes within the battery. Experimental data collected
from a Li-ion battery operating under diverse scenarios are employed to
validate the effectiveness of the proposed methodology. The identified
NARX model exhibits superior accuracy in predicting the battery's behavior
compared to traditional linear models. This study underscores the
importance of accounting for nonlinearities in battery modeling, providing
insights into the intricate relationships between state-of-charge, voltage, and
current under dynamic conditions.
Smart grid deployment: from a bibliometric analysis to a surveyIJECEIAES
Smart grids are one of the last decades' innovations in electrical energy.
They bring relevant advantages compared to the traditional grid and
significant interest from the research community. Assessing the field's
evolution is essential to propose guidelines for facing new and future smart
grid challenges. In addition, knowing the main technologies involved in the
deployment of smart grids (SGs) is important to highlight possible
shortcomings that can be mitigated by developing new tools. This paper
contributes to the research trends mentioned above by focusing on two
objectives. First, a bibliometric analysis is presented to give an overview of
the current research level about smart grid deployment. Second, a survey of
the main technological approaches used for smart grid implementation and
their contributions are highlighted. To that effect, we searched the Web of
Science (WoS), and the Scopus databases. We obtained 5,663 documents
from WoS and 7,215 from Scopus on smart grid implementation or
deployment. With the extraction limitation in the Scopus database, 5,872 of
the 7,215 documents were extracted using a multi-step process. These two
datasets have been analyzed using a bibliometric tool called bibliometrix.
The main outputs are presented with some recommendations for future
research.
Use of analytical hierarchy process for selecting and prioritizing islanding ...IJECEIAES
One of the problems that are associated to power systems is islanding
condition, which must be rapidly and properly detected to prevent any
negative consequences on the system's protection, stability, and security.
This paper offers a thorough overview of several islanding detection
strategies, which are divided into two categories: classic approaches,
including local and remote approaches, and modern techniques, including
techniques based on signal processing and computational intelligence.
Additionally, each approach is compared and assessed based on several
factors, including implementation costs, non-detected zones, declining
power quality, and response times using the analytical hierarchy process
(AHP). The multi-criteria decision-making analysis shows that the overall
weight of passive methods (24.7%), active methods (7.8%), hybrid methods
(5.6%), remote methods (14.5%), signal processing-based methods (26.6%),
and computational intelligent-based methods (20.8%) based on the
comparison of all criteria together. Thus, it can be seen from the total weight
that hybrid approaches are the least suitable to be chosen, while signal
processing-based methods are the most appropriate islanding detection
method to be selected and implemented in power system with respect to the
aforementioned factors. Using Expert Choice software, the proposed
hierarchy model is studied and examined.
Enhancing of single-stage grid-connected photovoltaic system using fuzzy logi...IJECEIAES
The power generated by photovoltaic (PV) systems is influenced by
environmental factors. This variability hampers the control and utilization of
solar cells' peak output. In this study, a single-stage grid-connected PV
system is designed to enhance power quality. Our approach employs fuzzy
logic in the direct power control (DPC) of a three-phase voltage source
inverter (VSI), enabling seamless integration of the PV connected to the
grid. Additionally, a fuzzy logic-based maximum power point tracking
(MPPT) controller is adopted, which outperforms traditional methods like
incremental conductance (INC) in enhancing solar cell efficiency and
minimizing the response time. Moreover, the inverter's real-time active and
reactive power is directly managed to achieve a unity power factor (UPF).
The system's performance is assessed through MATLAB/Simulink
implementation, showing marked improvement over conventional methods,
particularly in steady-state and varying weather conditions. For solar
irradiances of 500 and 1,000 W/m2
, the results show that the proposed
method reduces the total harmonic distortion (THD) of the injected current
to the grid by approximately 46% and 38% compared to conventional
methods, respectively. Furthermore, we compare the simulation results with
IEEE standards to evaluate the system's grid compatibility.
Enhancing photovoltaic system maximum power point tracking with fuzzy logic-b...IJECEIAES
Photovoltaic systems have emerged as a promising energy resource that
caters to the future needs of society, owing to their renewable, inexhaustible,
and cost-free nature. The power output of these systems relies on solar cell
radiation and temperature. In order to mitigate the dependence on
atmospheric conditions and enhance power tracking, a conventional
approach has been improved by integrating various methods. To optimize
the generation of electricity from solar systems, the maximum power point
tracking (MPPT) technique is employed. To overcome limitations such as
steady-state voltage oscillations and improve transient response, two
traditional MPPT methods, namely fuzzy logic controller (FLC) and perturb
and observe (P&O), have been modified. This research paper aims to
simulate and validate the step size of the proposed modified P&O and FLC
techniques within the MPPT algorithm using MATLAB/Simulink for
efficient power tracking in photovoltaic systems.
Adaptive synchronous sliding control for a robot manipulator based on neural ...IJECEIAES
Robot manipulators have become important equipment in production lines, medical fields, and transportation. Improving the quality of trajectory tracking for
robot hands is always an attractive topic in the research community. This is a
challenging problem because robot manipulators are complex nonlinear systems
and are often subject to fluctuations in loads and external disturbances. This
article proposes an adaptive synchronous sliding control scheme to improve trajectory tracking performance for a robot manipulator. The proposed controller
ensures that the positions of the joints track the desired trajectory, synchronize
the errors, and significantly reduces chattering. First, the synchronous tracking
errors and synchronous sliding surfaces are presented. Second, the synchronous
tracking error dynamics are determined. Third, a robust adaptive control law is
designed,the unknown components of the model are estimated online by the neural network, and the parameters of the switching elements are selected by fuzzy
logic. The built algorithm ensures that the tracking and approximation errors
are ultimately uniformly bounded (UUB). Finally, the effectiveness of the constructed algorithm is demonstrated through simulation and experimental results.
Simulation and experimental results show that the proposed controller is effective with small synchronous tracking errors, and the chattering phenomenon is
significantly reduced.
Remote field-programmable gate array laboratory for signal acquisition and de...IJECEIAES
A remote laboratory utilizing field-programmable gate array (FPGA) technologies enhances students’ learning experience anywhere and anytime in embedded system design. Existing remote laboratories prioritize hardware access and visual feedback for observing board behavior after programming, neglecting comprehensive debugging tools to resolve errors that require internal signal acquisition. This paper proposes a novel remote embeddedsystem design approach targeting FPGA technologies that are fully interactive via a web-based platform. Our solution provides FPGA board access and debugging capabilities beyond the visual feedback provided by existing remote laboratories. We implemented a lab module that allows users to seamlessly incorporate into their FPGA design. The module minimizes hardware resource utilization while enabling the acquisition of a large number of data samples from the signal during the experiments by adaptively compressing the signal prior to data transmission. The results demonstrate an average compression ratio of 2.90 across three benchmark signals, indicating efficient signal acquisition and effective debugging and analysis. This method allows users to acquire more data samples than conventional methods. The proposed lab allows students to remotely test and debug their designs, bridging the gap between theory and practice in embedded system design.
Detecting and resolving feature envy through automated machine learning and m...IJECEIAES
Efficiently identifying and resolving code smells enhances software project quality. This paper presents a novel solution, utilizing automated machine learning (AutoML) techniques, to detect code smells and apply move method refactoring. By evaluating code metrics before and after refactoring, we assessed its impact on coupling, complexity, and cohesion. Key contributions of this research include a unique dataset for code smell classification and the development of models using AutoGluon for optimal performance. Furthermore, the study identifies the top 20 influential features in classifying feature envy, a well-known code smell, stemming from excessive reliance on external classes. We also explored how move method refactoring addresses feature envy, revealing reduced coupling and complexity, and improved cohesion, ultimately enhancing code quality. In summary, this research offers an empirical, data-driven approach, integrating AutoML and move method refactoring to optimize software project quality. Insights gained shed light on the benefits of refactoring on code quality and the significance of specific features in detecting feature envy. Future research can expand to explore additional refactoring techniques and a broader range of code metrics, advancing software engineering practices and standards.
Smart monitoring technique for solar cell systems using internet of things ba...IJECEIAES
Rapidly and remotely monitoring and receiving the solar cell systems status parameters, solar irradiance, temperature, and humidity, are critical issues in enhancement their efficiency. Hence, in the present article an improved smart prototype of internet of things (IoT) technique based on embedded system through NodeMCU ESP8266 (ESP-12E) was carried out experimentally. Three different regions at Egypt; Luxor, Cairo, and El-Beheira cities were chosen to study their solar irradiance profile, temperature, and humidity by the proposed IoT system. The monitoring data of solar irradiance, temperature, and humidity were live visualized directly by Ubidots through hypertext transfer protocol (HTTP) protocol. The measured solar power radiation in Luxor, Cairo, and El-Beheira ranged between 216-1000, 245-958, and 187-692 W/m 2 respectively during the solar day. The accuracy and rapidity of obtaining monitoring results using the proposed IoT system made it a strong candidate for application in monitoring solar cell systems. On the other hand, the obtained solar power radiation results of the three considered regions strongly candidate Luxor and Cairo as suitable places to build up a solar cells system station rather than El-Beheira.
An efficient security framework for intrusion detection and prevention in int...IJECEIAES
Over the past few years, the internet of things (IoT) has advanced to connect billions of smart devices to improve quality of life. However, anomalies or malicious intrusions pose several security loopholes, leading to performance degradation and threat to data security in IoT operations. Thereby, IoT security systems must keep an eye on and restrict unwanted events from occurring in the IoT network. Recently, various technical solutions based on machine learning (ML) models have been derived towards identifying and restricting unwanted events in IoT. However, most ML-based approaches are prone to miss-classification due to inappropriate feature selection. Additionally, most ML approaches applied to intrusion detection and prevention consider supervised learning, which requires a large amount of labeled data to be trained. Consequently, such complex datasets are impossible to source in a large network like IoT. To address this problem, this proposed study introduces an efficient learning mechanism to strengthen the IoT security aspects. The proposed algorithm incorporates supervised and unsupervised approaches to improve the learning models for intrusion detection and mitigation. Compared with the related works, the experimental outcome shows that the model performs well in a benchmark dataset. It accomplishes an improved detection accuracy of approximately 99.21%.
ACEP Magazine edition 4th launched on 05.06.2024Rahul
This document provides information about the third edition of the magazine "Sthapatya" published by the Association of Civil Engineers (Practicing) Aurangabad. It includes messages from current and past presidents of ACEP, memories and photos from past ACEP events, information on life time achievement awards given by ACEP, and a technical article on concrete maintenance, repairs and strengthening. The document highlights activities of ACEP and provides a technical educational article for members.
Harnessing WebAssembly for Real-time Stateless Streaming PipelinesChristina Lin
Traditionally, dealing with real-time data pipelines has involved significant overhead, even for straightforward tasks like data transformation or masking. However, in this talk, we’ll venture into the dynamic realm of WebAssembly (WASM) and discover how it can revolutionize the creation of stateless streaming pipelines within a Kafka (Redpanda) broker. These pipelines are adept at managing low-latency, high-data-volume scenarios.
Introduction- e - waste – definition - sources of e-waste– hazardous substances in e-waste - effects of e-waste on environment and human health- need for e-waste management– e-waste handling rules - waste minimization techniques for managing e-waste – recycling of e-waste - disposal treatment methods of e- waste – mechanism of extraction of precious metal from leaching solution-global Scenario of E-waste – E-waste in India- case studies.
International Conference on NLP, Artificial Intelligence, Machine Learning an...gerogepatton
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland, Spykman Rimland, and Hegemonic Stability theories, examines China's role
in Central Asia. This study adheres to the empirical epistemological method and has taken care of
objectivity. This study analyze primary and secondary research documents critically to elaborate role of
china’s geo economic outreach in central Asian countries and its future prospect. China is thriving in trade,
pipeline politics, and winning states, according to this study, thanks to important instruments like the
Shanghai Cooperation Organisation and the Belt and Road Economic Initiative. According to this study,
China is seeing significant success in commerce, pipeline politics, and gaining influence on other
governments. This success may be attributed to the effective utilisation of key tools such as the Shanghai
Cooperation Organisation and the Belt and Road Economic Initiative.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
Use PyCharm for remote debugging of WSL on a Windo cf5c162d672e4e58b4dde5d797...shadow0702a
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes on the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
ISPM 15 Heat Treated Wood Stamps and why your shipping must have one
Realistic image synthesis of COVID-19 chest X-rays using depthwise boundary equilibrium generative adversarial networks
International Journal of Electrical and Computer Engineering (IJECE)
Vol. 12, No. 5, October 2022, pp. 5444~5454
ISSN: 2088-8708, DOI: 10.11591/ijece.v12i5.pp5444-5454
Journal homepage: http://ijece.iaescore.com
Realistic image synthesis of COVID-19 chest X-rays using
depthwise boundary equilibrium generative adversarial
networks
Zendi Iklima, Trie Maya Kadarina, Eko Ihsanto
Department of Electrical Engineering, Faculty of Engineering, Universitas Mercu Buana, Jakarta, Indonesia
Article Info ABSTRACT
Article history:
Received May 30, 2021
Revised Jun 8, 2022
Accepted Jun 25, 2022
Researchers across related fields are working to prevent and control the spread of coronavirus disease (COVID-19). The virus is spreading exponentially and infecting humans on a massive scale. Preliminary detection can be performed by observing abnormal conditions in the airways through which the virus enters the patient's respiratory tract; these conditions can be visualized using computed tomography (CT) scans and chest X-ray (CXR) imaging. Several deep learning approaches have been developed to classify COVID-19 CT or CXR images, such as the convolutional neural network (CNN) and the deep convolutional neural network (DCNN). However, openly accessible COVID-19 CXR datasets remain scarce. The performance of a deep learning method can be improved by augmenting the amount of training data. Therefore, the COVID-19 CXR dataset can be augmented by generating synthetic images. This study discusses a fast, realistic image synthesis approach, namely the depthwise boundary equilibrium generative adversarial network (DepthwiseBEGAN). DepthwiseBEGAN reduced the memory load during training by 70.11% compared to the conventional BEGAN. The DepthwiseBEGAN synthetic images were inspected by measuring the Fréchet inception distance (FID), with a real-to-real score of 4.3866 and a real-to-fake score of 4.4674. Moreover, the generated DepthwiseBEGAN synthetic images improve the accuracy of conventional CNN models by 22.59%.
Keywords:
Chest X-ray
Convolutional neural network
COVID-19 virus
DepthwiseBEGAN
Image synthesis
This is an open access article under the CC BY-SA license.
Corresponding Author:
Zendi Iklima
Department of Electrical Engineering, Universitas Mercu Buana
Jakarta, Indonesia
Email: zendi.iklima@mercubuana.ac.id
1. INTRODUCTION
SARS-CoV-2, the cause of coronavirus disease (COVID-19), was first identified in Wuhan, China [1] and designated by the World Health Organization (WHO) as a global pandemic [2] because it has infected all corners of the world. At the end of February 2021, there were 116,521,281 confirmed cases, with 116,521,281 active cases and 2,589,548 deaths. Meanwhile, the spread of COVID-19 in Indonesia also continues to increase, with 1,379,662 confirmed cases, 14,518 active cases, and 37,266 deaths as of February 2021 [3]. Therefore, steps are needed to prevent, detect, and control the spread of the COVID-19 virus.
Early detection is one way to break the chain of COVID-19 transmission. When a patient is confirmed positive for COVID-19, they undergo a quarantine period so that the chain of spread can be broken by tracing the people who have interacted with the patient. One of the tests that can be performed for early
detection of the COVID-19 virus is a test called reverse transcription-polymerase chain reaction (RT-PCR), which determines whether a patient is positive or negative for COVID-19 infection. As time goes by, the pandemic persists and the rate of spread remains high. Unfortunately, RT-PCR testing yields less accurate results (40% to 60%) [4], [5] in determining the positive or negative status of COVID-19 infection [6], [7].
Alternative methods to detect COVID-19 infection are chest screening modalities, namely computed tomography (CT) scans and chest X-rays (CXR) [8]. The resulting CT or X-ray images have higher sensitivity than RT-PCR testing. Thus, many automated systems have been developed for CT and X-ray image processing [6]. CT imaging can detect COVID-19 infection; however, CT testing is costly, age-restricted, and forbidden for pregnant women because of the radiation. Therefore, studies such as [9] use CXR, which allows easy, fast, and inexpensive testing. Various deep learning methods are used as automated CXR image processing systems to support the detection of COVID-19 infection. The convolutional neural network (CNN) method obtained an accuracy of 98.50% in positive/negative CXR classification [10], while the deep neural network (DNN) method achieved 98.08% [4]. CXR classification using a deep CNN (DCNN) has an accuracy of 87.3% [11], while classification using generative adversarial networks (GANs) reaches 95% [12]. The depthwise separable convolution (DSC) network has an accuracy of 99.50% [13], and COVIDX-Net has an accuracy of 91% [14]. On the other hand, public CT and X-ray image datasets are limited. However, the classification method can be improved by augmenting the dataset.
The deep learning approach is an interesting avenue for developing an automated system to diagnose COVID-19 from CXR images. LightCovidNet, which consists of a lightweight CNN (LW-CNN) and GANs trained on a frontal CXR dataset of 446 images (1024×1024 pixels) with network filters totaling 841,771 parameters, successfully achieved an accuracy of 96.97%. The separable convolution technique can reduce the memory load during training 27 times more effectively than a conventional CNN, which consists of 23,567,299 parameters [15]. CovidGAN, which combines CNN and auxiliary classifier GAN (AC-GAN) methods using 403 CXR images (14,000,000 parameters), increases the accuracy of conventional GAN data augmentation from 85% to 95% [12]. Covid-Net, using the DCNN method with 13,975 CXR images (the CovidX dataset, 11,750,000 parameters), yields an accuracy of 93.3% [16]. Coro-Net, using the DNN method with 125 CXR images (33,915,436 parameters), yields an accuracy of 95% [17]. RANDGAN (randomized GAN), an ANO-GAN variant using 573 CXR images, results in an accuracy of 71% [5]. GANs with ResNet18 used 5,863 CXR images, resulting in an accuracy of 99% [18].
To improve the performance of classification models, we propose a new architecture called DepthwiseBEGAN, which combines depthwise separable convolution (DSC) and BEGAN. This approach augments the COVID-19 CXR dataset with synthetic images generated using DCGAN, DepthwiseGAN, BEGAN, and DepthwiseBEGAN, and shows that DepthwiseBEGAN reduces the training load while the synthetic images are generated. In this research, we also measured the quality of the generated images using the Fréchet inception distance (FID). Additionally, this paper presents the improvement of classification methods that use the generated synthetic images as fake CXR data. Several classification models are used, such as ResNet18, ResNet34, ResNet50, and GoogleNet.
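The FID metric used above compares the Gaussian statistics (mean and covariance) of two sets of feature embeddings: FID = ||μ_r − μ_g||² + Tr(Σ_r + Σ_g − 2(Σ_r Σ_g)^{1/2}). As an illustrative sketch (not the authors' implementation), the following NumPy code computes FID on random stand-in features; in practice the inputs would be Inception-network embeddings of real and synthetic CXR images:

```python
import numpy as np

def _sqrtm_psd(m):
    # Principal square root of a symmetric positive semi-definite matrix
    # via eigendecomposition (clipping tiny negative eigenvalues).
    vals, vecs = np.linalg.eigh(m)
    return (vecs * np.sqrt(np.clip(vals, 0.0, None))) @ vecs.T

def fid(feats_a, feats_b):
    """Frechet inception distance between two sets of feature vectors.
    Uses Tr((Sa Sb)^1/2) = Tr((Sa^1/2 Sb Sa^1/2)^1/2) to stay symmetric."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    s = _sqrtm_psd(cov_a)
    tr_sqrt = np.trace(_sqrtm_psd(s @ cov_b @ s))
    return float(np.sum((mu_a - mu_b) ** 2)
                 + np.trace(cov_a + cov_b) - 2.0 * tr_sqrt)

rng = np.random.default_rng(0)
real = rng.normal(size=(256, 16))        # stand-in for features of real CXRs
fake = rng.normal(size=(256, 16)) + 0.1  # stand-in for features of synthetic CXRs
print(abs(fid(real, real)) < 1e-6)  # FID of a set against itself is ~0
print(fid(real, fake) > 0)          # shifted distribution gives a positive FID
```

A lower FID indicates that the synthetic distribution is closer to the real one, which is why the paper's real-to-fake score (4.4674) being close to the real-to-real score (4.3866) supports the realism of the generated images.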
2. RESEARCH METHOD
2.1. Depthwise separable convolution
CNN is a subclass of DNNs that can solve vision problems. CNN consists of the primary process,
namely features extraction and fully connected layer. The convolutional layer is the fundamental layer of
CNN that determines the characteristics of the image pattern as an input matrix to traverse through filters.
Assume $x^l$ is an input tensor with a triple index of height ($h^l$), width ($w^l$), and depth ($d^l$). The spatial location $(h^l, w^l)$ is traversed by a filter bank $f$, and $d^l$ is a receptive field in $x^l$. Therefore, the output of a CNN layer can be denoted as [18], [19]:

$$y_{w^{l+1},\, h^{l+1},\, d} = \sum_{w=0}^{W} \sum_{h=0}^{H} \sum_{d=0}^{D} f_{w,h,d} \times x^{l}_{h^{l+1}+h,\, w^{l+1}+w,\, d} \quad (1)$$
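As a concrete check of the triple summation in (1), the sketch below evaluates a single-filter valid convolution by direct looping. The tensor shapes and names are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def conv_output(x: np.ndarray, f: np.ndarray) -> np.ndarray:
    """Direct evaluation of Eq. (1): a single-filter valid convolution
    of an H x W x D input tensor x with a kH x kW x D filter bank f."""
    H, W, D = x.shape
    kH, kW, _ = f.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):          # h^{l+1}
        for j in range(out.shape[1]):      # w^{l+1}
            # triple sum over filter height, width, and depth
            out[i, j] = np.sum(f * x[i:i + kH, j:j + kW, :])
    return out

x = np.ones((5, 5, 3))
y = conv_output(x, np.ones((3, 3, 3)))
print(y.shape, y[0, 0])  # (3, 3) 27.0
```

Each output element sums $3 \times 3 \times 3 = 27$ products, matching the all-ones example.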
where $D_f \times D_f$ is the input matrix per $M$ channels; therefore, the total number of parameters in a kernel is formulated as (2):

$$D_k^2 \times D_c^2 \times M \times N \quad (2)$$
CNN models with high-resolution images require more memory allocation due to the number of convolutional parameters in the kernel that must be calculated as vectors. Therefore, several CNN models can be simplified by reducing the number of trainable convolutional parameters.

ISSN: 2088-8708
Int J Elec & Comp Eng, Vol. 12, No. 5, October 2022: 5444-5454

DSC is a model that effectively reduces the number of convolutional parameters and matrix calculations at a limited cost in precision. A conventional CNN applies each convolutional kernel across all input channels, so the matrix calculation is carried out per $N$ output channels, with the total number of parameters shown in (2) [20]. DSC splits this into two convolution processes, namely depthwise convolution and pointwise convolution, which distribute feature learning [2]. Based on (1), DSC is formulated as [18], [19], [21]:
$$y_{w^{l+1},\, h^{l+1},\, d} = \sum_{d=0}^{D} f_d \times \sum_{h=0}^{H} \sum_{w=0}^{W} f_{w,h} \times x^{l}_{h^{l+1}+h,\, w^{l+1}+w} \quad (3)$$
where $f_d$ is a pointwise $1 \times 1$ convolution layer. Figure 1 represents the DSCN parameters utilized for computing the parameter load of each training process.
Figure 1. DSCN parameters in CXR of COVID-19
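The two-stage factorization in (3) can be sketched directly in NumPy: a per-channel (depthwise) valid convolution followed by a $1 \times 1$ (pointwise) mix across channels. The tensor shapes below are illustrative assumptions:

```python
import numpy as np

def depthwise_separable(x: np.ndarray, f_dw: np.ndarray,
                        f_pw: np.ndarray) -> np.ndarray:
    """Eq. (3) as two stages: depthwise filtering per input channel,
    then a pointwise (1x1) combination mapping D channels to N."""
    H, W, D = x.shape
    kH, kW, _ = f_dw.shape
    dw = np.zeros((H - kH + 1, W - kW + 1, D))
    for d in range(D):                       # depthwise: one filter per channel
        for i in range(dw.shape[0]):
            for j in range(dw.shape[1]):
                dw[i, j, d] = np.sum(f_dw[:, :, d] * x[i:i + kH, j:j + kW, d])
    return dw @ f_pw                         # pointwise: (D -> N) channel mix

x = np.ones((5, 5, 3))
y = depthwise_separable(x, np.ones((3, 3, 3)), np.ones((3, 4)))
print(y.shape)  # (3, 3, 4)
```

Filtering and channel mixing are decoupled, which is exactly where the parameter savings of (4) come from.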
The number of parameters in the DSCN, i.e., the sum of the parameters in the depthwise convolution and the pointwise convolution, is denoted in (4). Compared to (2), the ratio of the number of DSCN parameters to that of a standard CNN is shown in (5) [22]. For example, if a convolution has $N = 1024$ and $D_k = 3$, the convolution parameters in the training process are reduced to a factor of 0.112, i.e., by 0.888, which means DSC is able to reduce the training load compared to a conventional CNN.
$$(D_c^2 \times M)(D_k^2 + N) \quad (4)$$

$$\frac{p_{DSCN}}{p_{CNN}} = \frac{(D_c^2 \times M)(D_k^2 + N)}{D_k^2 \times D_c^2 \times M \times N} = \frac{D_k^2 + N}{D_k^2 \times N} = \frac{1}{N} + \frac{1}{D_k^2} \quad (5)$$
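The worked example above can be verified with a one-line helper implementing the ratio in (5):

```python
def dsc_parameter_ratio(dk: int, n: int) -> float:
    """Ratio of DSC parameters to standard convolution parameters,
    p_DSCN / p_CNN = 1/N + 1/Dk^2, following Eq. (5)."""
    return 1.0 / n + 1.0 / (dk * dk)

# Worked example from the text: N = 1024 output channels, Dk = 3 kernel.
ratio = dsc_parameter_ratio(dk=3, n=1024)
print(round(ratio, 3))  # 0.112, i.e. an ~88.8% parameter reduction
```

The spatial term $D_c^2 \times M$ cancels in the ratio, so the saving depends only on the kernel size and the number of output channels.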
2.2. Deep convolution generative adversarial network
GANs were introduced in 2014 by Goodfellow which states that GANs consist of two networks
namely generator network (𝐺) and discriminator network (𝐷). Both models are trained using the mini-max
concept. Generator model 𝐺(𝑥; 𝜃𝑔), can train noise data label on 𝑃𝑧(𝑧) distribution data against x label or
real data label. Discriminator model 𝐷(𝑥; 𝜃𝑑), trains the 𝑃
𝑔 distribution data be able to estimate the
distribution data 𝑃𝐷𝑎𝑡𝑎(𝑥) [23]. The data distribution 𝑃𝐷𝑎𝑡𝑎(𝑥) is a positive image of CXR COVID-19 [24].
Generator model 𝐺(𝑧; 𝜃𝑔), minimizes the probability data distribution in the fake dataset 𝑧~𝑃𝑧 that
formulated as (6) [25]:
$$\min_G V(G) = \mathbb{E}_{z \sim P_z}\left[\log\left(1 - D(G(z))\right)\right] \quad (6)$$
Equation (6) shows that the generator network randomizes the noise data distribution $P_z(z)$ to fool the discriminator network, which is labelled with the data distribution $x \sim P_{Data}(x)$. Thus, the discriminator model $D(x; \theta_d)$ maximizes the probability of the data distribution $P_{Data}(x)$, formulated as (7) [25]:
$$\max_D V(D) = \mathbb{E}_{x \sim P_{data}(x)}\left[\log\left(D(x)\right)\right] \quad (7)$$
Int J Elec & Comp Eng ISSN: 2088-8708
Realistic image synthesis of COVID-19 chest X-rays using depthwise … (Zendi Iklima)
Therefore, the GANs mini-max term based on (6) and (7) can be formulated as (8) [25]:

$$\mathcal{L}_{adv} = \min_G \max_D V(G, D) = \mathbb{E}_{x \sim P_{data}(x)}\left[\log\left(D_{src}(x)\right)\right] + \mathbb{E}_{z \sim P_z}\left[\log\left(1 - D_{src}(G(z))\right)\right] \quad (8)$$
where $\mathbb{E}(\cdot)$ denotes the network expectation given by the generator and discriminator networks, $V(G, D)$ is the training criterion of the discriminator network given the generator network, and $D: x \rightarrow \{D_{src}(x), D_{cls}(x)\}$ denotes the discriminator probability distributions over both the source and its labels. Both the discriminator network $D(\cdot)$ and the generator network $G(\cdot)$ can be optimized using the objective functions formulated as [23]:
$$\mathcal{L}_{cls}^{r} = \mathbb{E}_{x \sim P_{data}(x)}\left[-\log D_{cls}(x)\right] \quad (9)$$

$$\mathcal{L}_{cls}^{f} = \mathbb{E}_{z \sim P_z}\left[-\log D_{cls}(G(z))\right] \quad (10)$$

$$\mathcal{L}_{rec} = \mathbb{E}_{x \sim P_z}\left[\left\| x - G_x(G(z)) \right\|_1\right] \quad (11)$$
where $\lambda_{cls}$ denotes the hyperparameter that weights the domain classification losses of the discriminator ($\mathcal{L}_{cls}^{r}$) and the generator ($\mathcal{L}_{cls}^{f}$), and $\lambda_{rec}$ denotes the hyperparameter that weights the reconstruction loss ($\mathcal{L}_{rec}$), which adopts $L_1$ normalization [23]. $\mathcal{L}_{rec}$ translates $G(z)$ into $x \sim P_z$, which means that the generator $G_x(\cdot)$ tries to reconstruct fake labels into real labels.
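As a toy illustration of the mini-max value in (8), both expectation terms can be estimated by Monte-Carlo averaging over discriminator outputs. The helper below is an illustrative sketch under assumed names, not the paper's implementation:

```python
import numpy as np

def adversarial_value(d_real: np.ndarray, d_fake: np.ndarray) -> float:
    """Monte-Carlo estimate of V(G, D) in Eq. (8):
    E_x[log D(x)] + E_z[log(1 - D(G(z)))],
    where d_real = D(x) and d_fake = D(G(z)) are probabilities."""
    return float(np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake)))

# A fully fooled discriminator outputs 0.5 everywhere, giving
# V = log(0.5) + log(0.5) = -2 log 2, the theoretical equilibrium value.
v = adversarial_value(np.full(4, 0.5), np.full(4, 0.5))
print(round(v, 3))  # -1.386
```

The discriminator pushes this value up toward 0, while the generator pushes the second term down, which is the mini-max tension the text describes.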
2.3. Depthwise boundary equilibrium GAN
Figure 2 shows the DepthwiseBEGAN architecture given an input image of shape (32, 32, 3). DSConv is a depthwise separable convolution layer, which contains a depthwise layer and a pointwise layer. The down-sample stage transforms the given 32×32 input into 4×4 and 8×8 feature maps.
Figure 2. Architecture of DepthwiseBEGAN of COVID-19 CXR images
The equilibrium term balances the auto-encoding of the real dataset $(x; \theta_d)$ and the discrimination of $G(z; \theta_g)$, equalized as (12) [26]:

$$\mathbb{E}\left[\mathcal{L}(G(z))\right] = \gamma \, \mathbb{E}\left[\mathcal{L}(x)\right] \quad (12)$$
where $\gamma$ denotes the diversity ratio, which maintains the equilibrium using proportional control theory ($k_t \in [0, 1]$). Based on (12), the boundary equilibrium GAN (BEGAN) is represented by the objective functions (13)-(15) [27], [28]:

$$\mathcal{L}_D = \mathcal{L}(x) - k_t \cdot \mathcal{L}(G(z_D)) \quad (13)$$

$$\mathcal{L}_G = \mathcal{L}(G(z_G)) \quad (14)$$

$$k_{t+1} = k_t + \lambda_k\left(\gamma \mathcal{L}(x) - \mathcal{L}(G(z_G))\right) \quad (15)$$
where $\lambda_k$ is a proportional gain for $k$, $\mathcal{L}(x) = \mathbb{E}_{x \sim P_{data}(x)}\left[\left\| D_{cls}(x) - x \right\|_1\right]$ denotes the $L_1$ auto-encoder loss of the data distribution $x \sim P_{Data}(x)$, and $\mathcal{L}(G(z)) = \mathbb{E}_{z \sim P_z}\left[\left\| D_{cls}(G(z)) - G(z) \right\|_1\right]$ denotes the $L_1$ auto-encoder loss of the data distribution $z \sim P_z$. Essentially, (15) is a form of closed-loop feedback control, in which $k_t$ is adjusted at each step $t+1$. The equilibrium constraint manages the training process so that it yields $\mathcal{L}(x) > \mathcal{L}(G(z))$. Therefore, the global convergence measure of the equilibrium is denoted as (16) [27], [28]:
$$\mathcal{M}_{global} = \mathcal{L}(x) + \left| \gamma \mathcal{L}(x) - \mathcal{L}(G(z_G)) \right| \quad (16)$$
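One full BEGAN balance step, combining (13), (15), and (16), can be sketched as a small helper. The default `gamma` and `lambda_k` values are illustrative assumptions, not the paper's tuned settings:

```python
def began_step(loss_x: float, loss_gz: float, k_t: float,
               gamma: float = 0.7, lambda_k: float = 0.001):
    """One BEGAN balance update given the auto-encoder losses
    L(x) and L(G(z)) at step t (Eqs. (13), (15), (16))."""
    loss_d = loss_x - k_t * loss_gz                  # Eq. (13)
    # Eq. (15), with k clamped to the proportional-control range [0, 1]
    k_next = min(max(k_t + lambda_k * (gamma * loss_x - loss_gz), 0.0), 1.0)
    m_global = loss_x + abs(gamma * loss_x - loss_gz)  # Eq. (16)
    return loss_d, k_next, m_global

# With auto-encoder losses L(x) = 0.5 and L(G(z)) = 0.3 at k_t = 0:
loss_d, k_next, m_global = began_step(0.5, 0.3, 0.0)
print(loss_d, round(m_global, 4))  # 0.5 0.55
```

When $\mathcal{L}(G(z))$ overshoots $\gamma \mathcal{L}(x)$, the update lowers $k$, easing the pressure on the generator, which is the closed-loop behavior described in the text.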
2.4. Fréchet inception distance
Fréchet inception distance (FID) is utilized as a metric to assess the image quality of GANs. It approximates the distribution of the fake generated images $D_{cls}(G(z_G))$ and the distribution of the real images $D_{cls}(x)$ that were used to train the generator as multivariate Gaussians, as in (17) [29]:

$$\mathrm{FID} = \left\| \mu_r - \mu_g \right\|^2 + \mathrm{Tr}\left( \Sigma_r + \Sigma_g - 2\sqrt{\Sigma_r \Sigma_g} \right) \quad (17)$$
where $X_r \sim \mathcal{N}(\mu_r, \Sigma_r)$ and $X_g \sim \mathcal{N}(\mu_g, \Sigma_g)$ denote the means and covariances of the 2048-dimensional activations extracted from a pre-trained Inception-v3 model for the real and generated images, respectively. Our model transforms the real $\mathbb{E}_{x \sim P_{data}(x)}$ and fake $\mathbb{E}_{x \sim P_z}$ data distributions into 32×32, 64×64, 128×128, and 256×256 image dimensions.
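Equation (17) can be sketched with NumPy alone, using the identity that $\mathrm{Tr}\sqrt{\Sigma_r \Sigma_g}$ equals the sum of the square roots of the eigenvalues of $\Sigma_r \Sigma_g$. The array shapes are illustrative assumptions (a real FID run would use the 2048-dimensional Inception-v3 features):

```python
import numpy as np

def fid(act_real: np.ndarray, act_fake: np.ndarray) -> float:
    """Frechet inception distance between two activation sets (Eq. (17)).
    Rows are samples; columns are feature dimensions."""
    mu_r, mu_g = act_real.mean(0), act_fake.mean(0)
    cov_r = np.cov(act_real, rowvar=False)
    cov_g = np.cov(act_fake, rowvar=False)
    # Tr(sqrt(cov_r @ cov_g)) = sum of sqrt of the eigenvalues of cov_r @ cov_g;
    # clip tiny negative eigenvalues caused by numerical noise.
    eigvals = np.linalg.eigvals(cov_r @ cov_g)
    tr_sqrt = np.sum(np.sqrt(np.maximum(eigvals.real, 0.0)))
    return float(np.sum((mu_r - mu_g) ** 2)
                 + np.trace(cov_r + cov_g) - 2.0 * tr_sqrt)

rng = np.random.default_rng(0)
a = rng.normal(size=(256, 8))
b = rng.normal(loc=1.0, size=(256, 8))
print(abs(fid(a, a)) < 1e-6, fid(a, b) > fid(a, a))  # identical sets score ~0
```

Two identical activation sets score approximately zero, matching the property the results section relies on when comparing RR, FR, and FF distances.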
3. RESULTS AND DISCUSSION
The training process was performed on a cloud instance with an Intel(R) Xeon(R) CPU @ 2.30 GHz, high-memory VMs, 2 vCPUs, 25 GB RAM, and an NVIDIA P100/T4 GPU with 16 GB on peripheral component interconnect express (PCI Express). The training process consists of three schemas. The first schema trains conventional DCGAN, BEGAN, and DepthwiseGAN models on particular dataset distributions; this schema generates real-like fake images, i.e., synthetic CXR images. The second schema calculates the quality of the augmented images divided into several batches of random images. The third schema tests whether the augmented images can be classified using a particular classification method such as a CNN.
3.1. Data distributions and hyperparameters
This paper uses three kinds of datasets: the MNIST dataset, the CelebA dataset, and the COVID-19 CXR dataset. The GAN models were trained on 60K MNIST images, 24K CelebA images, and 5.4K CXR images to generate realistic image syntheses. The proposed approach trains both the generator and discriminator networks using Adam with an initial learning rate $\alpha = 0.0001$, $\beta_1 = 0.5$, $\beta_2 = 0.999$, a proportional gain $\lambda_k = 0.7$ [30], and image transformations varied from 32 to 256. The hyperparameters in Table 1 show that DepthwiseGANs can perform data augmentation to generate synthetic CXR images from randomized noise inputs.
Based on Table 1, the hyperparameters govern the performance of the GAN types, especially in image-to-image translation, covering DCGAN, DepthwiseGAN, BEGAN, and DepthwiseBEGAN. Figure 1 shows the generator and discriminator architectures as convolutional feature extractors, down-sampled to 4×4 and 8×8. DCGAN has 7.12 million trainable parameters, DepthwiseDCGAN has 0.76 million, BEGAN has 8.44 million, and DepthwiseBEGAN has 2.23 million. Additionally, this research compares the hyperparameter combinations shown in Table 1 to analyze model performance based on generator loss ($\mathcal{L}_G$), discriminator loss ($\mathcal{L}_D$), and execution time.
3.2. DepthwiseBEGAN training performance
Based on (11) to (15), the generator loss ($\mathcal{L}_G$) and discriminator loss ($\mathcal{L}_D$) were calculated with epochs=25, filters=64, image resolution 32×32, random noise=48, and down-sampled size 4×4. Table 2 then presents the generator loss ($\mathcal{L}_G$) and discriminator loss ($\mathcal{L}_D$) of DCGAN and DepthwiseGAN on the MNIST, CelebA, and CXR datasets.
Table 1. Hyperparameters
Description | Parameters
Model | DCGAN [30], DepthwiseGAN [23], BEGAN [29], DepthwiseBEGAN
Dataset | MNIST, CelebA [29], CXR
Down-sampled size | 8×8 [29], 4×4
Filters/batch size | 4, 32, 64
Noise inputs | 48
Epochs | 25
Image resolution | 32×32, 64×64, 128×128 [29], 256×256
Table 2. Loss of DCGAN and DepthwiseGAN using particular datasets
Model | Dataset | $\mathcal{L}_G$ | $\mathcal{L}_D$ | Exec. time (minutes)
DCGAN | MNIST | 5.2513 | 0.1130 | 33.1355
DCGAN | CelebA | 2.3994 | 0.7012 | 37.1142
DCGAN | CXR | 1.7703 | 0.8355 | 29.7863
DepthwiseGAN | MNIST | 3.7615 | 0.2940 | 24.6837
DepthwiseGAN | CelebA | 2.5109 | 0.6646 | 21.3387
DepthwiseGAN | CXR | 2.3558 | 0.6418 | 17.5000
Table 2 shows the generator loss ($\mathcal{L}_G$), discriminator loss ($\mathcal{L}_D$), and execution time of the DCGAN and DepthwiseGAN models. The generator and discriminator losses of the two models closely match, but the execution time of DepthwiseGAN is lower than that of DCGAN, which follows from the reduced number of trainable parameters. Table 3 shows the generator loss, discriminator loss, and execution time of DCGAN, DepthwiseGAN, BEGAN, and DepthwiseBEGAN with epochs=25, filters=(4 and 32), image resolution=(64×64, 128×128, and 256×256), and random noise=48.
Table 3. Loss of DCGAN, DepthwiseGAN, BEGAN, and DepthwiseBEGAN using CXR datasets
Model | Filters | Image resolution | $\mathcal{L}_G$ | $\mathcal{L}_D$ | Exec. time (minutes)
DCGAN | 32 | 64×64 | 1.4017 | 0.6744 | 49.7863
DepthwiseGAN | 32 | 64×64 | 1.5538 | 0.5188 | 20.2133
BEGAN | 32 | 64×64 | 0.0451 | 0.0885 | 76.5556
DepthwiseBEGAN | 32 | 64×64 | 0.0465 | 0.0811 | 22.8859
BEGAN | 32 | 128×128 | 0.0635 | 0.0789 | 132.3350
DepthwiseBEGAN | 32 | 128×128 | 0.0643 | 0.0799 | 48.7891
BEGAN | 4 | 256×256 | 0.0797 | 0.0989 | 186.4425
DepthwiseBEGAN | 4 | 256×256 | 0.0785 | 0.0965 | 117.4362
Based on Table 3, the DepthwiseBEGAN execution time is faster than the BEGAN execution time for the same filters and image resolutions, while the generator loss ($\mathcal{L}_G$) and discriminator loss ($\mathcal{L}_D$) closely match in the training stage. DepthwiseBEGAN is able to augment synthetic images at 256×256 pixels; however, the number of filters was reduced because of GPU limitations.
DepthwiseBEGAN training curves are shown in Figure 3. The generator loss ($\mathcal{L}_G$) and discriminator loss ($\mathcal{L}_D$) of DepthwiseBEGAN are shown in Figure 3(a); the proportional control ($k_{t+1}$) and global convergence measure ($\mathcal{M}_{global}$) are shown in Figure 3(b); and the domain classification loss of the discriminator ($\mathcal{L}_{cls}^{r}$), the domain classification loss of the generator ($\mathcal{L}_{cls}^{f}$), and the reconstruction loss ($\mathcal{L}_{rec}$) are shown in Figure 3(c).
3.3. DepthwiseBEGAN performance measurement
Based on Figure 2, DepthwiseBEGAN was run with filter size=4, image resolution=256×256, epochs=25, and random noise=48. The image quality of GANs can be assessed by measuring FID, which approximates the distribution of real-to-real (RR) images, the distribution of fake-to-real (FR) images, and the distribution of fake-to-fake (FF) images. The FID measurements were captured in Table 4 for every 12K iteration steps of the DepthwiseBEGAN training process.
Figure 3. DepthwiseBEGAN: (a) training losses $\mathcal{L}_G$ and $\mathcal{L}_D$, (b) $\mathcal{M}_{global}$ and $k_{t+1}$, and (c) domain losses $\mathcal{L}_{cls}^{r}$, $\mathcal{L}_{cls}^{f}$, and $\mathcal{L}_{rec}$
Table 4. FID score of BEGAN and DepthwiseBEGAN
Model | Batch | FID RR | FID FF | FID FR
BEGAN | 1 | 4.3866 | 4.6633 | 4.4098
DepthwiseBEGAN | 1 | - | - | 4.4674
BEGAN | 5 | 17.2621 | 12.9109 | 20.9808
DepthwiseBEGAN | 5 | - | - | 25.9938
BEGAN | 20 | 59.2829 | 39.3068 | 69.5281
DepthwiseBEGAN | 20 | - | - | 77.2037
BEGAN | 50 | 104.9618 | 68.2335 | 146.6602
DepthwiseBEGAN | 50 | - | - | 157.7203
Table 4 measures FID with batches containing 1, 5, 20, and 50 images. The GANs are evaluated by propagating the RR, FR, or FF distributions through the pre-trained Inception-v3 network; the FID of two identical image sets equals zero. Calculating with a random image batch of size 1, the FID value of RR equals 4.3866, the FID value of FR equals 4.4098, and the FID value of FF equals 4.6633. Synthetic images augmented by DepthwiseBEGAN are shown in Figure 4: Figure 4(a) shows synthetic CXR images with the normal label, and Figure 4(b) shows synthetic CXR images with the bacteria/virus label.

The CXR dataset was augmented with synthetic fake generated images $D_{cls}(G(z))$, distributed as follows: normal label, 12.49K training, 3.75K validation, and 7.13K test images; virus label, 12.98K training, 3.49K validation, and 6.49K test images; and bacteria label, 20.01K training, 4.99K validation, and 7.49K test images.
The augmented CXR images were trained using several CNN models such as ResNet18 [27], ResNet-50 [16], and VGG19 [28]. The real and fake generated data distributions with input resolution 128×128 were trained using the Adam optimizer with an initial learning rate $\alpha = 0.0001$, $\beta_1 = 0.5$, $\beta_2 = 0.999$ [30], and 50 iterations. To represent the performance of the particular CNN models, this paper evaluates accuracy, specificity, sensitivity, positive predictive value (PPV), and negative predictive value (NPV), formalized as [27]:
$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \quad (18)$$

$$\text{Sensitivity} = \frac{TP}{TP + FN} \quad (19)$$

$$\text{Specificity} = \frac{TN}{TN + FP} \quad (20)$$

$$\text{PPV} = \frac{TP}{TP + FP} \quad (21)$$

$$\text{NPV} = \frac{TN}{TN + FN} \quad (22)$$
Based on (19) to (22), sensitivity and specificity are defined for a binary classification domain. Sensitivity determines whether the 'virus' label meets the condition: TP divided by TP plus FN. Specificity determines whether the virus label correctly does not meet the condition: TN divided by TN plus FP. Positive predictive value (PPV) determines whether the 'virus' label meets the condition in the positive direction: TP divided by TP plus FP. Negative predictive value (NPV) determines whether the 'virus' label meets the condition in the negative direction: TN divided by TN plus FN. Figure 5 shows the CNN models utilized to classify the CXR images with the normal, bacteria, and virus labels. Figure 5(a) represents the CNN training accuracy using the real CXR dataset and Figure 5(b) represents the CNN training accuracy using the generated (fake) CXR dataset.
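The five metrics in (18)-(22) can be collected in one helper operating on raw confusion-matrix counts; the example counts below are hypothetical, chosen only to sum to a 100-image test split:

```python
def cxr_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Per-label metrics from confusion-matrix counts, Eqs. (18)-(22)."""
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),  # Eq. (18)
        "sensitivity": tp / (tp + fn),                   # Eq. (19)
        "specificity": tn / (tn + fp),                   # Eq. (20)
        "ppv":         tp / (tp + fp),                   # Eq. (21)
        "npv":         tn / (tn + fn),                   # Eq. (22)
    }

# Hypothetical 'virus' label counts on a 100-image test split.
m = cxr_metrics(tp=15, tn=66, fp=1, fn=18)
print(round(m["sensitivity"], 4))  # 0.4545
```

Applying this per label reproduces the column structure of Table 5.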
Figure 5 presents the CNN training accuracy using the real and fake CXR datasets. Based on (18) to (22), the confusion matrix was calculated; the confusion matrices of the CNN models were computed using 100 images from each source, as shown in Table 5.
Figure 4. Synthetic images augmented by DepthwiseBEGAN (a) normal label and (b) bacteria/virus label
Figure 5. CNN training accuracy using (a) real and (b) fake CXR images
Table 5. CNN confusion-matrix metrics
Data dist. | Model | Label | Sensitivity (%) | Specificity (%) | PPV (%) | NPV (%)
Real $D_{cls}(x)$ | GoogleNet | Normal | 64.29 | 75.00 | 50.00 | 84.38
Real $D_{cls}(x)$ | GoogleNet | Virus | 43.75 | 64.71 | 36.84 | 70.97
Real $D_{cls}(x)$ | GoogleNet | Bacteria | 40.00 | 83.33 | 64.54 | 67.57
Fake gen. $D_{cls}(G(z))$ | GoogleNet | Normal | 100.00 | 97.22 | 93.33 | 100.00
Fake gen. $D_{cls}(G(z))$ | GoogleNet | Virus | 93.75 | 94.12 | 88.24 | 96.97
Fake gen. $D_{cls}(G(z))$ | GoogleNet | Bacteria | 90.00 | 100.00 | 100.00 | 93.75
Real $D_{cls}(x)$ | ResNet-18 | Normal | 78.57 | 88.88 | 73.33 | 81.43
Real $D_{cls}(x)$ | ResNet-18 | Virus | 68.75 | 76.47 | 57.89 | 83.87
Real $D_{cls}(x)$ | ResNet-18 | Bacteria | 70.00 | 93.33 | 87.50 | 82.35
Fake gen. $D_{cls}(G(z))$ | ResNet-18 | Normal | 100.00 | 97.22 | 93.33 | 100.00
Fake gen. $D_{cls}(G(z))$ | ResNet-18 | Virus | 100.00 | 97.06 | 94.12 | 100.00
Fake gen. $D_{cls}(G(z))$ | ResNet-18 | Bacteria | 90.00 | 100.00 | 100.00 | 93.75
Real $D_{cls}(x)$ | ResNet-34 | Normal | 80.00 | 88.57 | 75.00 | 91.18
Real $D_{cls}(x)$ | ResNet-34 | Virus | 72.22 | 78.13 | 65.00 | 83.33
Real $D_{cls}(x)$ | ResNet-34 | Bacteria | 70.59 | 93.93 | 85.72 | 86.11
Fake gen. $D_{cls}(G(z))$ | ResNet-34 | Normal | 100.00 | 97.22 | 93.33 | 100.00
Fake gen. $D_{cls}(G(z))$ | ResNet-34 | Virus | 100.00 | 96.97 | 94.44 | 100.00
Fake gen. $D_{cls}(G(z))$ | ResNet-34 | Bacteria | 89.47 | 100.00 | 100.00 | 93.93
Real $D_{cls}(x)$ | ResNet-50 | Normal | 61.54 | 89.19 | 66.67 | 86.84
Real $D_{cls}(x)$ | ResNet-50 | Virus | 62.50 | 76.48 | 55.55 | 81.25
Real $D_{cls}(x)$ | ResNet-50 | Bacteria | 76.19 | 86.21 | 80.00 | 83.33
Fake gen. $D_{cls}(G(z))$ | ResNet-50 | Normal | 100.00 | 91.89 | 81.25 | 100.00
Fake gen. $D_{cls}(G(z))$ | ResNet-50 | Virus | 93.75 | 97.06 | 93.75 | 97.06
Fake gen. $D_{cls}(G(z))$ | ResNet-50 | Bacteria | 85.71 | 100.00 | 100.00 | 90.63
4. CONCLUSION
One of the most common procedures to detect COVID-19 is chest screening using X-ray technology; CXR imaging accurately identifies whether or not a patient is infected with the COVID-19 virus. A computational approach such as a CNN can classify CXR images into three labels: normal, bacteria, and COVID-19. Covid-Net trained on the largest number of CXR images (14K) and classifies CXR images with 93% accuracy, whereas several other methods were trained on small CXR collections of fewer than 10K images. An image synthesis method is therefore proposed to augment the COVID-19 CXR images with the goal of increasing the accuracy of classification methods.
At a resolution of 64×64, DCGAN was trained to augment CXR image synthesis with a generator loss of 1.4017, a discriminator loss of 0.6744, and an execution time of 49.7863 minutes, while DepthwiseGAN achieved a generator loss of 1.5538, a discriminator loss of 0.5188, and an execution time of 20.2133 minutes. DepthwiseGANs thus shorten the execution time while the discriminator still judges the generated images well. The quality of DepthwiseGANs is improved by using the encoder-decoder GAN model, namely BEGAN. BEGAN was trained with a generator loss of 0.0451, a discriminator loss of 0.0885, and an execution time of 76.5556 minutes, while DepthwiseBEGAN achieved a generator loss of 0.0465, a discriminator loss of 0.0811, and an execution time of 22.8859 minutes.
The FID of DepthwiseBEGAN was measured by comparing the number of image batches and the image sources. FID measurement using one random image gave a fake-to-fake (FF) score of 4.6633, a real-to-real (RR) score of 4.3866, and a fake-to-real (FR) score of 4.4674. Furthermore, the generated DepthwiseBEGAN synthetic images improve the accuracy of conventional CNN models by 22.59%.
ACKNOWLEDGEMENTS
The authors express gratitude to the Electrical Engineering Department and Research Center of Universitas Mercu Buana, Jakarta, Indonesia, under whose support this research was completed.
REFERENCES
[1] D. Singh, V. Kumar, Vaishali, and M. Kaur, “Classification of COVID-19 patients from chest CT images using multi-objective
differential evolution–based convolutional neural networks,” European Journal of Clinical Microbiology & Infectious Diseases,
vol. 39, no. 7, pp. 1379–1389, Jul. 2020, doi: 10.1007/s10096-020-03901-z.
[2] T. Mahmud, M. A. Rahman, and S. A. Fattah, “CovXNet: A multi-dilation convolutional neural network for automatic COVID-
19 and other pneumonia detection from chest X-ray images with transferable multi-receptive feature optimization,” Computers in
Biology and Medicine, vol. 122, Jul. 2020, doi: 10.1016/j.compbiomed.2020.103869.
[3] WHO, "WHO coronavirus (COVID-19) dashboard," WHO Health Emergency Dashboard. https://covid19.who.int/ (accessed Apr. 01, 2021).
[4] T. Ozturk, M. Talo, E. A. Yildirim, U. B. Baloglu, O. Yildirim, and U. Rajendra Acharya, “Automated detection of COVID-19
cases using deep neural networks with X-ray images,” Computers in Biology and Medicine, vol. 121, Jun. 2020, doi:
10.1016/j.compbiomed.2020.103792.
[5] S. Motamed, P. Rogalla, and F. Khalvati, “RANDGAN: Randomized generative adversarial network for detection of COVID-19
in chest X-ray,” Scientific Reports, vol. 11, no. 1, Dec. 2021, doi: 10.1038/s41598-021-87994-2.
[6] S. Hassantabar, M. Ahmadi, and A. Sharifi, “Diagnosis and detection of infected tissue of COVID-19 patients based on lung x-ray
image using convolutional neural network approaches,” Chaos, Solitons & Fractals, vol. 140, Nov. 2020, doi:
10.1016/j.chaos.2020.110170.
[7] A. A. Ardakani, A. R. Kanafi, U. R. Acharya, N. Khadem, and A. Mohammadi, “Application of deep learning technique to
manage COVID-19 in routine clinical practice using CT images: Results of 10 convolutional neural networks,” Computers in
Biology and Medicine, vol. 121, Jun. 2020, doi: 10.1016/j.compbiomed.2020.103795.
[8] J. Zhao, X. He, X. Yang, Y. Zhang, S. Zhang, and P. Xie, “COVID-CT-Dataset: A CT image dataset about COVID-19,” arXiv,
pp. 1–14, 2020.
[9] A. Narin, C. Kaya, and Z. Pamuk, “Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep
convolutional neural networks,” Pattern Analysis and Applications, vol. 24, no. 3, pp. 1207–1220, Aug. 2021, doi:
10.1007/s10044-021-00984-y.
[10] B. Sekeroglu and I. Ozsahin, “Detection of COVID-19 from chest X-ray images using convolutional neural networks,” SLAS
Technology, vol. 25, no. 6, pp. 553–565, 2020, doi: 10.1177/2472630320958376.
[11] Y. Zhong, “Using deep convolutional neural networks to diagnose COVID-19 from chest X-ray images,” arXiv: 2007.09695, Jul.
2020.
[12] A. Waheed, M. Goyal, D. Gupta, A. Khanna, F. Al-Turjman, and P. R. Pinheiro, “CovidGAN: data augmentation using auxiliary
classifier GAN for improved covid-19 detection,” IEEE Access, vol. 8, pp. 91916–91923, 2020, doi:
10.1109/ACCESS.2020.2994762.
[13] M. Rahimzadeh and A. Attar, “A modified deep convolutional neural network for detecting COVID-19 and pneumonia from
chest X-ray images based on the concatenation of Xception and ResNet50V2,” Informatics in Medicine Unlocked, vol. 19, 2020,
doi: 10.1016/j.imu.2020.100360.
[14] E. E.-D. Hemdan, M. A. Shouman, and M. E. Karar, “COVIDX-Net: A framework of deep learning classifiers to diagnose
COVID-19 in X-Ray images,” arXiv preprint arXiv:2003.11055., Mar. 2020.
[15] M. A. Zulkifley, S. R. Abdani, and N. H. Zulkifley, “COVID-19 screening using a lightweight convolutional neural network with
generative adversarial network data augmentation,” Symmetry, vol. 12, no. 9, Sep. 2020, doi: 10.3390/sym12091530.
[16] L. Wang, Z. Q. Lin, and A. Wong, “COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19
cases from chest X-ray images,” Scientific Reports, vol. 10, no. 1, Dec. 2020, doi: 10.1038/s41598-020-76550-z.
[17] A. I. Khan, J. L. Shah, and M. M. Bhat, “CoroNet: A deep neural network for detection and diagnosis of COVID-19 from chest
X-ray images,” Computer Methods and Programs in Biomedicine, vol. 196, Nov. 2020, doi: 10.1016/j.cmpb.2020.105581.
[18] N. E. M. Khalifa, M. H. N. Taha, A. E. Hassanien, and S. Elghamrawy, “Detection of coronavirus (COVID-19) associated
pneumonia based on generative adversarial networks and a fine-tuned deep transfer learning model using chest X-ray dataset,”
arXiv: 2004.01184, Apr. 2020.
[19] X. Xu et al., “A deep learning system to screen novel coronavirus disease 2019 pneumonia,” Engineering, vol. 6, no. 10,
pp. 1122–1129, Oct. 2020, doi: 10.1016/j.eng.2020.04.010.
[20] E. Ihsanto, K. Ramli, D. Sudiana, and T. S. Gunawan, “Fast and accurate algorithm for ECG authentication using residual
depthwise separable convolutional neural networks,” Applied Sciences, vol. 10, no. 9, May 2020, doi: 10.3390/app10093304.
[21] N. K. Chowdhury, M. M. Rahman, and M. A. Kabir, “PDCOVIDNet: a parallel-dilated convolutional neural network architecture
for detecting COVID-19 from chest X-ray images,” Health Information Science and Systems, vol. 8, no. 1, Dec. 2020, doi:
10.1007/s13755-020-00119-3.
[22] K. KC, Z. Yin, M. Wu, and Z. Wu, “Depthwise separable convolution architectures for plant disease classification,” Computers
and Electronics in Agriculture, vol. 165, Oct. 2019, doi: 10.1016/j.compag.2019.104948.
[23] M. Ngxande, J.-R. Tapamo, and M. Burke, “DepthwiseGANs: Fast training generative adversarial networks for realistic image
synthesis,” in 2019 Southern African Universities Power Engineering Conference/Robotics and Mechatronics/Pattern
Recognition Association of South Africa (SAUPEC/RobMech/PRASA), Jan. 2019, pp. 111–116, doi:
10.1109/RoboMech.2019.8704766.
[24] F. Munawar, S. Azmat, T. Iqbal, C. Gronlund, and H. Ali, “Segmentation of lungs in chest X-ray image using generative
adversarial networks,” IEEE Access, vol. 8, pp. 153535–153545, 2020, doi: 10.1109/ACCESS.2020.3017915.
[25] Y. Jiang, H. Chen, M. Loew, and H. Ko, “COVID-19 CT image synthesis with a conditional generative adversarial network,”
IEEE Journal of Biomedical and Health Informatics, vol. 25, no. 2, pp. 441–452, Feb. 2021, doi: 10.1109/JBHI.2020.3042523.
[26] Y. Li, N. Xiao, and W. Ouyang, “Improved boundary equilibrium generative adversarial networks,” IEEE Access, vol. 6,
pp. 11342–11348, 2018, doi: 10.1109/ACCESS.2018.2804278.
[27] A. Abbas, M. M. Abdelsamea, and M. M. Gaber, “Classification of COVID-19 in chest X-ray images using DeTraC deep
convolutional neural network,” Applied Intelligence, vol. 51, no. 2, pp. 854–864, Feb. 2021, doi: 10.1007/s10489-020-01829-7.
[28] I. D. Apostolopoulos and T. A. Mpesiana, “Covid-19: automatic detection from X-ray images utilizing transfer learning with
convolutional neural networks,” Physical and Engineering Sciences in Medicine, vol. 43, no. 2, pp. 635–640, Jun. 2020, doi:
10.1007/s13246-020-00865-4.
[29] B. Huang, W. Chen, X. Wu, C.-L. Lin, and P. N. Suganthan, “High-quality face image generated with conditional boundary
equilibrium generative adversarial networks,” Pattern Recognition Letters, vol. 111, pp. 72–79, Aug. 2018, doi:
10.1016/j.patrec.2018.04.028.
[30] A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial
networks,” 4th International Conference on Learning Representations, ICLR 2016 - Conference Track Proceedings, pp. 1–16,
2016.
BIOGRAPHIES OF AUTHORS
Zendi Iklima received the M.Sc. degree in Software Engineering from Beijing Institute of Technology in 2018. His research interests include robotics, deep learning, and cloud computing. He has experience as the Chief Technology Officer of Diaspora Connect Indonesia, one of the controversial start-up companies of 2018. The idea of connecting Indonesia is to help the Indonesian government find out about citizens who are studying or living abroad, integrated with artificial intelligence. Based on this idea, he achieved 1st place in the ALE Hackathon Competition in 2018, held by Alcatel-Lucent (Jakarta). He can be contacted at email: zendi.iklima@mercubuana.ac.id.
Trie Maya Kadarina received her bachelor’s degree in Electrical Engineering
from Institut Teknologi Nasional in 2001. She received her master’s degree in Biomedical
Engineering from the Department of Electrical Engineering, Institut Teknologi Bandung, in
2005. Currently, she is a lecturer in the Department of Electrical Engineering at Universitas
Mercu Buana. Her research interests include biomedical instrumentation, electronics and
control systems, machine learning, and internet of things. She can be contacted at email:
trie.maya@mercubuana.ac.id.
Eko Ihsanto received the undergraduate and Doctorate degrees from the Electrical Engineering Department, University of Indonesia. His research interests include embedded design, signal processing, and deep learning. He can be contacted at email: eko.ihsanto@mercubuana.ac.id.