Neovascularization is the growth of new blood vessels in the retina alongside the existing arteries and veins. It can appear on the optic disc or anywhere on the retinal surface. A retina is categorized as Proliferative Diabetic Retinopathy (PDR) if it exhibits neovascularization; PDR is the severe stage of Diabetic Retinopathy (DR). An image classification system for distinguishing normal from neovascularization patches is presented here. The classification uses a Convolutional Neural Network (CNN) model together with classification methods such as Support Vector Machine, k-Nearest Neighbor, Naïve Bayes, Discriminant Analysis, and Decision Tree. To date, there has been no patch dataset of neovascularization for the classification process. The data consist of normal patches, New Vessels on the Disc (NVD), and New Vessels Elsewhere (NVE). Images were taken from two databases, MESSIDOR and the Retina Image Bank. The patches were cropped manually from image regions marked by experts as neovascularization, and the dataset consists of 100 patches. Tests using three scenarios obtained classification accuracies of 90%-100% with cross-validation loss of 0%-26.67%. The tests were performed on a single Graphics Processing Unit (GPU).
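The abstract above pairs CNN-extracted features with classical classifiers. As an illustrative sketch only (not the authors' code), a k-Nearest Neighbor classifier over fixed-length patch feature vectors can be written in a few lines of numpy; the feature values below are invented toy data:

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Classify one feature vector by majority vote of its k nearest training vectors."""
    d = np.linalg.norm(train_X - query, axis=1)   # Euclidean distances to all patches
    nearest = np.argsort(d)[:k]                   # indices of the k closest patches
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]              # majority label

# Toy 2-D "CNN features": normal patches (0) near the origin, neovascularization (1) far away.
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.85, 0.85])))  # prints 1: close to the class-1 cluster
```

In practice the feature vectors would come from a trained CNN's penultimate layer rather than being hand-written constants.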
A Novel Approach for Diabetic Retinopathy Classification - IJERA Editor
Sustained diabetes mellitus may lead to several complications. One of these is diabetic retinopathy, a complication affecting the retina that interferes with the patient's sight. Patients with diabetic retinopathy are examined directly through retinal images captured with a fundus camera. Diabetic retinopathy is classified into four classes based on severity: normal, non-proliferative diabetic retinopathy (NPDR), proliferative diabetic retinopathy (PDR), and macular edema (ME). The aim of this research is to develop a method for classifying the severity of diabetic retinopathy from a patient's retinal images. Seven texture features were extracted from the retinal images using the three-dimensional gray level co-occurrence matrix method (3D-GLCM); these features include maximum probability, correlation, contrast, energy, homogeneity, and entropy. The features were then used to train a Levenberg-Marquardt Backpropagation Neural Network (LMBP). The study used 600 patient retinal images: 450 for training and 150 for testing. Based on the test results, the method classifies the severity of diabetic retinopathy with a sensitivity of 97.37%, a specificity of 75%, and an accuracy of 91.67%.
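The GLCM texture features listed above (maximum probability, correlation, contrast, energy, homogeneity, entropy) can be sketched in plain numpy for the 2-D case; this is an illustrative reimplementation for a single horizontal offset, not the paper's 3D-GLCM code:

```python
import numpy as np

def glcm(img, levels):
    """Gray-level co-occurrence matrix for the horizontal neighbour offset (0, 1)."""
    P = np.zeros((levels, levels))
    for i in range(img.shape[0]):
        for j in range(img.shape[1] - 1):
            P[img[i, j], img[i, j + 1]] += 1
    return P / P.sum()  # normalize counts to joint probabilities

def glcm_features(P):
    """The six texture features named in the abstract, from a normalized GLCM."""
    i, j = np.indices(P.shape)
    feats = {
        "max_probability": P.max(),
        "contrast": np.sum(P * (i - j) ** 2),
        "energy": np.sum(P ** 2),
        "homogeneity": np.sum(P / (1.0 + (i - j) ** 2)),
        "entropy": -np.sum(P[P > 0] * np.log2(P[P > 0])),
    }
    mu_i, mu_j = np.sum(i * P), np.sum(j * P)
    sd_i = np.sqrt(np.sum(P * (i - mu_i) ** 2))
    sd_j = np.sqrt(np.sum(P * (j - mu_j) ** 2))
    feats["correlation"] = np.sum(P * (i - mu_i) * (j - mu_j)) / (sd_i * sd_j)
    return feats

# Tiny 3-level toy image; a real pipeline would quantize the fundus image first.
img = np.array([[0, 1, 1], [0, 0, 1], [2, 2, 1]])
f = glcm_features(glcm(img, levels=3))
```

The 3-D variant used in the paper extends the co-occurrence counting to volumetric or multi-channel neighbourhoods, but the feature formulas are the same.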
Early detection of glaucoma through retinal nerve fiber layer analysis using ... - eSAT Publishing House
This document summarizes a research paper that proposes a new method for classifying healthy retinal nerve fiber layer (RNFL), medium RNFL loss, and severe RNFL loss using fractal dimension and texture feature analysis of color fundus images. The method extracts texture and fractal dimension features from RNFL regions in fundus images. It finds that the features are correlated, with the correlation highest for healthy RNFL and lowest for severe RNFL loss. The features can potentially help detect glaucoma at early stages through classification of RNFL health status.
Classification and Segmentation of Glaucomatous Image Using Probabilistic Neu... - ijsrd.com
Gradual visual field loss and a characteristic type of damage to the retinal nerve fiber layer are associated with the progression of glaucoma. Texture features within images are actively pursued for accurate and efficient glaucoma classification, and the energy distribution over wavelet sub-bands is used to find these important texture features. In this paper, we investigate the discriminatory potential of wavelet features obtained from the Daubechies (db3), symlets (sym3), and biorthogonal (bio3.3, bio3.5, and bio3.7) wavelet filters. We propose a novel technique to extract energy signatures obtained using the 2-D discrete wavelet transform, and subject these signatures to different feature ranking and feature selection strategies. This project uses a Probabilistic Neural Network (PNN), Fuzzy C-means (FCM) and K-means for the detection of glaucoma; both the fuzzy c-means and k-means clustering algorithms are applied. Fuzzy c-means yields faster and more reliable clustering than k-means.
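The energy-signature idea above, computing the energy of each wavelet sub-band, can be illustrated with a one-level 2-D Haar transform (the simplest wavelet, shown here for clarity; the paper uses db3, sym3 and biorthogonal filters):

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D orthonormal Haar transform: returns LL, LH, HL, HH sub-bands."""
    # transform along rows: averages (lo) and differences (hi)
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    # transform along columns
    LL = (lo[0::2] + lo[1::2]) / np.sqrt(2)
    LH = (lo[0::2] - lo[1::2]) / np.sqrt(2)
    HL = (hi[0::2] + hi[1::2]) / np.sqrt(2)
    HH = (hi[0::2] - hi[1::2]) / np.sqrt(2)
    return LL, LH, HL, HH

def subband_energies(x):
    """Energy signature: mean squared coefficient of each sub-band."""
    return [float(np.mean(b ** 2)) for b in haar_dwt2(x)]
```

Because the Haar transform is orthonormal, the total energy of the sub-bands equals that of the image; the classifier works from how that energy is distributed across the four bands.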
Automated Diagnosis of Glaucoma using Haralick Texture Features - IOSR Journals
Abstract: Glaucoma is the second leading cause of blindness worldwide. It is a disease in which fluid pressure in the eye increases continuously, damaging the optic nerve and causing vision loss. Computational decision support systems for the early detection of glaucoma can help prevent this complication. The retinal optic nerve fibre layer can be assessed using optical coherence tomography, scanning laser polarimetry, and Heidelberg retina tomography. In this paper, we present a novel method for glaucoma detection using Haralick texture features from digital fundus images, with k-Nearest Neighbors (KNN) classifiers performing the supervised classification. The system has a database part and a classification part: in the database part the image is loaded, and the Gray Level Co-occurrence Matrix (GLCM) and thirteen Haralick features are combined to extract the image features. The method performs better than the other classifiers and correctly identifies glaucoma images with an accuracy of more than 98%. The impact of the training and testing split is also studied to improve results. Our proposed features are clinically significant and can be used to detect glaucoma accurately.
Keywords: Glaucoma, Haralick texture features, KNN classifiers, feature extraction
RECOGNITION OF CDNA MICROARRAY IMAGE USING FEEDFORWARD ARTIFICIAL NEURAL NETWORK - ijaia
The complementary DNA (cDNA) sequence is regarded as a powerful biometric for personal identification, and microarray image processing is used for concurrent gene identification. In this paper, we present a new method for cDNA recognition based on an artificial neural network (ANN). We segment the locations of the spots in a cDNA microarray; precise localization and segmentation of a spot are essential to obtain a more exact intensity measurement, leading to a more accurate gene expression measurement. The segmented cDNA microarray image is resized and used as input to the proposed artificial neural network, which is trained for matching and recognition. Recognition results are given for galleries of cDNA sequences. The numerical results show that the proposed matching technique is effective for cDNA sequences, and experiments on different databases confirm its matching performance.
Extraction of Spots in DNA Microarrays Using Genetic Algorithms - ipij
DNA microarray technology is an eminent tool for genomic studies. Accurate extraction of spots is a crucial issue, since biological interpretations depend on it. Image analysis normally starts with the formation of a grid, which is a laborious process requiring human intervention. This paper presents a method for optimal search of the spots using a genetic algorithm, without grid formation. The information of every spot is extracted by obtaining a pixel belonging to that spot: the method selects pixels of high intensity in the image, thereby recognizing the spot, and the implemented objective function helps identify the exact pixel. The algorithm is applied to sub-images of different sizes and features of the spots are obtained. It is found that there is a trade-off between accuracy in the number of spots identified and the time required to process the image. The segmentation process is independent of the shape, size and location of the spots, and background estimation is a one-step process, since both the foreground and the complete spot are recovered together. The proposed algorithm is implemented in MATLAB 7 and applied to cDNA microarray images. This approach provides reliable results for identifying even low-intensity spots and eliminating spurious spots.
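A mutation-only toy version of the genetic pixel search described above might look like the following; the population size, mutation range and smooth toy objective are assumptions for illustration, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def ga_spot_pixel(img, pop=20, gens=40):
    """Mutation-only genetic search for a high-intensity pixel.
    Chromosomes are (row, col) coordinates; fitness is the pixel intensity.
    This is a simplified sketch of the idea, not the paper's algorithm."""
    h, w = img.shape
    P = np.column_stack([rng.integers(0, h, pop), rng.integers(0, w, pop)])
    for _ in range(gens):
        fit = img[P[:, 0], P[:, 1]]                        # objective: intensity
        parents = P[np.argsort(fit)[::-1][: pop // 2]]     # keep the fitter half
        kids = parents.copy()                              # mutate coordinates +/-2
        kids[:, 0] = np.clip(kids[:, 0] + rng.integers(-2, 3, len(kids)), 0, h - 1)
        kids[:, 1] = np.clip(kids[:, 1] + rng.integers(-2, 3, len(kids)), 0, w - 1)
        P = np.vstack([parents, kids])                     # elitism: parents survive
    fit = img[P[:, 0], P[:, 1]]
    return tuple(P[np.argmax(fit)])

# Smooth toy "spot": intensity peaks at (7, 21).
yy, xx = np.indices((30, 30))
img = -((yy - 7) ** 2 + (xx - 21) ** 2).astype(float)
r, c = ga_spot_pixel(img)
```

The paper's version adds a proper objective function and crossover; the sketch keeps only selection, mutation and elitism.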
A Wavelet Based Automatic Segmentation of Brain Tumor in CT Images Using Opti... - CSCJournals
This document summarizes a research paper that proposes a new method for automatically segmenting brain tumors in CT images. The method uses a combination of wavelet-based texture features extracted from discrete wavelet transformed sub-bands. These features are optimized using genetic algorithms and used to train probabilistic neural network and feedforward neural network classifiers to segment tumors. The proposed method is evaluated on brain CT images and shown to outperform existing segmentation methods.
Identification of Focal Cortical Dysplasia (FCD) can be difficult due to the subtle MRI changes. Though sequences like FLAIR (fluid attenuated inversion recovery) can detect a large majority of these lesions, there are smaller lesions without signal changes that can easily go unnoticed by the naked eye. The aim of this study is to improve the visibility of Focal Cortical Dysplasia lesions in T1-weighted brain MRI images. In the proposed method, a complex diffusion based approach is used to identify the FCD-affected areas.
Blood Vessel Segmentation in Fundus Images - IRJET Journal
This document summarizes a research paper on blood vessel segmentation in fundus images using random forest classification. The proposed method uses six steps: 1) spatial calibration of fundus images, 2) image preprocessing including illumination equalization, denoising and contrast equalization, 3) optic disc removal, 4) candidate lesion extraction, 5) dynamic shape feature extraction, and 6) random forest classification of lesions. The method was tested on public Diaretdb1 fundus image database and achieved accurate segmentation and detection of microaneurysms and hemorrhages for early diagnosis of diabetic retinopathy.
Design issues and challenges of reliable and secure transmission of medical i... - eSAT Publishing House
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of engineering and technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of engineering and technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of engineering and technology.
Computer Aided Diagnosis using Margin and Posterior Acoustic Features for Brea... - TELKOMNIKA JOURNAL
Breast cancer is the most commonly diagnosed cancer among females worldwide. Computer aided diagnosis (CAD) was developed to assist radiologists in detecting and evaluating nodules, improving diagnostic accuracy, avoiding unnecessary biopsies, reducing anxiety and controlling costs. This research proposes a CAD method for breast ultrasound images based on margin and posterior acoustic features. It consists of preprocessing, segmentation using active contour without edges (ACWE) and morphological operations, feature extraction and classification. Texture and geometry analysis were used to determine the characteristics of the posterior acoustic and margin of nodules. Support vector machines (SVM) provided better performance than a multilayer perceptron (MLP). The proposed method achieved an accuracy of 91.35%, sensitivity of 92.00%, specificity of 89.66%, PPV of 95.83%, NPV of 81.26% and Kappa of 0.7915. These results indicate that the developed CAD has the potential to be used for the diagnosis of breast cancer from ultrasound images.
Effective Segmentation of Sclera, Iris and Pupil in Noisy Eye Images - TELKOMNIKA JOURNAL
In today's security-sensitive environment, iris recognition has received the most attention among the various biometric technologies for personal authentication. One of the key steps in an iris recognition system is accurate segmentation of the iris from its surroundings, including the pupil and sclera, in a captured eye image. In the proposed method, the input image is first preprocessed using bilateral filtering. After preprocessing, contour-based features such as brightness, color and texture are extracted, and entropy is measured from these features to effectively distinguish the data in the images. Finally, a convolutional neural network (CNN) is used for effective segmentation of the sclera, iris and pupil regions based on the entropy measure. The results are analyzed to demonstrate the better performance of the proposed segmentation method over existing methods.
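The entropy measure mentioned above can be illustrated as the Shannon entropy of an image's grey-level histogram (a common definition, shown for intuition; the paper's contour-based variant is not reproduced here):

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of an image's grey-level histogram.
    Low entropy = uniform region; high entropy = varied, textured region."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins before taking the log
    return float(-np.sum(p * np.log2(p)))

flat = np.full((8, 8), 100)            # uniform patch: entropy 0
textured = np.arange(64).reshape(8, 8) # 64 distinct levels: log2(64) = 6 bits
```

A segmenter can use such a measure per region to separate smooth sclera from the more textured iris.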
Multistage Classification of Alzheimer's Disease - IJLT EMAS
Alzheimer's disease is a type of dementia that destroys memory and other mental functions. During the progression of the disease, proteins called plaques and tangles are deposited in the hippocampus, which is located in the temporal lobe of the brain. The disease is not a normal part of aging and worsens over time. Medical imaging techniques like Magnetic Resonance Imaging (MRI), Computed Tomography (CT) and Positron Emission Tomography (PET) play a significant role in the diagnosis of the disease. In this paper, we propose a method for classifying MRI into Normal Control (NC), Mild Cognitive Impairment (MCI) and Alzheimer's Disease (AD). The methodology comprises textural feature extraction, feature reduction and classification of the images into the various stages. Classification has been performed with three classifiers, namely Support Vector Machine (SVM), Artificial Neural Network (ANN) and k-Nearest Neighbours (k-NN).
Transfer Learning with Multiple Pre-trained Networks for Fundus Classification - TELKOMNIKA JOURNAL
Transfer learning (TL) is a technique for reusing and modifying a pre-trained network. It reuses the feature extraction layers of the pre-trained network, so the target domain obtains feature knowledge from the source domain, while the classification layers are modified so the network can perform a new task. In this article, the target domain is fundus image classification into normal and neovascularization classes. The data consist of 100 patches, split 70:30 into training and validation sets, with the split chosen randomly. The steps of TL are: load a pre-trained network, replace the final layers, train the network, and assess network accuracy. First, the pre-trained network supplies the layer configuration of the convolutional neural network architecture; the pre-trained networks used are AlexNet, VGG16, VGG19, ResNet50, ResNet101, GoogLeNet, Inception-V3, InceptionResNetV2, and SqueezeNet. Second, replacing the final layers means replacing the last three layers (the fully connected layer, softmax, and output layer) with a fully connected layer sized for the number of target classes, followed by a softmax and output layer matching the target domain. Third, the networks were trained to produce optimal accuracy, using gradient descent optimization. Fourth, network accuracy was assessed. The experimental results show a testing accuracy between 80% and 100%.
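The four TL steps above can be mimicked in a minimal numpy sketch: a frozen random projection stands in for the pre-trained convolutional layers, and only a new classification head is trained by gradient descent. This illustrates the idea only; it is not the authors' code, and the toy data are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 1: "pre-trained" feature extractor -- frozen weights (stand-in for CNN layers).
W_frozen = rng.normal(size=(2, 8))
def extract(x):
    return np.maximum(x @ W_frozen, 0)   # ReLU features, never updated

# Toy patches: class 0 clustered near (-1,-1), class 1 near (+1,+1).
X = np.vstack([rng.normal(-1, 0.3, (50, 2)), rng.normal(1, 0.3, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

# Step 2: new classification head sized for the target task (binary -> one logit).
w, b = np.zeros(8), 0.0

# Step 3: train only the head with gradient descent on the logistic loss.
F = extract(X)
for _ in range(500):
    p = 1 / (1 + np.exp(-(F @ w + b)))   # sigmoid prediction
    g = p - y                            # gradient of logistic loss wrt the logits
    w -= 0.1 * F.T @ g / len(y)
    b -= 0.1 * g.mean()

# Step 4: assess accuracy (on the training patches, for brevity).
acc = np.mean((1 / (1 + np.exp(-(extract(X) @ w + b))) > 0.5) == y)
```

In a real framework the frozen part would be AlexNet/VGG/ResNet convolutional layers and the head a new fully connected + softmax layer, but the freeze-then-retrain-the-head structure is the same.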
Fundus Image Classification Using Two Dimensional Linear Discriminant Analysi... - INFOGAIN PUBLICATION
This study constructs a classification system for diabetic retinopathy fundus images. The system consists of two phases, training and testing, and each phase comprises preprocessing, segmentation, feature extraction and classification. The tested images come from the MESSIDOR dataset, 100 images in total, divided into four classes of 25 images each: normal, mild, moderate and severe diabetic retinopathy. Preprocessing uses the grayscale green channel, the Haar wavelet, a Gaussian filter and Contrast Limited Adaptive Histogram Equalization. Segmentation uses masking, subtracting the masking image from the original image in order to separate the object from the background. Feature extraction uses Two Dimensional Linear Discriminant Analysis (2DLDA), and classification uses a Support Vector Machine (SVM). Test results over several scenarios show that the highest accuracy of the system is up to 90%.
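The masking step described above, subtracting an estimated background ("masking image") from the original so that only the object remains, can be illustrated with a tiny numpy example (toy values, not the paper's MESSIDOR pipeline):

```python
import numpy as np

# Hypothetical single-channel crop: one bright object pixel on a mid-grey background.
img = np.array([[50, 50, 50],
                [50, 200, 50],
                [50, 50, 50]], dtype=np.int16)

# Masking image: an estimate of the background (here, the median grey level everywhere).
mask = np.full_like(img, np.median(img).astype(np.int16))

# Subtracting the mask leaves only the object; background pixels drop to zero.
diff = np.clip(img - mask, 0, 255)
```

In the paper the masking image is produced by the preprocessing chain rather than a simple median, but the subtraction-to-isolate-the-object idea is the same.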
This document discusses feature extraction techniques for Alzheimer's disease diagnosis using PET scans. It examines voxel intensity, scale space representation through Gaussian pyramids, and a local variance operator to capture local contrast. Voxel intensity provides direct measurement of FDG uptake but neighboring voxels are highly correlated. The scale space representation reduces redundancy through smoothing and subsampling. The local variance operator measures variance within a 3D neighborhood to estimate local contrast, allowing features at different scales by varying neighborhood radius. These feature extraction methods aim to effectively represent relevant information from PET scans to improve computer-aided Alzheimer's diagnosis.
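The local variance operator described above has a direct (if slow) numpy implementation; the neighbourhood radius and toy volume below are illustrative:

```python
import numpy as np

def local_variance(vol, radius=1):
    """Variance of voxel intensities inside a cubic (2r+1)^3 neighbourhood,
    computed at every interior voxel of a 3-D volume. Varying `radius`
    yields local-contrast features at different scales."""
    r = radius
    out = np.zeros_like(vol, dtype=float)
    for z in range(r, vol.shape[0] - r):
        for y in range(r, vol.shape[1] - r):
            for x in range(r, vol.shape[2] - r):
                nb = vol[z - r:z + r + 1, y - r:y + r + 1, x - r:x + r + 1]
                out[z, y, x] = nb.var()   # local contrast estimate
    return out
```

A production implementation would use separable box filters (variance = E[x^2] - E[x]^2) instead of the triple loop, but the output is identical.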
Brain Tumor Detection using MRI Images - Yogesh, IJTSRD
Brain tumor segmentation is a very important task in medical image processing. Early diagnosis of brain tumors plays a crucial role in improving treatment possibilities and increases the survival rate of patients. MRI images have proven very useful in recent years for the study of tumor detection and segmentation. One of the most crucial tasks in any brain tumor detection system is the separation of abnormal tissues from normal brain tissues; with MRI we can detect the brain tumor, since unusual growth of tissues and blocks of blood are visible in MRI imaging. Brain tumor detection using MRI images is nevertheless a challenging task due to the brain's complex structure. In this paper, we propose an image segmentation method to detect tumors from MRI images using a GUI in MATLAB. The process of distinguishing brain tumors through MRI images is often sorted into four sections of image processing: preprocessing, feature extraction, image segmentation and image classification. We have used various algorithms to reach the best results, which can help detect brain tumors at an early stage. Deepa Dangwal | Aditya Nautiyal | Dakshita Adhikari | Kapil Joshi, "Brain Tumor Detection using MRI Images", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Special Issue | International Conference on Advances in Engineering, Science and Technology - 2021, May 2021. URL: https://www.ijtsrd.com/papers/ijtsrd42456.pdf Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/42456/brain-tumor-detection-using-mri-images/deepa-dangwal
HIGH RESOLUTION MRI BRAIN IMAGE SEGMENTATION TECHNIQUE USING HOLDER EXPONENT - ijsc
Image segmentation is a technique to locate certain objects or boundaries within an image, and it plays a crucial role in many medical imaging applications. Many algorithms and techniques have been developed to solve image segmentation problems. Spectral pattern alone is not sufficient for segmenting high resolution images, due to the variability of spectral and structural information, so spatial pattern or texture techniques are used. The Holder exponent thus provides an efficient technique for segmenting high resolution medical images. The proposed method is implemented in MATLAB and verified using various kinds of high resolution medical images. The experimental results show that the proposed image segmentation system is more efficient than existing segmentation systems.
Extraction of Circle of Willis from 2D Magnetic Resonance Angiograms - IDES Editor
A magnetic resonance angiogram is a way to study cerebrovascular structures; it provides information about blood flow in a non-invasive fashion. Magnetic resonance angiograms are examined mainly for the detection of vascular pathologies, neurosurgery planning, and vascular landmark detection. In certain cases it is complicated for doctors to assess the cerebral vessels, or Circle of Willis, from two-dimensional (2D) brain magnetic resonance angiograms. In this paper an attempt has been made to extract the Circle of Willis from 2D magnetic resonance angiograms, so as to overcome such difficulties. The proposed method preprocesses the magnetic resonance angiograms and subsequently extracts the Circle of Willis by color-based segmentation using the K-means clustering algorithm. As the developed method successfully extracts the vasculature from brain magnetic resonance angiograms, it will help doctors with diagnosis and serve as a step in the prevention of stroke. The algorithms were developed on the MATLAB 7.6.0 (R2008a) programming platform.
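Colour-based segmentation with K-means, as used above, reduces to clustering pixel colour vectors; a minimal numpy k-means (illustrative, not the authors' MATLAB code) looks like:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means on pixel feature vectors (e.g. RGB colour triples)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        # assignment step: label each pixel with its nearest centre
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # update step: move each centre to the mean of its assigned pixels
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(0)
    return labels, centers

# Toy "colour" data: dark pixels near (0,0,0), bright vessel pixels near (10,10,10).
X = np.vstack([np.zeros((10, 3)), np.full((10, 3), 10.0)])
labels, centers = kmeans(X, 2)
```

For an angiogram, X would hold one colour vector per pixel, and the cluster whose centre matches the vessel colour gives the extracted vasculature.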
SEGMENTATION OF MULTIPLE SCLEROSIS LESION IN BRAIN MR IMAGES USING FUZZY C-MEANS - ijaia
Magnetic resonance images (MRI) play an important role in supporting and substituting clinical information in the diagnosis of multiple sclerosis (MS) by revealing lesions in brain MR images. In this paper, an algorithm for MS lesion segmentation from brain MR images is presented. We revisit modifications of the fuzzy c-means algorithm and of Canny edge detection: by reforming the fuzzy c-means clustering algorithm and applying the Canny contraction principle, a relationship between MS lesions and edge detection is established. For the special case of FCM, we derive a sufficient condition and clustering parameters, allowing identification of them as (local) minima of the objective function.
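The standard fuzzy c-means iteration that the paper modifies can be sketched in numpy as follows (the classic algorithm only, without the authors' Canny-based modifications):

```python
import numpy as np

def fcm(X, c, m=2.0, iters=100, seed=0):
    """Classic fuzzy c-means: returns membership matrix U (n x c) and cluster centres.
    m > 1 is the fuzzifier; each sample's memberships sum to 1."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(0)[:, None]            # fuzzy weighted means
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        # membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    return U, centers

# Toy 1-D intensities: a dark tissue cluster and a bright lesion-like cluster.
X = np.array([[0.0], [0.1], [0.2], [10.0], [10.1], [10.2]])
U, centers = fcm(X, 2)
```

Unlike k-means, each voxel keeps a graded membership in every cluster, which is why FCM handles the diffuse boundaries of MS lesions more gracefully.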
Texture feature based analysis of segmenting soft tissues from brain ct im... - Alexander Decker
This document describes a study that used texture feature analysis and a bidirectional associative memory (BAM) type artificial neural network to segment normal and tumor tissues from brain CT images. Gray level co-occurrence matrix features were extracted from 80 CT images of normal, benign and malignant tumors. The most discriminative features were selected using t-tests and used to train the BAM network classifier to segment tissues in the images. The proposed method provided accurate segmentation of normal and tumor regions, especially small tumors, in an efficient and fast manner with less computational time compared to other methods.
A Novel Equalization Scheme for the Selective Enhancement of Optical Disc and... - TELKOMNIKA JOURNAL
The ratio of the diameters of Optic Cup (OC) and Optic Disc (OD), termed as ‘Cup to Disc Ratio’
(CDR), derived from the fundus imagery is a popular biomarker used for the diagnosis of glaucoma.
Demarcation of OC and OD either manually or through automated image processing algorithms is error
prone because of poor grey level contrast and their vague boundaries. A dedicated equalization which
simultaneously compresses the dynamic range of the background and stretches the range of ODis
proposed in this paper. Unlike the conventional GHE, in the proposed equalization, the original histogram
is inverted and weighted nonlinearly before computing the Cumulative Probability Density (CPD).
The equalization scheme is compared with Adaptive Histogram Equalization (AHE), Global Histogram
Equalization (GHE) and Contrast Limited Adaptive Histogram Equalization (CLAHE) in terms of the
difference between the mean grey levels of OD and the background, using a quantitative metric known as
Contrast Improvement Index (CII). The CII exhibited by CLAHE, GHE and the proposed scheme are
1.1977 ± 0.0326, 1.0862 ± 0.0304 and 1.3312 ± 0.0486, respectively. The proposed method is observed to
be superior to CLAHE, GHE and AHE and it can be employed in Computerized Clinical Decision Support
Systems (CCDSS) to improve the accuracy of localizing the OD and the computation of CDR.
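The histogram inversion and nonlinear weighting described above could be sketched as follows. The squaring used as the nonlinear weight is an assumption, since the paper's exact weighting function is not given here; the rest follows the GHE recipe of mapping intensities through a cumulative distribution.

```python
import numpy as np

def weighted_inverse_equalization(img, levels=256, power=2.0):
    """Equalization variant: invert the histogram, weight it nonlinearly
    (here raised to `power`, an assumed choice), then map intensities
    through the cumulative distribution as in ordinary GHE."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    inv = hist.max() - hist                  # inverted histogram
    w = inv.astype(float) ** power           # assumed nonlinear weighting
    if w.sum() == 0:                         # degenerate (flat) histogram
        w = np.ones(levels)
    cdf = np.cumsum(w / w.sum())             # cumulative probability density
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lut[img]
```

Because the weights come from the *inverted* histogram, sparsely populated gray levels (such as the bright optic disc) receive wide output ranges while the dominant background is compressed.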
Image Binarization for the uses of Preprocessing to Detect Brain Abnormality ...Journal For Research
Binarization of brain MR images, as a preprocessing step for feature extraction and brain abnormality identification, is described. Binarization is an intermediate step in many methods for detecting normal and abnormal brain tissue in MRI. One of the main problems of MRI binarization is that many brain pixels cannot be binarized correctly, owing to the extensive black background and the large contrast variation between the background and foreground of the MRI. The proposed binarization determines a threshold value from the mean, variance, standard deviation and entropy of the image, followed by a non-gamut enhancement that overcomes this problem. The proposed technique is extensively tested on a variety of MR images and produces good binarization with improved accuracy and reduced error.
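A simplified version of a statistics-driven threshold is sketched below. The paper combines mean, variance, standard deviation and entropy, but since the exact combination is not given here, this sketch uses only the mean plus an assumed multiple of the standard deviation.

```python
import numpy as np

def stats_threshold(img, k=0.5):
    """Illustrative global threshold from image statistics. `k` is an
    assumed weighting; the paper's actual rule also uses variance and
    entropy in a combination not reproduced here."""
    t = img.mean() + k * img.std()
    return (img > t).astype(np.uint8)     # 1 = foreground, 0 = background
```

On MR images with a large black background, the mean sits close to zero, so even a modest offset separates bright tissue from background, which is the effect the statistics-based rule exploits.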
This paper focuses on a novel approach to classifying a brain tumor and computing its area. A tumor is an uncontrolled enlargement of tissue in any part of the human body. Tumors are of several types with differing characteristics; according to those characteristics, some are avoidable and some are unavoidable. Brain tumors are a serious, life-threatening issue today, partly because of today's hectic lifestyle, and medical imaging plays an important role in diagnosing them. In this study an automated system is proposed to detect a tumor and calculate its area. The experiments were carried out on 150 T1-weighted MRI images. Edge-based and watershed segmentation were applied; watershed segmentation separates abnormal cells from normal cells, identifying involved and non-involved areas so that the radiologist can distinguish the affected region. The experimental results show tumor extraction and the tumor area, and indicate whether the tumor is benign or malignant.
Automatic Diagnosis of Abnormal Tumor Region from Brain Computed Tomography I...ijcseit
The research presented in this paper achieves tissue classification and automatic diagnosis of the abnormal tumor region present in Computed Tomography (CT) images using a wavelet-based statistical texture analysis method. A comparative study is performed between the proposed wavelet-based texture analysis method and the Spatial Gray Level Dependence Method (SGLDM). The proposed system consists of four phases: (i) discrete wavelet decomposition, (ii) feature extraction, (iii) feature selection, and (iv) analysis of the extracted texture features by a classifier. A
wavelet based statistical texture feature set is derived from normal and tumor regions. Genetic Algorithm
(GA) is used to select the optimal texture features from the set of extracted texture features. We construct
the Support Vector Machine (SVM) based classifier and evaluate the performance of classifier by
comparing the classification results of the SVM based classifier with the Back Propagation Neural network
classifier (BPN). The results of the SVM and BPN classifiers for the texture analysis
methods are evaluated using Receiver Operating Characteristic (ROC) analysis. Experimental results
show that the classification accuracy of SVM is 96% for 10 fold cross validation method. The system
has been tested with a number of real Computed Tomography brain images and has achieved satisfactory
results.
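The wavelet decomposition and texture-feature phases can be illustrated with a one-level Haar transform and subband energies. This is a minimal sketch of the kind of feature set described, not the full GA-selected, SVM-classified pipeline.

```python
import numpy as np

def haar_dwt2(img):
    """One level of 2-D Haar wavelet decomposition (LL, LH, HL, HH)."""
    a = img[0::2, 0::2].astype(float)   # top-left of each 2x2 block
    b = img[0::2, 1::2].astype(float)   # top-right
    c = img[1::2, 0::2].astype(float)   # bottom-left
    d = img[1::2, 1::2].astype(float)   # bottom-right
    ll = (a + b + c + d) / 4            # approximation
    lh = (a - b + c - d) / 4            # horizontal detail
    hl = (a + b - c - d) / 4            # vertical detail
    hh = (a - b - c + d) / 4            # diagonal detail
    return ll, lh, hl, hh

def texture_energy(img):
    """Energy of each subband: a simple wavelet-based texture feature."""
    return [float((s ** 2).mean()) for s in haar_dwt2(img)]
```

Smooth tissue concentrates energy in the LL band, while textured or tumorous regions spread energy into the detail bands, which is why such statistics discriminate tissue types.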
IRJET- An Efficient Brain Tumor Detection System using Automatic Segmenta...IRJET Journal
This document presents a proposed method for an efficient brain tumor detection system using automatic segmentation with convolutional neural networks. The proposed method uses median filtering for noise removal, Otsu's thresholding for segmentation, and morphological operations for filtering. A convolutional neural network is then used for tumor classification. The methodology is tested on a brain MRI dataset, with evaluations of performance metrics like accuracy, precision, recall, and processing time. The goal is to develop an automated system for early detection of brain tumors using deep learning techniques for analysis of medical images.
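Otsu's thresholding, the segmentation step named above, can be written directly from its definition: pick the gray level that maximizes the between-class variance of the resulting foreground/background split.

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Otsu's method: exhaustively score every candidate threshold by
    the between-class variance and return the best one."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    omega = np.cumsum(p)                          # class-0 probability
    mu = np.cumsum(p * np.arange(levels))         # cumulative mean
    mu_t = mu[-1]                                 # global mean
    denom = omega * (1 - omega)
    denom[denom == 0] = np.nan                    # skip empty classes
    sigma_b = (mu_t * omega - mu) ** 2 / denom    # between-class variance
    return int(np.nanargmax(sigma_b))
```

In the described pipeline this threshold would be applied after median filtering, with morphological opening/closing then cleaning the binary mask before CNN classification.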
ResNet-n/DR: Automated diagnosis of diabetic retinopathy using a residual neu...TELKOMNIKA JOURNAL
Diabetic retinopathy (DR) is a progressive eye disease associated with diabetes, resulting in blindness or blurred vision. The risk of vision loss is dramatically decreased with early diagnosis and treatment. Doctors diagnose DR by examining fundus retinal images for lesions associated with the disease. However, this diagnosis is a tedious and challenging task due to the growing number of undiagnosed and untreated DR cases and the variability of retinal changes across disease stages. Manually analyzing the images has become an expensive and time-consuming task, not to mention that training new specialists takes time and requires daily practice. Our work investigates deep learning methods, particularly convolutional neural networks (CNN), for DR diagnosis across the disease's five stages. A pre-trained residual neural network (ResNet-34) was trained and tested for DR. We then developed computationally efficient and scalable methods by extending ResNet-34 with three additional residual units into a novel ResNet-n/DR. The Asia Pacific Tele-Ophthalmology Society (APTOS) 2019 dataset was used to evaluate the models after applying multiple pre-processing steps to eliminate image noise and improve color contrast, thereby increasing efficiency. Our findings achieved state-of-the-art results compared to previous studies that used the same dataset: 90.7% sensitivity, 93.5% accuracy, 98.2% specificity, 89.5% precision, and 90.1% F1 score.
This document summarizes an automated method for detecting microaneurysms, hard exudates, and cotton wool spots in retinal fundus images. 130 retinal images were used from a public database to train and evaluate classifiers. Features were extracted from pre-processed images and candidate lesions were classified using support vector machines and k-nearest neighbors algorithms. The proposed method achieved 95.6% sensitivity, 94.87% specificity, and 95.38% accuracy in detecting lesions, outperforming random sampling. Active learning was shown to select more informative training samples than random sampling, improving classifier performance.
Automated Detection of Microaneurysm, Hard Exudates, and Cotton Wool Spots in...iosrjce
This paper concerns the automatic identification, by image processing techniques, of abnormalities in retinal images, which is very important in diabetic retinopathy screening. Manual annotations of retinal images are rare and expensive to obtain. The ophthalmoscope used for direct examination is a small, portable apparatus consisting of a light source and a set of lenses for viewing the retina. The existence of diabetic retinopathy can be detected by examining the retina for its characteristic features; the first sign of diabetic retinopathy is usually the appearance of microaneurysms. This paper describes the work needed for the automatic identification of hard exudates and cotton wool spots in retinal images for diabetic retinopathy detection, using a support vector machine (SVM) to classify the images. The system is evaluated on a dataset containing 130 retinal images. Results show that exudates were detected with 96.9% sensitivity, 96.1% specificity and 97.38% accuracy.
Blood vessel segmentation in fundus imagesIRJET Journal
This document summarizes a research paper on blood vessel segmentation in fundus images using random forest classification. The proposed method uses six steps: 1) spatial calibration of fundus images, 2) image preprocessing including illumination equalization, denoising and contrast equalization, 3) optic disc removal, 4) candidate lesion extraction, 5) dynamic shape feature extraction, and 6) random forest classification of lesions. The method was tested on public Diaretdb1 fundus image database and achieved accurate segmentation and detection of microaneurysms and hemorrhages for early diagnosis of diabetic retinopathy.
Design issues and challenges of reliable and secure transmission of medical i...eSAT Publishing House
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
Computer Aided Diagnosis using Margin and Posterior Acoustic Featuresfor Brea...TELKOMNIKA JOURNAL
Breast cancer is the most commonly diagnosed cancer among females worldwide. Computer aided diagnosis (CAD) was developed to assist radiologists in detecting and evaluating nodules so it can improve diagnostic accuracy, avoid unnecessary biopsies, reduce anxiety and control costs. This research proposes a method of CAD for breast ultrasound images based on margin and posterior acoustic features. It consists of preprocessing, segmentation using active contour without edge (ACWE) and morphological, feature extraction and classification. Texture and geometry analysis was used to determine the characteristics of the posterior acoustic and margin nodules. Support vector machines (SVM) provided better performance than multilayer perceptron (MLP). The performance of proposed method achieved the accuracy of 91.35%, sensitivity of 92.00%, specificity of 89.66%, PPV of 95.83%, NPV of 81.26% and Kappa of 0.7915. These results indicate that the developed CAD has potential to be implemented for diagnosis of breast cancer using ultrasound images.
Effective segmentation of sclera, iris and pupil in noisy eye imagesTELKOMNIKA JOURNAL
In today's security-sensitive environment, iris recognition receives the most attention among the various biometric technologies for personal authentication. One of the key steps in an iris recognition system is accurate segmentation of the iris from its surrounding noise, including the pupil and sclera, in a captured eye image. In our proposed method, the input image is first preprocessed using bilateral filtering. After preprocessing, contour-based features such as brightness, color and texture are extracted. Entropy is then measured from the extracted contour-based features to effectively distinguish the data in the images. Finally, a convolutional neural network (CNN) is used, based on the entropy measure, for effective segmentation of the sclera, iris and pupil. The results are analyzed to demonstrate the better performance of the proposed segmentation method compared with existing methods.
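The entropy measure used to distinguish image regions is presumably Shannon entropy of local intensity histograms; a minimal sketch, with an assumed bin count, might be:

```python
import numpy as np

def patch_entropy(patch, bins=32):
    """Shannon entropy (bits) of a patch's intensity histogram. The bin
    count is an assumed parameter; the paper's exact measure over its
    contour-based features may differ."""
    hist, _ = np.histogram(patch, bins=bins)
    p = hist[hist > 0] / hist.sum()        # drop empty bins, normalise
    return float(-(p * np.log2(p)).sum())
```

A uniform region yields entropy near zero while richly textured regions (iris patterns) yield high entropy, which is what makes the measure useful for separating sclera, iris and pupil.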
Multistage Classification of Alzheimer’s DiseaseIJLT EMAS
Alzheimer’s disease is a type of dementia that destroys
memory and other mental functions. During the progression of the disease, protein deposits called plaques and tangles accumulate in the hippocampus, which is located in the temporal lobe of the brain. The disease is not a normal part of aging and worsens over time. Medical imaging techniques like Magnetic
Resonance Imaging (MRI), Computed Tomography (CT) and
Positron Emission Tomography (PET) play significant role in the
disease diagnosis. In this paper, we propose a method for
classifying MRI into Normal Control (NC), Mild Cognitive
Impairment (MCI) and Alzheimer’s Disease (AD). An overall
outline of the methodology includes textural feature extraction,
feature reduction process and classification of the images into
various stages. Classification has been performed with three
classifiers, namely Support Vector Machine (SVM), Artificial Neural Network (ANN) and k-Nearest Neighbours (k-NN).
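Of the three classifiers compared, k-NN is the simplest to sketch; a minimal majority-vote version over the reduced feature vectors could look like this (the distance metric and `k` are assumed defaults):

```python
import numpy as np

def knn_predict(train_X, train_y, X, k=3):
    """Minimal k-nearest-neighbours classifier: Euclidean distances,
    majority vote among the k closest training samples."""
    d = np.linalg.norm(X[:, None, :] - train_X[None], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]            # k nearest per query
    return np.array([np.bincount(train_y[i]).argmax() for i in idx])
```

Here the rows of `train_X` would be the reduced textural feature vectors and `train_y` the stage labels (NC, MCI, AD encoded as integers).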
Transfer learning with multiple pre-trained network for fundus classificationTELKOMNIKA JOURNAL
Transfer learning (TL) is a technique for reusing and modifying a pre-trained network. It reuses the feature extraction layers of a pre-trained network, so the target domain in TL obtains feature knowledge from the source domain; the classification layers of the pre-trained network are then modified so that the target domain can perform a new task. In this article, the target domain is fundus image classification into normal and neovascularization classes. The data consist of 100 patches, split randomly into training and validation sets in a 70:30 ratio. The steps of TL are: load a pre-trained network, replace its final layers, train the network, and assess network accuracy. First, the pre-trained network is a layer configuration of a convolutional neural network architecture; the pre-trained networks used are AlexNet, VGG16, VGG19, ResNet50, ResNet101, GoogLeNet, Inception-V3, InceptionResNetV2, and SqueezeNet. Second, replacing the final layers means replacing the last three layers (the fully connected, softmax, and output layers) with a fully connected layer that classifies according to the number of classes, followed by a softmax and an output layer matching the target domain. Third, the networks were trained to produce optimal accuracy; in this work, gradient descent optimization is used. Fourth, network accuracy is assessed. The experimental results show a testing accuracy between 80% and 100%.
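The "replace the final layers and retrain" step can be shown in miniature without any deep learning framework: treat the frozen convolutional stack as a fixed feature extractor and train only a new fully connected + softmax head by gradient descent, as the article describes. The learning rate and epoch count below are assumed values, and `features` stands in for whatever the frozen layers would produce.

```python
import numpy as np

def retrain_head(features, labels, n_classes=2, lr=0.5, epochs=200):
    """Train only a new fully connected + softmax head on frozen
    features: a miniature of the transfer-learning layer replacement."""
    rng = np.random.default_rng(0)
    W = rng.normal(0, 0.01, (features.shape[1], n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[labels]                 # one-hot targets
    for _ in range(epochs):                       # plain gradient descent
        z = features @ W + b
        z -= z.max(axis=1, keepdims=True)         # numeric stability
        p = np.exp(z); p /= p.sum(axis=1, keepdims=True)  # softmax
        g = (p - Y) / len(Y)                      # cross-entropy gradient
        W -= lr * features.T @ g
        b -= lr * g.sum(axis=0)
    return W, b
```

Because only `W` and `b` are updated, the source-domain feature knowledge is preserved exactly, which is the point of the layer-replacement step.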
Fundus Image Classification Using Two Dimensional Linear Discriminant Analysi...INFOGAIN PUBLICATION
This study constructs a classification system for diabetic retinopathy fundus images. The system consists of two phases, training and testing; each phase consists of preprocessing, segmentation, feature extraction and classification. The tested images come from the MESSIDOR dataset, 100 images in total, divided into four classes of 25 images each: normal, mild, moderate and severe diabetic retinopathy. Preprocessing uses the grayscale green channel, the Haar wavelet, a Gaussian filter and Contrast Limited Adaptive Histogram Equalization. Segmentation uses masking, a subtraction operation between the original image and a mask image, whose purpose is to separate the object from the background. Feature extraction uses Two Dimensional Linear Discriminant Analysis (2DLDA), and classification uses a Support Vector Machine (SVM). Test results over several scenarios show that the highest accuracy of the system is up to 90%.
This document discusses feature extraction techniques for Alzheimer's disease diagnosis using PET scans. It examines voxel intensity, scale space representation through Gaussian pyramids, and a local variance operator to capture local contrast. Voxel intensity provides direct measurement of FDG uptake but neighboring voxels are highly correlated. The scale space representation reduces redundancy through smoothing and subsampling. The local variance operator measures variance within a 3D neighborhood to estimate local contrast, allowing features at different scales by varying neighborhood radius. These feature extraction methods aim to effectively represent relevant information from PET scans to improve computer-aided Alzheimer's diagnosis.
Brain Tumor Detection using MRI ImagesYogeshIJTSRD
Brain tumor segmentation is a very important task in medical image processing. Early diagnosis of brain tumors plays a crucial role in improving treatment possibilities and increases the survival rate of patients. MRI images have proved very useful in recent years for the study of tumor detection and segmentation. One of the most crucial tasks in any brain tumor detection system is the separation of abnormal tissues from normal brain tissue; unusual growth of tissue and blockages of blood vessels can be seen in MR imaging. Brain tumor detection using MRI images is a challenging task due to the brain's complex structure. In this paper, we propose an image segmentation method to detect tumors from MRI images using a GUI in MATLAB. The process of distinguishing brain tumors through MRI images can be sorted into four stages of image processing: preprocessing, feature extraction, image segmentation, and image classification. In this paper, we have used various algorithms to achieve the best results that may help us detect brain tumors at an early stage. Deepa Dangwal | Aditya Nautiyal | Dakshita Adhikari | Kapil Joshi "Brain Tumor Detection using MRI Images" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Special Issue | International Conference on Advances in Engineering, Science and Technology - 2021, May 2021, URL: https://www.ijtsrd.com/papers/ijtsrd42456.pdf Paper URL: https://www.ijtsrd.com/engineering/computer-engineering/42456/brain-tumor-detection-using-mri-images/deepa-dangwal
HIGH RESOLUTION MRI BRAIN IMAGE SEGMENTATION TECHNIQUE USING HOLDER EXPONENTijsc
Image segmentation is a technique to locate certain objects or boundaries within an image, and it plays a crucial role in many medical imaging applications. Many algorithms and techniques have been developed to solve image segmentation problems. In high resolution images, spectral patterns alone are not sufficient for segmentation because of the variability of spectral and structural information; spatial pattern or texture techniques are therefore used. The Holder exponent is thus an efficient technique for segmenting high resolution medical images. The proposed method is implemented in Matlab and verified on various kinds of high resolution medical images. The experimental results show that the proposed image segmentation system is more efficient than existing segmentation systems.
Extraction of Circle of Willis from 2D Magnetic Resonance AngiogramsIDES Editor
Magnetic resonance angiogram is a way to study
cerebrovascular structures. It helps to obtain information
regarding blood flow in a non-invasive fashion. Magnetic
resonance angiograms are examined basically for detection
of vascular pathologies, neurosurgery planning, and vascular
landmark detection. In certain cases it becomes complicated
for the doctors to assess the cerebral vessels or Circle of Willis
from the two-dimensional (2D) brain magnetic resonance
angiograms. In this paper an attempt has been made to extract
the Circle of Willis from 2D magnetic resonance angiograms,
so as to overcome such difficulties. The proposed method preprocesses
the magnetic resonance angiograms and
subsequently extracts the Circle of Willis. The extraction has
been done by color-based segmentation using K-means
clustering algorithm. As the developed method successfully
extracts the vasculature from the brain magnetic resonance
angiograms, therefore it will help the doctors for diagnosis
and serve as a step in the prevention of stroke. The algorithms
are developed on MATLAB 7.6.0 (R2008a) programming
platform.
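The color-based K-means segmentation step can be sketched with a plain Lloyd's-algorithm loop over pixel feature vectors (e.g. RGB values). The evenly spaced initialization is a simplification chosen for determinism, not necessarily what the authors used on their MATLAB platform.

```python
import numpy as np

def kmeans_pixels(pixels, k=2, n_iter=50):
    """Plain k-means over pixel feature vectors: the clustering behind
    color-based segmentation of the vasculature."""
    order = np.linspace(0, len(pixels) - 1, k).astype(int)
    centers = pixels[order].astype(float)          # deterministic init
    for _ in range(n_iter):
        d = np.linalg.norm(pixels[:, None, :] - centers[None], axis=2)
        lbl = d.argmin(axis=1)                     # nearest-center labels
        for j in range(k):
            if np.any(lbl == j):                   # guard empty clusters
                centers[j] = pixels[lbl == j].mean(axis=0)
    return lbl, centers
```

After clustering, the cluster whose center matches the vessel color would be kept as the extracted Circle of Willis mask.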
Using Deep Learning and Transfer Learning for Pneumonia DetectionIRJET Journal
This document presents research on using deep learning and transfer learning models to detect pneumonia from chest x-ray images. The researchers collected a dataset of over 5,000 chest x-rays and split it into training and test sets. Data augmentation techniques were used to expand the training dataset size. Several deep learning models were trained on the data including CNN, DenseNet121, VGG16, ResNet50, and Inception v3. Transfer learning was also utilized by training models pre-trained on ImageNet. The Inception v3 model achieved the highest testing accuracy of 80.29% for pneumonia detection. The researchers concluded the deep learning models could help radiologists diagnose pneumonia but that more work is needed to localize affected lung regions.
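The data augmentation step mentioned above can be illustrated with two toy transforms (a random horizontal flip and a small random shift); real pipelines typically also rotate, zoom and adjust contrast, and the shift range used here is an assumed value.

```python
import numpy as np

def augment(img, rng):
    """Toy augmentation: random horizontal flip plus a small circular
    horizontal shift standing in for a crop/translate."""
    out = img[:, ::-1] if rng.random() < 0.5 else img
    return np.roll(out, int(rng.integers(-2, 3)), axis=1)
```

Applying such label-preserving transforms many times per image is how the study expands a training set of a few thousand x-rays before fitting the CNNs.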
IRJET -An Automatated Learning Approach for Detection of Diabetic Retinopathy...IRJET Journal
This document presents a review of existing methods for detecting diabetic retinopathy (DR) using fundus images and proposes a new automated learning approach using deep learning. Existing methods are discussed, including those using artificial neural networks, a modified AlexNet architecture, convolutional neural networks, and detection of retinal changes in longitudinal fundus images. The proposed method uses contrast limited adaptive histogram equalization for segmentation, followed by a deep belief network for classification of DR stages. It aims to provide an optimal and automated solution for classifying DR severity levels from fundus images to help ophthalmologists detect the condition early. The model will be trained and evaluated on a publicly available Kaggle dataset to classify images as healthy/normal, mild, moderate, or severe.
A Survey on techniques for Diabetic Retinopathy Detection & Classification (IRJET Journal)
This document summarizes research on techniques for detecting and classifying diabetic retinopathy through analysis of fundus images. Diabetic retinopathy is a complication of diabetes that can lead to vision loss if undetected. Recent studies have used deep learning techniques like convolutional neural networks to automatically detect features in fundus images and classify levels of diabetic retinopathy more accurately than manual screening. The document reviews 21 recent papers that tested techniques including CNNs, support vector machines, ensemble models and more to detect microaneurysms and other lesions associated with diabetic retinopathy in fundus images. Many achieved high accuracy above 90% in detecting and diagnosing diabetic retinopathy levels from mild to severe.
Pneumonia prediction on chest x-ray images using deep learning approach (IAESIJAI)
Coronavirus disease 2019 (COVID-19) is an infectious disease whose first symptoms are similar to the flu. In many cases, this disease causes pneumonia. Since pulmonary infections can be observed through radiography images, this paper investigates deep learning methods for automatically analyzing query chest x-ray images. In deep learning, computers can automatically identify useful features for the model directly from the raw data, bypassing the difficult step of manual information refinement. The main part of the deep learning method is the focus on automatically learning data representations. Visual geometry group 16 (VGG16) and DenseNet121 are deep learning methods. The data used are chest x-ray images of pneumonia, divided into training, testing, and validation sets. The best method for this research case is VGG16, with 93% training accuracy and 90% testing accuracy. In this study, DenseNet121 obtained accuracy below VGG16, with 92% in training and 88% in testing. Parameters have a significant influence on the accuracy of each model, and with the parameters that have been used, VGG16 is a method that has high accuracy and can be used to predict chest x-ray images aimed at checking for pneumonia in patients.
An improved dynamic-layered classification of retinal diseases (IAESIJAI)
The retina is the main part of the human eye, and every disease shows its effect on the retina. Eye diseases such as choroidal neovascularization (CNV), DRUSEN, and diabetic macular edema (DME) are the main retinal diseases that damage the retina, and if the damage is identified only in later stages, it is very difficult to restore vision. Optical coherence tomography (OCT) is a non-invasive imaging test for finding retinal diseases; OCT mainly collects cross-section images of the retina. Deep learning (DL) is used to analyze patterns in several complex research applications, especially in disease prediction. In DL, multiple layers give accurate detection of abnormalities in retinal images. In this paper, an improved dynamic-layered classification (IDLC) is introduced to classify retinal diseases based on their abnormality. Image filters are used to filter the noise present in the input images. ResNet is the pre-trained model used to train the features of retinal diseases, and a convolutional neural network (CNN) is the DL model used to analyze the OCT images. The dataset consists of three types of OCT disease datasets from Kaggle. Evaluation results show the performance of IDLC compared with state-of-the-art algorithms, with IDLC achieving the better accuracy.
Discovering Abnormal Patches and Transformations of Diabetic Retinopathy in ... (cscpconf)
Diabetic retinopathy (DR) is one of the retinal diseases caused by the long-term effects of diabetes. Early detection of diabetic retinopathy is crucial, since timely treatment can prevent progressive loss of vision. The most common diagnosis technique for diabetic retinopathy is for clinicians to screen for abnormalities in retinal fundus images. However, the limited number of well-trained clinicians increases the possibility of misdiagnosis. In this work, we propose a big-data-driven automatic computer-aided diagnosis (CAD) system for diabetic retinopathy severity regression based on transfer learning, which starts from a deep convolutional neural network pre-trained on generic images and adapts it to large-scale DR datasets. From images in the training set, we also automatically segment the abnormal patches with an occlusion test, and model the transformations and deterioration process of DR. Our results can be widely used for fast diagnosis of DR, medical education, and public-level healthcare propagation.
Haemorrhage Detection and Classification: A Review (IJERA Editor)
In the Indian population, the number of people with diabetes is increasing day by day. Diabetes is caused by an improper balance of insulin in the human body. The most common complication in people with diabetes is diabetic retinopathy, which leads to blindness. The effect of DR can be reduced by detecting haemorrhages early and treating them at an early stage. In recent years, there has been increased interest in the field of medical image processing. Many researchers have developed advanced algorithms for haemorrhage detection using fundus images. In this paper, we discuss various methods for haemorrhage detection and classification.
Rapid detection of diabetic retinopathy in retinal images: a new approach usi... (IJECEIAES)
The challenge of early detection of diabetic retinopathy (DR), a leading cause of vision loss in working-age individuals in developed nations, was addressed in this study. Current manual analysis of digital color fundus photographs by clinicians, although thorough, suffers from slow result turnaround, delaying necessary treatment. To expedite detection and improve treatment timeliness, a novel automated detection system for DR was developed. This system utilized convolutional neural networks: the visual geometry group 16-layer network (VGG16), a pre-trained deep learning model, for feature extraction from retinal images, and the synthetic minority over-sampling technique (SMOTE) to handle class imbalance in the dataset. The system was designed to classify images into five categories: normal, mild DR, moderate DR, severe DR, and proliferative DR (PDR). Assessment of the system using the Kaggle diabetic retinopathy dataset resulted in a promising 93.94% accuracy during the training phase and 88.19% during validation. These results highlight the system's potential to enhance DR diagnosis speed and efficiency, leading to improved patient outcomes. The study concluded that automation and artificial intelligence (AI) could play a significant role in timely and efficient disease detection and management.
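SMOTE, used above to handle class imbalance, creates synthetic minority samples by interpolating between a minority point and one of its nearest minority neighbours. Here is a small NumPy sketch of that core idea; real pipelines would use imbalanced-learn's implementation, and the neighbour count `k` and toy data below are illustrative.

```python
import numpy as np

def smote(X_min, n_new, k=3, seed=0):
    """Generate n_new synthetic minority samples by nearest-neighbour interpolation."""
    rng = np.random.default_rng(seed)
    synth = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # distances from sample i to every other minority sample
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        d[i] = np.inf                               # exclude the sample itself
        nbr = X_min[rng.choice(np.argsort(d)[:k])]  # one of the k nearest
        gap = rng.random()                          # random point on the segment
        synth.append(X_min[i] + gap * (nbr - X_min[i]))
    return np.array(synth)

rng = np.random.default_rng(1)
minority = rng.normal(5, 1, (10, 4))     # 10 minority samples, 4 features
new = smote(minority, n_new=40)          # e.g. to balance against 50 majority
balanced_count = len(minority) + len(new)
```

Because each synthetic point lies on a segment between two real minority points, the new samples stay inside the minority region rather than being arbitrary noise.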
Early Stage Detection of Alzheimer’s Disease Using Deep Learning (IRJET Journal)
This document presents a study on early detection of Alzheimer's disease using deep learning techniques. The authors build an end-to-end framework for classifying MRI images into four stages of Alzheimer's - mild demented, very mild demented, moderate demented, and non-demented. They use transfer learning with pre-trained models like VGG16 and ResNet50, achieving accuracies of 95% and 84% respectively. A custom CNN model is also developed, achieving 93% accuracy. The models are trained on a dataset of 6400 MRI images from Kaggle. A web application is also created for remote analysis and detection of Alzheimer's stage to help doctors and patients.
An effective deep learning network for detecting and classifying glaucomatous... (IJECEIAES)
Glaucoma is a well-known complex disease of the optic nerve that gradually damages eyesight due to increased intraocular pressure inside the eye. Of the two types of glaucoma, open-angle glaucoma is mostly caused by high intraocular pressure and can damage the eyes temporarily or sometimes permanently; the other is angle-closure glaucoma. Therefore, diagnosis at an early stage is necessary to save our vision. There are several ways to detect glaucomatous eyes, such as tonometry, perimetry, and gonioscopy, but they require time and expertise. Using deep learning approaches could be a better solution. This study focused on the recognition of open-angle-affected eyes from fundus images using deep learning techniques. The study applied the VGG16, VGG19, and ResNet50 deep neural network architectures to classify glaucoma-positive and glaucoma-negative eyes. The experiment was executed on a public dataset collected from Kaggle; every model performed better after augmenting the dataset, with accuracy between 93% and 97.56%. Among the three models, VGG19 achieved the highest accuracy at 97.56%.
Abstract:
A technique for exudate detection in fundus images is presented in this paper. Diabetic retinopathy causes an abnormality known as exudates. The loss of vision can be prevented by detecting the exudates as early as possible. The work mainly aims at detecting exudates present in the green channel of the RGB image by applying a few preprocessing steps, DWT, and feature extraction. The extracted features are fed to 3 different classifiers: KNN, SVM, and NN. Based on the classifier result, if an exudate is present, the exudate ROI is extracted using Canny edge detection followed by morphological operations. The severity of the exudates is established based on the area of the detected exudate.
Keywords: Exudates, Fundus image, Diabetic retinopathy, DWT, KNN, SVM, NN, Canny edge detection, Morphological operations.
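The green-channel preprocessing named in the abstract can be illustrated in a few lines: exudates appear as unusually bright regions, so a first-pass candidate mask is simply the set of bright green-channel pixels. This sketch uses an assumed statistical threshold; the paper's actual DWT, classifier, and morphology stages are omitted.

```python
import numpy as np

def exudate_candidates(rgb, k=2.0):
    """Boolean mask of bright green-channel pixels.

    rgb: H x W x 3 array; k: how many standard deviations above the
    mean a pixel must be to count as a candidate (assumed value).
    """
    green = rgb[:, :, 1].astype(float)     # exudates contrast best in green
    thresh = green.mean() + k * green.std()
    return green > thresh

# Toy image: dim background with one bright "lesion" patch.
img = np.zeros((8, 8, 3))
img[:, :, 1] = 10.0
img[2:4, 2:4, 1] = 200.0                   # 4 bright pixels
mask = exudate_candidates(img)
n_candidates = int(mask.sum())
```

In a real pipeline this mask would only seed the later feature-extraction and classification steps, since the optic disc is also bright and must be excluded.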
This document presents a method for detecting glaucoma from retinal images using convolutional neural networks. Glaucoma causes irreversible blindness if not detected early. Existing methods use hand-crafted feature extraction and classification techniques, but deep learning methods can learn discriminative features automatically. The proposed system uses a CNN for automated glaucoma detection. It extracts the optic cup and disc as regions of interest from preprocessed images. Features like the cup-to-disc ratio are then used to classify images as healthy or glaucomatous using the CNN. The CNN architecture includes convolutional, pooling, and fully connected layers with ReLU and sigmoid activation functions. This fully automated deep learning approach can detect glaucoma even in early stages from fundus images.
A REVIEW PAPER ON THE DETECTION OF DIABETIC RETINOPATHY (IRJET Journal)
This document reviews different techniques for detecting diabetic retinopathy through automated methods. Diabetic retinopathy damages blood vessels in the retina and can cause vision loss if untreated. The review examines methods that use convolutional neural networks and deep learning models to classify images of the retina as normal, mild, moderate, or proliferative diabetic retinopathy. These automated detection methods can analyze retina images faster and less expensively than manual examination. The review also evaluates the accuracy of different models, finding accuracy levels from 73% to 99% depending on the technique and dataset used. Overall, the document demonstrates that automated detection through deep learning is superior to manual detection for identifying diabetic retinopathy in a timely manner.
There are three major complications of diabetes which lead to blindness. They are retinopathy, cataracts, and glaucoma among which diabetic retinopathy is considered as the most serious complication affecting the blood vessels in the retina. Diabetic retinopathy (DR) occurs when tiny vessels swell and leak fluid or abnormal new blood vessels grow hampering normal vision.
Diabetic retinopathy is a widespread cause of visual impairment. Abnormalities like microaneurysms, hemorrhages, and exudates are the key symptoms which play an important role in the diagnosis of diabetic retinopathy. Early detection of these abnormalities may prevent the blurred vision or vision loss due to diabetic retinopathy. Exudates are lipid lesions visible in optical images. Exudates are categorized into hard exudates and soft exudates based on their appearance: hard exudates appear as intense yellow regions, while soft exudates have fuzzy manifestations. Automatic detection of exudates may aid ophthalmologists in the diagnosis of diabetic retinopathy and its early treatment. Fig. 1 shows the key symptoms of diabetic retinopathy.
A deep learning approach based on stochastic gradient descent and least absol... (nooriasukmaningtyas)
Between eighty-five and ninety percent of diabetic patients are affected by diabetic retinopathy (DR), an eye disorder that leads to blindness. Computational techniques can support the detection of DR using retinal images. However, it is hard to measure DR from the raw retinal image. This paper proposes an effective method for identifying DR from retinal images. In this research work, the Wiener filter is first used for preprocessing the raw retinal image. The preprocessed image is then segmented using the fuzzy c-means technique, and features are extracted from the segmented image using the grey level co-occurrence matrix (GLCM). After feature extraction, feature selection is performed using stochastic gradient descent and the least absolute shrinkage and selection operator (LASSO) for accurate identification during the classification process. The inception v3-convolutional neural network (IV3-CNN) model is then used to classify the image as a DR image or a non-DR image. By applying the proposed method, the classification performance of the IV3-CNN model in identifying DR is studied. Using the proposed method, DR is identified with an accuracy of about 95%, and the processed retinal image is identified as mild DR.
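The GLCM texture features mentioned above are straightforward to compute: count how often pairs of grey levels co-occur at a fixed pixel offset, then derive statistics such as contrast. A NumPy sketch for a horizontal offset of one pixel follows; the offset and the number of grey levels are illustrative choices, not the paper's settings.

```python
import numpy as np

def glcm(img, levels):
    """Co-occurrence probabilities for pixel pairs (i, j) one step to the right."""
    M = np.zeros((levels, levels))
    for a, b in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        M[a, b] += 1
    return M / M.sum()                      # normalise counts to probabilities

def contrast(P):
    """GLCM contrast feature: sum over i, j of (i - j)^2 * P(i, j)."""
    i, j = np.indices(P.shape)
    return ((i - j) ** 2 * P).sum()

# Tiny 4-level test image.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm(img, levels=4)
c = contrast(P)     # small, since most horizontal neighbours are equal
```

Other standard GLCM features (energy, homogeneity, correlation) are computed from the same matrix `P` with different weightings.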
Extracting Features from the fundus image using Canny edge detection method f... (vivatechijri)
The document discusses extracting features from fundus images using the Canny edge detection method to enable pre-detection of diabetic retinopathy. It begins by introducing diabetic retinopathy and the importance of early detection. It then discusses applying Canny edge detection to extract features like edges from fundus images. The features extracted can be used for diabetic retinopathy detection using machine learning methods. Results show Canny edge detection successfully extracted features like edges from sample images that could aid diabetic retinopathy diagnosis. The technique presents a potential method for automated pre-detection of the disease.
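Canny edge detection starts from the image gradient; the sketch below shows that first stage using Sobel kernels on a toy step-edge image. The full Canny pipeline adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding, which are omitted here for brevity.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
SOBEL_Y = SOBEL_X.T

def conv2(img, k):
    """'valid'-mode 2-D correlation with a 3x3 kernel."""
    H, W = img.shape
    out = np.zeros((H - 2, W - 2))
    for y in range(H - 2):
        for x in range(W - 2):
            out[y, x] = (img[y:y + 3, x:x + 3] * k).sum()
    return out

def gradient_magnitude(img):
    gx = conv2(img, SOBEL_X)    # horizontal intensity change
    gy = conv2(img, SOBEL_Y)    # vertical intensity change
    return np.hypot(gx, gy)

# Toy patch: vertical step edge between a dark and a bright half.
img = np.zeros((6, 6))
img[:, 3:] = 1.0
mag = gradient_magnitude(img)
edge_col = int(mag.sum(axis=0).argmax())   # column with strongest response
```

On fundus images the same gradient responds strongly along vessel boundaries and lesion edges, which is why these responses serve as features for downstream classifiers.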
Similar to "Classification of neovascularization using convolutional neural network model":
Amazon products reviews classification based on machine learning, deep learni... (TELKOMNIKA JOURNAL)
In recent times, the trend of online shopping through e-commerce stores and websites has grown to a huge extent. Whenever a product is purchased on an e-commerce platform, people leave their reviews about the product. These reviews are very helpful for the store owners and the product’s manufacturers for the betterment of their work process as well as product quality. An automated system is proposed in this work that operates on two datasets D1 and D2 obtained from Amazon. After certain preprocessing steps, N-gram and word embedding-based features are extracted using term frequency-inverse document frequency (TF-IDF), bag of words (BoW), and global vectors (GloVe) and Word2vec, respectively. Four machine learning (ML) models, support vector machine (SVM), random forest (RF), logistic regression (LR), and multinomial Naïve Bayes (MNB); two deep learning (DL) models, convolutional neural network (CNN) and long short-term memory (LSTM); and standalone bidirectional encoder representations from transformers (BERT) are used to classify reviews as either positive or negative. The results obtained by the standard ML, DL models and BERT are evaluated using certain performance evaluation measures. BERT turns out to be the best-performing model in the case of D1 with an accuracy of 90% on features derived by word embedding models, while the CNN provides the best accuracy of 97% upon word embedding features in the case of D2. The proposed model shows better overall performance on D2 as compared to D1.
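The TF-IDF weighting named above can be computed directly: a term's frequency within a document, scaled by how rare the term is across the corpus. A small stdlib sketch using the common smooth variant idf = log(N/df) + 1 (real pipelines would use scikit-learn's `TfidfVectorizer`; the documents below are toy data):

```python
import math
from collections import Counter

def tfidf(docs):
    """Return one {term: weight} dict per tokenised document."""
    N = len(docs)
    # document frequency: in how many documents each term appears
    df = Counter(t for d in docs for t in set(d))
    out = []
    for d in docs:
        tf = Counter(d)
        out.append({t: (n / len(d)) * (math.log(N / df[t]) + 1)
                    for t, n in tf.items()})
    return out

docs = [["great", "product", "great", "price"],
        ["bad", "product"],
        ["great", "quality"]]
weights = tfidf(docs)
# "great" appears twice in doc 0 but is also common across documents,
# so its weight there reflects both effects.
w_great = weights[0]["great"]
```

The resulting per-document weight vectors are what the SVM, RF, LR, and MNB models in the abstract would consume as features.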
Design, simulation, and analysis of microstrip patch antenna for wireless app... (TELKOMNIKA JOURNAL)
In this study, a microstrip patch antenna that works at 3.6 GHz was built and tested to see how well it works. In this work, Rogers RT/Duroid 5880 has been used as the substrate material, with a dielectric permittivity of 2.2 and a thickness of 0.3451 mm; it serves as the base for the examined antenna. The computer simulation technology (CST) studio suite is utilized to model the recommended antenna design. The goal of this study was to obtain a wider bandwidth, a lower voltage standing wave ratio (VSWR), and a lower return loss, but the main goal was to achieve higher gain, directivity, and efficiency. After simulation, the return loss, gain, directivity, bandwidth, and efficiency of the presented antenna are found to be -17.626 dB, 9.671 dBi, 9.924 dBi, 0.2 GHz, and 97.45%, respectively. Moreover, the simulation revealed that the side-lobe level at phi was much better than those of earlier works, at -28.8 dB. Thus, the antenna is a solid contender for wireless technology and more robust communication.
Design and simulation an optimal enhanced PI controller for congestion avoida... (TELKOMNIKA JOURNAL)
This document describes using a snake optimization algorithm to tune the gains of an enhanced proportional-integral controller for congestion avoidance in a TCP/AQM system. The controller aims to maintain a stable and desired queue size without noise or transmission problems. A linearized model of the TCP/AQM system is presented. An enhanced PI controller combining nonlinear gain and original PI gains is proposed. The snake optimization algorithm is then used to tune the parameters of the enhanced PI controller to achieve optimal system performance and response. Simulation results are discussed showing the proposed controller provides a stable and robust behavior for congestion control.
Improving the detection of intrusion in vehicular ad-hoc networks with modifi... (TELKOMNIKA JOURNAL)
Vehicular ad-hoc networks (VANETs) are wireless-equipped vehicles that form networks along the road. The security of this network has been a major challenge. The identity-based cryptosystem (IBC) previously used to secure the networks suffers from membership authentication security features. This paper focuses on improving the detection of intruders in VANETs with a modified identity-based cryptosystem (MIBC). The MIBC is developed using a non-singular elliptic curve with Lagrange interpolation. The public key of vehicles and roadside units on the network are derived from number plates and location identification numbers, respectively. Pseudo-identities are used to mask the real identity of users to preserve their privacy. The membership authentication mechanism ensures that only valid and authenticated members of the network are allowed to join the network. The performance of the MIBC is evaluated using intrusion detection ratio (IDR) and computation time (CT) and then validated with the existing IBC. The result obtained shows that the MIBC recorded an IDR of 99.3% against 94.3% obtained for the existing identity-based cryptosystem (EIBC) for 140 unregistered vehicles attempting to intrude on the network. The MIBC shows lower CT values of 1.17 ms against 1.70 ms for EIBC. The MIBC can be used to improve the security of VANETs.
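The Lagrange interpolation used in the MIBC is the standard building block for reconstructing a shared value from member shares. Below is a Shamir-style sketch of that primitive over a prime field; the paper's elliptic-curve construction and key-derivation details are omitted, and the modulus and polynomial are purely illustrative.

```python
P = 2**61 - 1   # a Mersenne prime; illustrative field modulus

def lagrange_at_zero(shares, p=P):
    """Reconstruct f(0) from k points (x_i, y_i) on a degree-(k-1) polynomial."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % p          # numerator of the basis poly at 0
                den = den * (xi - xj) % p      # its denominator
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

def f(x):
    """Toy secret polynomial f(x) = 42 + 7x + 3x^2 (mod P); secret is f(0)."""
    return (42 + 7 * x + 3 * x * x) % P

shares = [(1, f(1)), (2, f(2)), (5, f(5))]     # any 3 shares suffice
secret = lagrange_at_zero(shares)              # recovers 42
```

Any three valid shares reconstruct the secret, while two reveal nothing; this threshold property is what lets only authenticated members jointly recover group material.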
Conceptual model of internet banking adoption with perceived risk and trust f... (TELKOMNIKA JOURNAL)
Understanding the primary factors of internet banking (IB) acceptance is critical for both banks and users; nevertheless, our knowledge of the role of users’ perceived risk and trust in IB adoption is limited. As a result, we develop a conceptual model by incorporating perceived risk and trust into the technology acceptance model (TAM) theory toward IB. Prior research emphasized that the most essential component in explaining IB adoption behavior is the behavioral intention to use IB. TAM is helpful for figuring out how elements that affect IB adoption are connected to one another. According to previous literature on IB and the use of such technology in Iraq, one has to choose a theoretical foundation that may justify the acceptance of IB from the customer’s perspective. The conceptual model was therefore constructed using the TAM as a foundation. Furthermore, perceived risk and trust were added to the TAM dimensions as external factors. The key objective of this work was to extend the TAM to construct a conceptual model for IB adoption and to get sufficient theoretical support from the existing literature for the essential elements and their relationships, in order to unearth new insights about factors responsible for IB adoption.
Efficient combined fuzzy logic and LMS algorithm for smart antenna (TELKOMNIKA JOURNAL)
The smart antennas are broadly used in wireless communication. The least mean square (LMS) algorithm is a procedure that is concerned in controlling the smart antenna pattern to accommodate specified requirements such as steering the beam toward the desired signal, in addition to placing the deep nulls in the direction of unwanted signals. The conventional LMS (C-LMS) has some drawbacks like slow convergence speed besides high steady state fluctuation error. To overcome these shortcomings, the present paper adopts an adaptive fuzzy control step size least mean square (FC-LMS) algorithm to adjust its step size. Computer simulation outcomes illustrate that the given model has fast convergence rate as well as low mean square error steady state.
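The conventional LMS update the abstract starts from is w ← w + μ·e·x, where e is the instantaneous error between the desired and actual output. A NumPy sketch that uses this rule to identify an unknown FIR system follows; the step size μ is the quantity the paper's fuzzy controller would adapt, fixed here for illustration.

```python
import numpy as np

def lms_identify(x, d, taps=4, mu=0.05):
    """Identify an unknown FIR system with the conventional LMS update rule."""
    w = np.zeros(taps)
    errs = []
    for n in range(taps - 1, len(x)):
        xn = x[n - taps + 1:n + 1][::-1]   # [x[n], x[n-1], ..., x[n-taps+1]]
        e = d[n] - w @ xn                  # instantaneous error
        w = w + mu * e * xn                # C-LMS weight update
        errs.append(e ** 2)
    return w, errs

rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2, 0.1])        # "unknown" system to identify
x = rng.normal(size=2000)                  # white excitation
d = np.convolve(x, h)[:len(x)]             # desired (noise-free) response
w, errs = lms_identify(x, d)
early, late = np.mean(errs[:100]), np.mean(errs[-100:])
```

A fixed small μ converges slowly while a large one fluctuates in steady state; that trade-off is exactly what the fuzzy-controlled step size in the paper is designed to resolve.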
Design and implementation of a LoRa-based system for warning of forest fire (TELKOMNIKA JOURNAL)
This paper presents the design and implementation of a forest fire monitoring and warning system based on long range (LoRa) technology, a novel ultra-low power consumption and long-range wireless communication technology for remote sensing applications. The proposed system includes a wireless sensor network that records environmental parameters such as temperature, humidity, wind speed, and carbon dioxide (CO2) concentration in the air, as well as taking infrared photos. The data collected at each sensor node will be transmitted to the gateway via LoRa wireless transmission. Data will be collected, processed, and uploaded to a cloud database at the gateway. An Android smartphone application that allows anyone to easily view the recorded data has been developed. When a fire is detected, the system will sound a siren and send a warning message to the responsible personnel, instructing them to take appropriate action. Experiments in Tram Chim Park, Vietnam, have been conducted to verify and evaluate the operation of the system.
Wavelet-based sensing technique in cognitive radio network (TELKOMNIKA JOURNAL)
Cognitive radio is a smart radio that can change its transmitter parameter based on interaction with the environment in which it operates. The demand for frequency spectrum is growing due to a big data issue as many Internet of Things (IoT) devices are in the network. Based on previous research, most frequency spectrum was used, but some spectrums were not used, called spectrum hole. Energy detection is one of the spectrum sensing methods that has been frequently used since it is easy to use and does not require license users to have any prior signal understanding. But this technique is incapable of detecting at low signal-to-noise ratio (SNR) levels. Therefore, the wavelet-based sensing is proposed to overcome this issue and detect spectrum holes. The main objective of this work is to evaluate the performance of wavelet-based sensing and compare it with the energy detection technique. The findings show that the percentage of detection in wavelet-based sensing is 83% higher than energy detection performance. This result indicates that the wavelet-based sensing has higher precision in detection and the interference towards primary user can be decreased.
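The energy-detection baseline the abstract compares against is simple: average the sample energy over a sensing window and declare the band occupied when it exceeds a threshold derived from the noise floor. A sketch follows; the threshold factor and signal parameters are illustrative, not from the paper.

```python
import numpy as np

def energy_detect(samples, noise_var, factor=2.0):
    """True if measured energy exceeds factor * expected noise energy."""
    energy = np.mean(np.abs(samples) ** 2)
    return energy > factor * noise_var

rng = np.random.default_rng(0)
noise_var = 1.0
noise_only = rng.normal(0, 1.0, 1000)                    # spectrum hole
signal = 2.0 * np.sin(2 * np.pi * 0.1 * np.arange(1000))
occupied = noise_only + signal                           # primary user present

hole = energy_detect(noise_only, noise_var)   # low energy: band is free
busy = energy_detect(occupied, noise_var)     # high energy: band is used
```

The weakness the abstract points out is visible here: when the signal power drops toward the noise variance (low SNR), the two energies overlap and the threshold test fails, which is what motivates the wavelet-based alternative.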
A novel compact dual-band bandstop filter with enhanced rejection bands (TELKOMNIKA JOURNAL)
In this paper, we present the design of a new wide dual-band bandstop filter (DBBSF) using nonuniform transmission lines. The method used to design this filter is to replace conventional uniform transmission lines with nonuniform lines governed by a truncated Fourier series. Based on how impedances are profiled in the proposed DBBSF structure, the fractional bandwidths of the two 10 dB-down rejection bands are widened to 39.72% and 52.63%, respectively, and the physical size has been reduced compared to that of the filter with the uniform transmission lines. The results of the electromagnetic (EM) simulation support the obtained analytical response and show an improved frequency behavior.
Deep learning approach to DDoS attack with imbalanced data at the application... (TELKOMNIKA JOURNAL)
A distributed denial of service (DDoS) attack is where one or more computers attack or target a server computer by flooding it with internet traffic. As a result, the server cannot be accessed by legitimate users. This attack causes enormous losses for a company because it can reduce the level of user trust and damage the company’s reputation, losing customers due to downtime. One of the services at the application layer that users can access is the web-based lightweight directory access protocol (LDAP) service, which can provide safe and easy access to directory applications. We used a deep learning approach to detect DDoS attacks in the CICDDoS 2019 dataset on a complex computer network at the application layer, to get fast and accurate results when dealing with unbalanced data. Based on the results obtained, DDoS attack detection using a deep learning approach on imbalanced data performs better for binary classes when implemented with the synthetic minority oversampling technique (SMOTE) method. On the other hand, the proposed deep learning approach performs better for detecting DDoS attacks in the multiclass setting when implemented using the adaptive synthetic (ADASYN) method.
The appearance of uncertainties and disturbances often affects the characteristics of both linear and nonlinear systems. Moreover, the stabilization process may deteriorate, incurring a catastrophic effect on system performance. As such, this manuscript addresses the concept of the matching condition for systems that suffer from mismatched uncertainties and exogenous disturbances. The perturbation to the system at hand is assumed to be known and unbounded. To reach this outcome, uncertainties and their classifications are reviewed thoroughly. The structural matching condition is proposed and tabulated in Proposition 1. Two types of mathematical expressions are presented to distinguish the system with matched uncertainty from the system with mismatched uncertainty. Lastly, two-dimensional numerical expressions are provided to exercise the proposed proposition. The outcome shows that the matching condition has the ability to change the system into a design-friendly model for asymptotic stabilization.
Implementation of FinFET technology based low power 4×4 Wallace tree multipli... (TELKOMNIKA JOURNAL)
Many systems, including digital signal processors, finite impulse response (FIR) filters, application-specific integrated circuits, and microprocessors, use multipliers. The demand for low power multipliers is gradually rising day by day in the current technological trend. In this study, we describe a 4×4 Wallace multiplier based on a carry select adder (CSA) that uses less power and has a better power delay product than existing multipliers. HSPICE tool at 16 nm technology is used to simulate the results. In comparison to the traditional CSA-based multiplier, which has a power consumption of 1.7 µW and power delay product (PDP) of 57.3 fJ, the results demonstrate that the Wallace multiplier design employing CSA with first zero finding logic (FZF) logic has the lowest power consumption of 1.4 µW and PDP of 27.5 fJ.
Evaluation of the weighted-overlap add model with massive MIMO in a 5G system (TELKOMNIKA JOURNAL)
The flaw in 5G orthogonal frequency division multiplexing (OFDM) becomes apparent in high-speed situations. Because the doppler effect causes frequency shifts, the orthogonality of OFDM subcarriers is broken, lowering both their bit error rate (BER) and throughput output. As part of this research, we use a novel design that combines massive multiple input multiple output (MIMO) and weighted overlap and add (WOLA) to improve the performance of 5G systems. To determine which design is superior, throughput and BER are calculated for both the proposed design and OFDM. The results of the improved system show a massive improvement in performance over the conventional system and significant improvements with massive MIMO, including the best throughput and BER. When compared to conventional systems, the improved system has a throughput that is around 22% higher and the best performance in terms of BER, with around 25% less error than OFDM.
Reflector antenna design in different frequencies using frequency selective s... (TELKOMNIKA JOURNAL)
In this study, the aim is to obtain two different asymmetric radiation patterns from a single antenna: a fan blade type pattern, produced by antennas shaped like the cross-section of a parabolic reflector, and a cosecant-square radiation characteristic, at two different frequencies. For this purpose, a fan blade type antenna is first designed, and then the reflective surface of this antenna is completed to the shape of the reflective surface of the cosecant-square antenna using a frequency selective surface designed to provide the characteristics suitable for the purpose. The frequency selective surface is designed to provide transmission as close to perfect as possible at the 4 GHz operating frequency, while acting as a band-stop filter and hence a reflective surface for electromagnetic waves at the 5 GHz operating frequency. Thanks to this frequency selective surface used as a reflective surface in the antenna, a fan blade type radiation characteristic is obtained at the 4 GHz operating frequency, while a cosecant-square radiation characteristic is obtained at the 5 GHz operating frequency.
Reagentless iron detection in water based on unclad fiber optical sensor (TELKOMNIKA JOURNAL)
A simple and low-cost fiber-based optical sensor for iron detection is demonstrated in this paper. The sensor head consists of an unclad optical fiber with an unclad length of 1 cm and a straight structure. Results obtained show a linear relationship between the output light intensity and iron concentration, illustrating the functionality of this iron optical sensor. Based on the experimental results, the sensitivity and linearity achieved are 0.0328/ppm and 0.9824, respectively, at a wavelength of 690 nm. At the same wavelength, other performance parameters are also studied. Resolution and limit of detection (LOD) are found to be 0.3049 ppm and 0.0755 ppm, respectively. This iron sensor is advantageous in that it does not require any reagent for detection, making it simpler and more cost-effective to implement for iron sensing.
Impact of CuS counter electrode calcination temperature on quantum dot sensit...TELKOMNIKA JOURNAL
In place of the commercial Pt electrode used in quantum sensitized solar cells, the low-cost CuS cathode is created using electrophoresis. High resolution scanning electron microscopy and X-ray diffraction were used to analyze the structure and morphology of structural cubic samples with diameters ranging from 40 nm to 200 nm. The conversion efficiency of solar cells is significantly impacted by the calcination temperatures of cathodes at 100 °C, 120 °C, 150 °C, and 180 °C under vacuum. The fluorine doped tin oxide (FTO)/CuS cathode electrode reached a maximum efficiency of 3.89% when it was calcined at 120 °C. Compared to other temperature combinations, CuS nanoparticles crystallize at 120 °C, which lowers resistance while increasing electron lifetime.
In place of the commercial Pt electrode used in quantum sensitized solar cells, the low-cost CuS cathode is created using electrophoresis. High resolution scanning electron microscopy and X-ray diffraction were used to analyze the structure and morphology of structural cubic samples with diameters ranging from 40 nm to 200 nm. The conversion efficiency of solar cells is significantly impacted by the calcination temperatures of cathodes at 100 °C, 120 °C, 150 °C, and 180 °C under vacuum. The fluorine doped tin oxide (FTO)/CuS cathode electrode reached a maximum efficiency of 3.89% when it was calcined at 120 °C. Compared to other temperature combinations, CuS nanoparticles crystallize at 120 °C, which lowers resistance while increasing electron lifetime.
A progressive learning for structural tolerance online sequential extreme lea...TELKOMNIKA JOURNAL
This article discusses the progressive learning for structural tolerance online sequential extreme learning machine (PSTOS-ELM). PSTOS-ELM can save robust accuracy while updating the new data and the new class data on the online training situation. The robustness accuracy arises from using the householder block exact QR decomposition recursive least squares (HBQRD-RLS) of the PSTOS-ELM. This method is suitable for applications that have data streaming and often have new class data. Our experiment compares the PSTOS-ELM accuracy and accuracy robustness while data is updating with the batch-extreme learning machine (ELM) and structural tolerance online sequential extreme learning machine (STOS-ELM) that both must retrain the data in a new class data case. The experimental results show that PSTOS-ELM has accuracy and robustness comparable to ELM and STOS-ELM while also can update new class data immediately.
Electroencephalography-based brain-computer interface using neural networksTELKOMNIKA JOURNAL
This study aimed to develop a brain-computer interface that can control an electric wheelchair using electroencephalography (EEG) signals. First, we used the Mind Wave Mobile 2 device to capture raw EEG signals from the surface of the scalp. The signals were transformed into the frequency domain using fast Fourier transform (FFT) and filtered to monitor changes in attention and relaxation. Next, we performed time and frequency domain analyses to identify features for five eye gestures: opened, closed, blink per second, double blink, and lookup. The base state was the opened-eyes gesture, and we compared the features of the remaining four action gestures to the base state to identify potential gestures. We then built a multilayer neural network to classify these features into five signals that control the wheelchair’s movement. Finally, we designed an experimental wheelchair system to test the effectiveness of the proposed approach. The results demonstrate that the EEG classification was highly accurate and computationally efficient. Moreover, the average performance of the brain-controlled wheelchair system was over 75% across different individuals, which suggests the feasibility of this approach.
Adaptive segmentation algorithm based on level set model in medical imagingTELKOMNIKA JOURNAL
For image segmentation, level set models are frequently employed. It offer best solution to overcome the main limitations of deformable parametric models. However, the challenge when applying those models in medical images stills deal with removing blurs in image edges which directly affects the edge indicator function, leads to not adaptively segmenting images and causes a wrong analysis of pathologies wich prevents to conclude a correct diagnosis. To overcome such issues, an effective process is suggested by simultaneously modelling and solving systems’ two-dimensional partial differential equations (PDE). The first PDE equation allows restoration using Euler’s equation similar to an anisotropic smoothing based on a regularized Perona and Malik filter that eliminates noise while preserving edge information in accordance with detected contours in the second equation that segments the image based on the first equation solutions. This approach allows developing a new algorithm which overcome the studied model drawbacks. Results of the proposed method give clear segments that can be applied to any application. Experiments on many medical images in particular blurry images with high information losses, demonstrate that the developed approach produces superior segmentation results in terms of quantity and quality compared to other models already presented in previeous works.
Automatic channel selection using shuffled frog leaping algorithm for EEG bas...TELKOMNIKA JOURNAL
Drug addiction is a complex neurobiological disorder that necessitates comprehensive treatment of both the body and mind. It is categorized as a brain disorder due to its impact on the brain. Various methods such as electroencephalography (EEG), functional magnetic resonance imaging (FMRI), and magnetoencephalography (MEG) can capture brain activities and structures. EEG signals provide valuable insights into neurological disorders, including drug addiction. Accurate classification of drug addiction from EEG signals relies on appropriate features and channel selection. Choosing the right EEG channels is essential to reduce computational costs and mitigate the risk of overfitting associated with using all available channels. To address the challenge of optimal channel selection in addiction detection from EEG signals, this work employs the shuffled frog leaping algorithm (SFLA). SFLA facilitates the selection of appropriate channels, leading to improved accuracy. Wavelet features extracted from the selected input channel signals are then analyzed using various machine learning classifiers to detect addiction. Experimental results indicate that after selecting features from the appropriate channels, classification accuracy significantly increased across all classifiers. Particularly, the multi-layer perceptron (MLP) classifier combined with SFLA demonstrated a remarkable accuracy improvement of 15.78% while reducing time complexity.
Use PyCharm for remote debugging of WSL on a Windo cf5c162d672e4e58b4dde5d797...shadow0702a
This document serves as a comprehensive step-by-step guide on how to effectively use PyCharm for remote debugging of the Windows Subsystem for Linux (WSL) on a local Windows machine. It meticulously outlines several critical steps in the process, starting with the crucial task of enabling permissions, followed by the installation and configuration of WSL.
The guide then proceeds to explain how to set up the SSH service within the WSL environment, an integral part of the process. Alongside this, it also provides detailed instructions on how to modify the inbound rules of the Windows firewall to facilitate the process, ensuring that there are no connectivity issues that could potentially hinder the debugging process.
The document further emphasizes on the importance of checking the connection between the Windows and WSL environments, providing instructions on how to ensure that the connection is optimal and ready for remote debugging.
It also offers an in-depth guide on how to configure the WSL interpreter and files within the PyCharm environment. This is essential for ensuring that the debugging process is set up correctly and that the program can be run effectively within the WSL terminal.
Additionally, the document provides guidance on how to set up breakpoints for debugging, a fundamental aspect of the debugging process which allows the developer to stop the execution of their code at certain points and inspect their program at those stages.
Finally, the document concludes by providing a link to a reference blog. This blog offers additional information and guidance on configuring the remote Python interpreter in PyCharm, providing the reader with a well-rounded understanding of the process.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
Advanced control scheme of doubly fed induction generator for wind turbine us...IJECEIAES
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
Discover the latest insights on Data Driven Maintenance with our comprehensive webinar presentation. Learn about traditional maintenance challenges, the right approach to utilizing data, and the benefits of adopting a Data Driven Maintenance strategy. Explore real-world examples, industry best practices, and innovative solutions like FMECA and the D3M model. This presentation, led by expert Jules Oudmans, is essential for asset owners looking to optimize their maintenance processes and leverage digital technologies for improved efficiency and performance. Download now to stay ahead in the evolving maintenance landscape.
The CBC machine is a common diagnostic tool used by doctors to measure a patient's red blood cell count, white blood cell count and platelet count. The machine uses a small sample of the patient's blood, which is then placed into special tubes and analyzed. The results of the analysis are then displayed on a screen for the doctor to review. The CBC machine is an important tool for diagnosing various conditions, such as anemia, infection and leukemia. It can also help to monitor a patient's response to treatment.
CHINA’S GEO-ECONOMIC OUTREACH IN CENTRAL ASIAN COUNTRIES AND FUTURE PROSPECTjpsjournal1
The rivalry between prominent international actors for dominance over Central Asia's hydrocarbon
reserves and the ancient silk trade route, along with China's diplomatic endeavours in the area, has been
referred to as the "New Great Game." This research centres on the power struggle, considering
geopolitical, geostrategic, and geoeconomic variables. Topics including trade, political hegemony, oil
politics, and conventional and nontraditional security are all explored and explained by the researcher.
Using Mackinder's Heartland, Spykman Rimland, and Hegemonic Stability theories, examines China's role
in Central Asia. This study adheres to the empirical epistemological method and has taken care of
objectivity. This study analyze primary and secondary research documents critically to elaborate role of
china’s geo economic outreach in central Asian countries and its future prospect. China is thriving in trade,
pipeline politics, and winning states, according to this study, thanks to important instruments like the
Shanghai Cooperation Organisation and the Belt and Road Economic Initiative. According to this study,
China is seeing significant success in commerce, pipeline politics, and gaining influence on other
governments. This success may be attributed to the effective utilisation of key tools such as the Shanghai
Cooperation Organisation and the Belt and Road Economic Initiative.
Comparative analysis between traditional aquaponics and reconstructed aquapon...bijceesjournal
The aquaponic system of planting is a method that does not require soil usage. It is a method that only needs water, fish, lava rocks (a substitute for soil), and plants. Aquaponic systems are sustainable and environmentally friendly. Its use not only helps to plant in small spaces but also helps reduce artificial chemical use and minimizes excess water use, as aquaponics consumes 90% less water than soil-based gardening. The study applied a descriptive and experimental design to assess and compare conventional and reconstructed aquaponic methods for reproducing tomatoes. The researchers created an observation checklist to determine the significant factors of the study. The study aims to determine the significant difference between traditional aquaponics and reconstructed aquaponics systems propagating tomatoes in terms of height, weight, girth, and number of fruits. The reconstructed aquaponics system’s higher growth yield results in a much more nourished crop than the traditional aquaponics system. It is superior in its number of fruits, height, weight, and girth measurement. Moreover, the reconstructed aquaponics system is proven to eliminate all the hindrances present in the traditional aquaponics system, which are overcrowding of fish, algae growth, pest problems, contaminated water, and dead fish.
ISSN: 1693-6930
TELKOMNIKA Vol. 17, No. 1, February 2019: 463-472
feature extraction was done manually, selecting features appropriate for the classification process. In addition, conventional methods can classify only a limited number of classes.
Figure 1. (a) New vessel elsewhere, (b) new vessel on the disc; data from the Retina Image Bank
Currently, research on deep learning is growing. Deep learning can be utilized in image processing, and studies on image classification using deep learning are widely available, including deep learning classification of diabetic retinopathy. Pratt et al. classified five DR levels: no DR, mild DR, moderate DR, severe DR, and PDR, using 80,000 images for training and 5,000 for validation from the Kaggle dataset. Their CNN architecture consists of ten convolution layers and three fully connected layers, with an input size of 512x512 pixels and color-normalization preprocessing. The experiment obtained 75% accuracy, 95% specificity, and 30% sensitivity [11].
Lee et al. classified two classes, normal and age-related macular degeneration (AMD), using OCT images: 52,690 normal and 48,312 AMD. The method is a CNN with a modified VGG16 model, consisting of 13 convolution layers and 3 fully connected layers (FCL). The results showed an accuracy of 87.63% to 93.45% [12].
Takahashi et al. classified three DR levels: simple diabetic retinopathy (SDR), pre-proliferative DR (PPDR), and PDR. The data consist of 9,939 fundus images from 2,740 patients. A modified GoogLeNet, with a cropped image size of 1272x1272 pixels and a batch-size reduction of 4, was applied for the classification test and obtained 81% accuracy [13].
Lam et al. proposed the classification of lesions on the fundus into five classes: hemorrhages, microaneurysms, exudates, neovascularization, and normal. The data from the Kaggle dataset consist of 1,324 patches from 243 images; validation uses an e-Ophtha dataset composed of 148 microaneurysm patches and 47 exudate patches. The methods use CNNs with the AlexNet, VGG16, GoogLeNet, ResNet, and Inception-V3 models. The results show an accuracy of 74% to 96% [14].
Research on the specific classification of neovascularization (the two classes NVE and NVD) using deep learning has not yet been done; until now, classification has targeted the grade of DR. Previous research has not achieved maximum accuracy, so there is still a chance to obtain better accuracy.
This paper proposes a classification of fundus images into normal and neovascularization. The CNN models used are AlexNet, VGG16, VGG19, ResNet50, and GoogLeNet, combined with classification methods including the support vector machine (SVM), naïve Bayes classifier (NBC), k-nearest neighbor (k-NN), discriminant analysis, and decision tree. Data were taken from MESSIDOR and the Retina Image Bank as RGB images without any pre-processing. Experiments were carried out on a single graphical processing unit (GPU).
2. Research Method
2.1. Deep Learning
Deep learning is a branch of machine learning. It can be applied to big data and is capable of both supervised and unsupervised learning. A deep learning model
consists of many nonlinear layers. Deep learning is useful for tasks such as pattern analysis and classification, and the data analyzed can be images, text, or sound. Deep learning methods include the convolutional neural network (CNN), deep belief network (DBN), stacked auto-encoder (SAE), and convolutional auto-encoder (CAE) [15-17].
2.2. Convolutional Neural Network
The convolutional neural network (CNN) is a deep learning method, usually used for classification, segmentation, and image analysis. A CNN architecture has several layers: convolution layers (with a stride size and zero padding), max-pooling layers, and fully connected layers. This study used an AlexNet model, which has 25 layers [18]. Figure 2 shows the CNN fundus pipeline.
Figure 2. CNN fundus pipeline
2.2.1. Convolution Layers
Convolution is a multiplication operation between the input matrix and a filter matrix. Commonly used filters include identity, edge detection, sharpen, box blur, and Gaussian blur. The convolution process between an input matrix and a filter is shown in Figure 3 [19].
Figure 3. Input layer, filter and convolution layer
The convolution process involves the input layer, the filter, and the convolution layer. The input volume, the filter dimension, and the output volume are given in (1)-(3).

V_input = h × w × d (1)

here, for the image matrix: h = height; w = width; d = dimension; V_input = input volume

f = f_h × f_w (2)

filter matrix: f = filter; f_h = filter height; f_w = filter width

V_output = (h − f_h + 1) × (w − f_w + 1) × 1 (3)

convolution layer: V_output = output volume
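As a concrete illustration of equations (1)-(3), the following minimal sketch (not from the paper; plain NumPy, with an assumed box-blur filter) performs a "valid" convolution and shows that a 5×5 input with a 3×3 filter yields a (5−3+1)×(5−3+1) = 3×3 output:

```python
import numpy as np

def conv2d_valid(image, kernel):
    # "Valid" convolution: slide the filter over every position where it
    # fits fully, giving an (h - f_h + 1) x (w - f_w + 1) output, equation (3).
    h, w = image.shape
    fh, fw = kernel.shape
    out = np.zeros((h - fh + 1, w - fw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + fh, j:j + fw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)  # h = w = 5
kernel = np.ones((3, 3)) / 9.0                    # box-blur filter (assumed)
print(conv2d_valid(image, kernel).shape)          # (3, 3)
```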
4. ISSN: 1693-6930
TELKOMNIKA Vol. 17, No. 1, February 2019: 463-472
466
2.2.2. Stride
Stride is the number of shifts made during the convolution process on the image matrix. With a stride of 1, the convolution window shifts 1 pixel; with a stride of 2, it shifts 2 pixels, and so on. Shifts can occur both horizontally and vertically [20]. A horizontal stride of 2, where the matrix shifts horizontally, is illustrated in Figure 4.
Figure 4. Stride 2 horizontal
2.2.3. Padding
Padding is applied when the image matrix does not match the filter matrix size, which usually occurs when convolving at the edge of the image. There are two options [20]:
1. Keep the image matrix values as they are (valid padding).
2. Surround the image matrix with zeros (zero padding). Zero padding of size 2 is shown in Figure 5.
Figure 5. Zero padding of size 2
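Stride and zero padding together determine the output size of a convolution layer. The general per-dimension formula, floor((n − f + 2p)/s) + 1, is a standard CNN result assumed here (the paper only states the stride-1, no-padding case in equation (3)):

```python
def conv_output_size(n, f, stride=1, padding=0):
    # Output size along one spatial dimension: floor((n - f + 2p) / s) + 1.
    # Standard CNN formula (assumed); reduces to n - f + 1 for s=1, p=0.
    return (n - f + 2 * padding) // stride + 1

print(conv_output_size(5, 3))             # 3  (equation (3) case)
print(conv_output_size(5, 3, stride=2))   # 2  (stride-2 shift, as in Figure 4)
print(conv_output_size(5, 3, padding=2))  # 7  (zero padding of size 2, Figure 5)
```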
2.2.4. ReLU
The rectified linear unit (ReLU) is a step that changes the output: if an output value is negative, ReLU converts it to zero. The output of ReLU is shown in (4) [19], and the ReLU operation, where negative matrix values become zero, is shown in Figure 6.

f(x) = max(0, x) (4)
Figure 6. ReLU operation
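Equation (4) can be sketched in a line of NumPy (illustrative, not the paper's code):

```python
import numpy as np

def relu(x):
    # Equation (4): f(x) = max(0, x), applied element-wise.
    return np.maximum(0, x)

m = np.array([[-2.0, 3.0], [1.5, -0.5]])
print(relu(m))  # negative entries become zero, positives pass through
```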
2.2.5. Pooling Layer
The pooling layer reduces the dimension of the image matrix. Spatial pooling has three types: max pooling, average pooling, and sum pooling; max pooling is the most commonly used [21]. The max-pooling operation is shown in Figure 7.
Figure 7. Max pooling
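A minimal sketch of the max-pooling operation in Figure 7, assuming a 2×2 window with stride 2 (the common choice; the paper does not fix these values):

```python
import numpy as np

def max_pool(x, size=2):
    # Max pooling: keep the maximum of each non-overlapping size x size block,
    # shrinking the matrix by a factor of `size` in each dimension.
    h, w = x.shape
    out = np.zeros((h // size, w // size))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = x[i * size:(i + 1) * size, j * size:(j + 1) * size].max()
    return out

x = np.array([[1., 3., 2., 4.],
              [5., 6., 7., 8.],
              [3., 2., 1., 0.],
              [1., 2., 3., 4.]])
print(max_pool(x))  # [[6. 8.], [3. 4.]]
```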
2.2.6. Fully Connected Layer
The fully connected layer (FCL) is a layer similar to a neural network. The input of the FCL is the image features converted to a vector; an activation function then classifies the test image into a specific class. The FCL is the final learning phase and functions as the classifier [22].
2.2.7. Dropout
Dropout is a control for overfitting in a neural network. Overfitting occurs when the system learns all features, including noise. The dropout stage is very simple and is useful for handling overfitting problems [23]: in a dropout layer, the units to drop are chosen randomly. The dropout illustration is shown in Figure 8.
Figure 8. (a) Standard neural net, (b) after applying dropout
2.2.8. Softmax
Softmax is applied in the final stage of the FCL and works like an activation layer. Softmax is used in multiclass classification: it generates a probability value for each class, and the highest probability indicates the class of the input object [21]. The softmax activation function is shown in (5).

σ(a_j) = exp(a_j) / Σ_{k=1}^{m} exp(a_k) (5)

here, a = [a_1, …, a_m]^T is a vector with m elements. One can check that Σ_{j=1}^{m} σ(a_j) = 1 for softmax.
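A short sketch of equation (5); the max-subtraction is a standard numerical-stability trick, not part of the equation, and the class scores below are hypothetical:

```python
import numpy as np

def softmax(a):
    # Equation (5): sigma(a_j) = exp(a_j) / sum_k exp(a_k).
    # Subtracting the max first is a common stability trick; it does not
    # change the result.
    e = np.exp(a - np.max(a))
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])  # hypothetical class scores
p = softmax(scores)
print(p.argmax())  # 0: the largest score gets the highest probability
```

The probabilities sum to 1, as required for a class distribution.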
2.2.9. System Architecture
The system architecture uses the 25-layer AlexNet model. The model consists of five convolution layers: one of size 11x11, one of size 5x5, and three of size 3x3, each applied with a stride and padding process. Between the convolution layers are ReLU, normalization, and max-pooling layers. Next come two fully connected layers with ReLU, max pooling, and dropout of 0.5; at the end there is a fully connected layer with softmax. The output classifies the two classes, normal and neovascularization [18]. The system architecture is shown in Figure 9.
Figure 9. System architecture
The other CNN models used in the experiment are VGG16, VGG19 [24], ResNet50 [25], and GoogLeNet [26], consisting of 41, 47, 177, and 144 layers, respectively. The differences between AlexNet and the other CNN models are the image input size and the type and sequence of layers. AlexNet has a 227x227 input size, while the other models use 224x224. The FCL is 'fc7' in AlexNet, VGG16, and VGG19; 'fc1000' in ResNet50; and 'loss3-classifier' in GoogLeNet.
2.3. Data Acquisition
The data were acquired from two public databases, MESSIDOR and the Retina Image Bank. MESSIDOR has been commonly used for the classification and analysis of diabetic retinopathy; however, the number of neovascularization images is relatively small out of the 1,200 available. The MESSIDOR dataset can be accessed via http://www.adcis.net/en/Download-Third-Party/Messidor.html [27]. The Retina Image Bank is a fundus dataset made by the American Society of Retina Specialists; it contains 23,650 fundus images, each accompanied by a description of various retinal diseases, including neovascularization. The Retina Image Bank dataset can be accessed via http://imagebank.asrs.org/home.
The testing data consist of patches obtained through manual cropping of images on the optic disc and other retinal surfaces. Figure 10 shows the neovascularization patches used in this research.
Figure 10. (a) Representative neovascularization patches, (b) normal patches
2.4. Implementation
The stages of implementation are explained below:
1. Prepare a dataset consisting of normal and neovascularization images. Perform manual cropping to obtain image patches of the optic disc and other retinal surfaces.
2. Normalize the patch size to 227x227 pixels, the default size of the AlexNet input layer.
3. Determine the type and number of layers used in the CNN architecture; those used in this study are shown in Figure 9.
4. Determine the number of patches used for the training process. If the number of images in the datastore is greater than the amount of training data used, the system selects random patches for training.
5. Label the training data. The labeling looks as follows:
[[The array of patches], [label of normal or neovascularization]]
6. Extract features from the training dataset. Feature extraction takes place in the activation process: specify the ReLU activation layer used for feature extraction, which is the 7th layer of the CNN architecture used.
7. Determine the amount of data for the testing process.
8. Perform feature extraction on the testing dataset, in the same way as for the training dataset in step 6.
9. Classify using the support vector machine (SVM) [28] and evaluate performance as a percentage of accuracy.
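The feature-extraction and classification steps above can be sketched as follows. This is an illustrative stand-in, not the paper's implementation: the 4096-dimensional 'fc7' activations are replaced by random features, and scikit-learn's SVC plays the role of the SVM.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-ins for the 'fc7' activations: 70 training / 30 testing patches,
# 4096-dimensional features; labels 0 = normal, 1 = neovascularization.
X_train = rng.normal(size=(70, 4096))
y_train = rng.integers(0, 2, size=70)
X_test = rng.normal(size=(30, 4096))
y_test = rng.integers(0, 2, size=30)

clf = SVC(kernel="linear").fit(X_train, y_train)        # step 9: SVM classifier
acc = accuracy_score(y_test, clf.predict(X_test)) * 100
print(f"accuracy: {acc:.2f}%")  # near chance here, since features are random
```

With real CNN activations in place of the random features, the same pipeline yields the accuracies reported in section 3.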
2.5. Cross-validation
Cross-validation uses k-fold cross-validation with k=10. The data are divided into ten parts of equal size: nine parts are used for training and one part for validation; then another part is selected for validation with the remaining nine for training, and so on [29]. The full cross-validation scheme is shown in Figure 11.
Figure 11. 10-Fold cross validation scheme
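A pure-Python sketch of the 10-fold split described above (index bookkeeping only; the classifier calls are omitted):

```python
def k_fold_indices(n, k=10):
    # Split n samples into k equal folds; each fold serves once as the
    # validation part while the other k-1 folds are used for training.
    fold = n // k
    for i in range(k):
        val = list(range(i * fold, (i + 1) * fold))
        train = [j for j in range(n) if j not in set(val)]
        yield train, val

folds = list(k_fold_indices(100, 10))
print(len(folds))        # 10 folds
print(len(folds[0][1]))  # 10 validation samples per fold
print(len(folds[0][0]))  # 90 training samples per fold
```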
The performance measure used for cross-validation is the linear loss, shown in (6):

L = (Σ_j −w_j y_j f_j) / (Σ_j w_j) (6)

here, w_j = weight for observation j; y_j = response for observation j; f_j = classification score for observation j.
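Equation (6) can be checked with a small hand computation. The weights, labels, and scores below are made up; the sign convention follows the equation, with labels in {−1, +1}:

```python
def linear_loss(w, y, f):
    # Equation (6): L = sum_j(-w_j * y_j * f_j) / sum_j(w_j).
    # A correctly classified sample (y_j and f_j of the same sign)
    # contributes a negative term, so lower L is better.
    return sum(-wj * yj * fj for wj, yj, fj in zip(w, y, f)) / sum(w)

w = [1.0, 1.0, 1.0, 1.0]   # equal observation weights
y = [+1, -1, +1, -1]       # true labels in {-1, +1}
f = [0.9, -0.8, 0.7, 0.6]  # classification scores; the last is misclassified
print(linear_loss(w, y, f))  # approximately -0.45
```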
3. Results and Discussion
The experiment was performed on a single-GPU system: Core i7 7700, MSI Z270-A Pro, GTX 1060 6 GB D5, 16 GB DDR4, 550 W PSU, and a 60 GB Golden Memory SSD. Experiment scenarios are needed to obtain optimal measurement results. The scenarios are as follows:
1. The data are divided into training data and testing data, selected randomly. The experiment uses AlexNet, VGG16, VGG19, ResNet50, and GoogLeNet, with the SVM classifier as described in the implementation section.
2. Comparison of accuracy across classification methods: support vector machine (SVM), k-nearest neighbor (k-NN), naïve Bayes classifier (NBC), discriminant analysis, and decision tree. This experiment uses 70 training and 30 testing patches.
3. Comparison of linear loss cross-validation for the second scenario.
The results of the 1st scenario are shown in Table 1. The experiment was carried out with varying amounts of training and testing data. Accuracy is between 90% and 100%; the best average accuracy, 97.22%, was obtained with VGG19.
Furthermore, the 2nd scenario compares the classification methods SVM, k-nearest neighbor, naïve Bayes classifier, discriminant analysis, and decision tree across the CNN models. The highest accuracy, up to 100%, was obtained using VGG16+SVM, VGG19+k-NN, and ResNet50+SVM. The results of the 2nd scenario are shown in Table 2.
The 3rd scenario validates accuracy on the testing data using linear loss 10-fold cross-validation. Table 3 shows a linear loss of 0-26.67%.
Table 1. Classification Accuracy Using SVM (%)
Experiment no.  Training data  Testing data  AlexNet  VGG16  VGG19  ResNet50  GoogLeNet
1               70             30            96.67    100    96.67  100       96.67
2               80             20            95       90     95     100       90
3               90             10            90       100    100    90        90
Average                                      93.89    95.56  97.22  95.56     91.11
Table 2. Comparison of Accuracy Performance between CNN Models (%)
Classification method  AlexNet  VGG16  VGG19  ResNet50  GoogLeNet
SVM                    96.67    100    96.67  100       96.67
k-NN                   90       96.67  100    96.67     96.67
Naïve Bayes            93.33    -      -      96.67     90
Discriminant analysis  93.33    -      -      93.33     93.33
Decision tree          96.67    93.33  93.33  90        93.33
Table 3. Linear Loss Cross-Validation (%)
Classification method  AlexNet  VGG16  VGG19  ResNet50  GoogLeNet
SVM                    3.33     6.67   13.33  3.33      3.33
k-NN                   10       0      3.33   0         6.67
Naïve Bayes            13.33    -      -      3.33      10
Discriminant analysis  13.33    -      -      3.33      16.67
Decision tree          16.67    26.67  3      3.67      10
The results show that the factors influencing classification accuracy are the amount of training and testing data, the classification method, and the CNN model, while linear loss is influenced by the classification method and the CNN model. The best CNN model in this experiment is ResNet50; accuracy is optimal with the SVM classifier, and linear loss is optimal with k-NN.
The results are also compared with previous studies. In previous work, image sizes used for training and testing ranged from 128x128 to 1272x1272 pixels, the number of classes was two or five, and the CNNs used 21, 22, or 45 layers. This research uses two classes of fundus image, an image size of 227x227 pixels (the default AlexNet input), and a 25-layer CNN. The comparison is shown in Table 4.
Table 4. Comparison of Fundus Classification Research using CNN
Authors                Classes    Size of images  Layers  Processing unit  Accuracy (%)
Pratt et al. [11]      5 classes  512x512         45      GPU              75
Lee et al. [12]        2 classes  -               21      GPU              87.63-93.45
Takahashi et al. [13]  2 classes  1272x1272       22      GPU              81
Lam et al. [14]        5 classes  128x128         22      GPU              74-96
Proposed method        2 classes  227x227         25      GPU              90-100
4. Conclusion
Classification of two fundus classes, normal and neovascularization, has been carried out. The experiments used the CNN models AlexNet, VGG16, VGG19, ResNet50, and GoogLeNet, with the classification methods support vector machine, naïve Bayes, k-nearest neighbor, discriminant analysis, and decision tree. The experiments showed an accuracy of 90-100% with a cross-validation loss of 0-26.67%.
For further work, we can create our own CNN models and compare their measurement results with those of existing CNN models.
Acknowledgments
We would like to acknowledge the Ministry of Research, Technology and Higher Education, Indonesia, for funding this research. We also thank the MESSIDOR and Retina Image Bank datasets for the experimental data.
References
[1] Nentwich M.M, Ulbig M.W. Diabetic Retinopathy- Ocular Complications of Diabetes Mellitus. World
Journal of Diabetes. 2012; 6(3): 489–499.
[2] Mathur R, Bhaskaran K, Edwards E, Lee H, Chaturvedi N, Smeeth L, Douglas I. Population trends in
the 10-year incidence and prevalence of diabetic retinopathy in the UK: A cohort study in the Clinical
Practice Research Datalink 2004-2014. BMJ Open. 2017; 7(2): 1–11.
[3] Sivaprasad S, Gupta B, Crosby-nwaobi R, Evans J. Public Health And The Eye Prevalence of
Diabetic Retinopathy in Various Ethnic Groups : A Worldwide Perspective. Surv Ophthalmol, Elsevier
Inc. 2012; 57(4): 347–70.
[4] Novita H, Moestijab. Optical Coherence Tomography ( OCT ). Indonesian Ophhalmology Journal.
2008; 6(3): 169–177.
[5] Ilyas S. Basics of Examination Technique in Eye Disease. 4th ed. Jakarta: Medical Faculty,
University of Indonesia; 2012.
[6] Jelinek H.F, Cree M.J, Leandro J.J.G, Soares J.V.B, Cesar R.M, Luckie A. Automated Segmentation
Of Retinal Blood Vessels And Identification Of Proliferative Diabetic Retinopathy. J Opt Soc Am A
Opt Image Sci Vis. 2007; 24(5): 1448–56.
[7] Goatman K.A, Fleming A.D, Philip S, Williams G.J, Olson J.A, Sharp P.F. Detection of New Vessels
On The Optic Disc Using Retinal Photographs. IEEE Trans Med Imaging. 2011; 30(4): 972–9.
[8] Akram MU, Khalid S, Tariq A, Javed MY. Detection of neovascularization in retinal images using
multivariate m-Mediods based classifier. Comput Med Imaging Graph, Elsevier Ltd. 2013; 37(5-6):
346-357. Available from: http://dx.doi.org/10.1016/j.compmedimag.2013.06.008.
[9] Welikala R.A, Fraz M.M, Dehmeshki J, Hoppe A, Tah V, Mann S, Williamson T.H, Barman S.A.
Genetic Algorithm Based Feature Selection Combined With Dual Classification For The Automated
Detection Of Proliferative Diabetic Retinopathy. Comput Med Imaging Graph, Elsevier Ltd. 2015;
43: 64–77.
[10] Gupta G, Kulasekaran S, Ram K, Joshi N, Sivaprakasam M, Gandhi R. Local characterization of
neovascularization and identification of proliferative diabetic retinopathy in retinal fundus images.
Comput Med Imaging Graph, Elsevier Ltd. 2017; 55: 124–32.
[11] Pratt H, Coenen F, Broadbent D.M, Harding S.P, Zheng Y. Convolutional Neural Networks for
Diabetic Retinopathy. Procedia Comput Sci, MIUA. 2016: 1–6.
[12] Lee C.S, Baughman D.M, Lee A.Y. Deep Learning Is Effective for the Classification of OCT
Images of Normal versus Age-Related Macular Degeneration. Ophthalmol Retina. 2016; 1: 322–327.
[13] Takahashi H, Tampo H, Arai Y, Inoue Y, Kawashima H. Applying artificial intelligence to disease
staging: Deep learning for improved staging of Diabetic Retinopathy. PLoS ONE. 2017; 12(6):
e0179790.
[14] Lam C, Yu C, Huang L, Rubin D. Retinal lesion detection with deep learning using image patches.
Invest Ophthalmol Vis Sci. 2018; 59: 590-596.
[15] Deng L, Yu D. Deep Learning: Methods and Applications. Redmond, Washington: Now Publishers.
2014: 6-7.
[16] Wang R, Zhang J, Dong W, Yu J, Xie C.J, Li R, Chen T, Chen H. A Crop Pests Image Classification
Algorithm Based on Deep Convolutional Neural Network. TELKOMNIKA Telecommunication
Computing Electronics and Control. 2017; 15 (3): 1239–1246.
[17] Baharin A, Abdullah A, Yousoff S N M. Prediction of Bioprocess Production Using Deep Neural
Network Method, TELKOMNIKA Telecommunication Computing Electronics and Control. 2017;
15(2): 805–813.
[18] Krizhevsky A, Sutskever I, Hinton G. ImageNet Classification with Deep Convolutional Neural
Networks. Advances in Neural Information Processing Systems 25. 2012.
[19] Wang C, Xi Y. Convolutional Neural Network for Image Classification. Johns Hopkins University,
Baltimore, MD 21218.
[20] Dumoulin V, Visin F. A guide to convolution arithmetic for deep learning. Université de Montréal &
Politecnico di Milano. 2018: 8-15.
[21] Nagi J, Ducatelle F, Di Caro G, Ciresan D, Meier U, Giusti A, Nagi F, Schmidhuber J, and
Gambardella L. Max-pooling convolutional neural networks for vision-based hand gesture
recognition. In Proceedings of the IEEE International Conference on Signal and Image Processing
Applications. 2011; 342–347.
[22] Sainath T.N, Vinyals O, Senior A, Sak H. Convolutional, long short-term memory, fully connected
deep neural networks. In ICASSP. 2015.
[23] Wu H, Gu X. Towards dropout training for convolutional neural networks. Neural Networks. 2015;
71: 1–10.
[24] Simonyan K, Zisserman A. Very Deep Convolutional Networks for Large-Scale Image Recognition.
2014;1–14. Available from: http://arxiv.org/abs/1409.1556
[25] He K, Zhang X, Ren S, Sun J. Deep Residual Learning for Image Recognition. 2015; 1–9. Available
from: https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/He_Deep_Residual_
Learning_CVPR_2016_paper.pdf.
[26] Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, et al. Going Deeper with Convolutions.
2014; 1–9. Available from : https://www.cs.unc.edu/~wliu/papers/GoogLeNet.pdf.
[27] Decencière E, Zhang X, Cazuguel G, Lay B, Cochener B, Trone C, Gain P, Ordonez R, Massin P,
Erginay A, Charton B, Klein JC. Feedback On A Publicly Distributed Image Database: The Messidor
Database. Image Analysis and Stereology. 2014; 33(3): 231-234.
[28] Manning C.D, Raghavan P, Schütze H. Introduction to Information Retrieval Introduction.
Computational Linguistics. 2008; 35: 234-264.
[29] Dudoit S. Loss-Based Estimation with Cross-Validation: Applications to Microarray Data Analysis
and Motif Finding. Biostatistics. 2003; 5(2): 56–68.