Brain tumors, and gliomas in particular, are among the most notorious forms of cancer, and the medical field has developed many algorithms for segmenting cancerous cells in magnetic resonance imaging (MRI) scans of the brain. This paper proposes such a segmentation algorithm, built around a custom administering attention module: a custom U-Net model combined with an attention mechanism that classifies and segments glioma cells by exploiting long-range dependencies in the feature maps. The customizations reduce code complexity and memory cost. The final model was tested on the BraTS 2019 dataset and compared with other state-of-the-art methods, showing improved performance in the enhancing, non-enhancing, and peritumoral glioma categories.
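The abstract does not spell out the module's equations, so as a rough, hypothetical sketch of the kind of additive attention gate used in Attention U-Net style models (scalar weights `w_x`, `w_g`, `psi` stand in for the usual 1x1 convolutions; none of these names come from the paper):

```python
import math

def attention_gate(skip, gate, w_x=1.0, w_g=1.0, psi=1.0):
    """Toy additive attention gate on 1-D feature vectors: the coarse decoder
    signal `gate` decides how much of each encoder (`skip`) feature passes."""
    out = []
    for x, g in zip(skip, gate):
        q = max(0.0, w_x * x + w_g * g)            # additive attention + ReLU
        alpha = 1.0 / (1.0 + math.exp(-psi * q))   # sigmoid -> [0, 1] weight
        out.append(alpha * x)                      # re-weight the skip feature
    return out

print(attention_gate([0.2, 1.5, -0.3, 2.0], [0.1, 1.0, 0.0, 1.8]))
```

Where skip and gate features agree and are large, the coefficient approaches 1 and the encoder feature passes through; elsewhere it is attenuated toward half or less.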
Dilated Inception U-Net for Nuclei Segmentation in Multi-Organ Histology Images (IRJET Journal)
The document summarizes a study that used a Dilated Inception U-Net model for nuclei segmentation in histology images. Key points:
1. A Dilated Inception U-Net model was used to segment nuclei in histology images, which employs dilated convolutions to efficiently generate feature maps over a large input area.
2. The model was tested on the MoNuSeg dataset containing H&E stained images. Preprocessing included color normalization, data augmentation, and extracting 256x256 patches.
3. The Dilated Inception U-Net modifies the classic U-Net by replacing convolutional blocks with dilated inception blocks containing 1x1 and 3x3 filters with different dilation rates, allowing it to aggregate context at multiple scales without additional parameters.
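The dilated convolutions mentioned above can be illustrated with a minimal 1-D sketch (not the paper's code): spacing the kernel taps `dilation` samples apart widens the receptive field while the number of weights stays fixed.

```python
def dilated_conv1d(signal, kernel, dilation):
    """'Same'-padded 1-D dilated convolution: kernel taps are spaced
    `dilation` samples apart, enlarging the receptive field."""
    k = len(kernel)
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j in range(k):
            idx = i + (j - k // 2) * dilation
            if 0 <= idx < len(signal):   # zero padding at the borders
                acc += kernel[j] * signal[idx]
        out.append(acc)
    return out

# a unit impulse shows the enlarged footprint: the taps land 2 samples apart
print(dilated_conv1d([0, 0, 0, 1, 0, 0, 0], [1.0, 1.0, 1.0], 2))
# → [0.0, 1.0, 0.0, 1.0, 0.0, 1.0, 0.0]
```

An inception block in this style would run several such convolutions with different dilation rates in parallel and concatenate the results.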
Dual Tree Complex Wavelet Transform, Probabilistic Neural Network and Fuzzy C... (IJAEMSJORNAL)
This work proposes an ad hoc technique for MRI brain image classification and segmentation: an automated framework for stage classification using a learning mechanism, with spatial fuzzy clustering to detect brain tumors for biomedical applications. Automated classification and recognition of tumors in diverse MRI images is motivated by the high precision required when dealing with human life. The proposal employs a segmentation technique, the Spatial Fuzzy Clustering Algorithm, to segment MRI images and diagnose brain tumors at an early stage by scrutinizing the anatomical structure. An Artificial Neural Network (ANN) categorizes the affected tumor region in the brain, a Dual Tree Complex Wavelet Transform (DT-CWT) decomposition scheme is used for texture analysis of the image, and a Probabilistic Neural Network with a Radial Basis Function (PNN-RBF) performs the automated brain tumor classification. Processing runs in two phases: feature extraction followed by classification via the PNN-RBF network. The performance of the classifier was assessed using training performance and classification accuracy.
The document summarizes research on medical image segmentation algorithms. It discusses k-means clustering, fuzzy c-means clustering, and proposes enhancements to these algorithms. Specifically, it introduces an enhanced k-means algorithm that improves initial cluster center selection. It also presents a kernelized fuzzy c-means approach that maps data points into a feature space to perform clustering. The algorithms are tested on MRI brain images and evaluated based on segmentation accuracy. The enhanced methods aim to produce more precise segmentations for medical applications such as diagnosis and treatment planning.
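As a concrete reference for the fuzzy c-means baseline these enhancements build on, here is a minimal 1-D sketch (plain FCM on scalar intensities, not the paper's enhanced or kernelized variants; the deterministic initialization is an illustrative choice):

```python
def fcm_1d(data, c=2, m=2.0, iters=50):
    """Plain fuzzy c-means on 1-D intensities: alternate the membership and
    centroid updates for a fixed number of passes (no convergence test)."""
    lo, hi = min(data), max(data)
    # spread the initial centroids evenly across the intensity range
    centers = [lo + (hi - lo) * (j + 0.5) / c for j in range(c)]
    u = [[0.0] * c for _ in data]
    for _ in range(iters):
        # membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        for i, x in enumerate(data):
            d = [abs(x - v) or 1e-12 for v in centers]
            for j in range(c):
                u[i][j] = 1.0 / sum((d[j] / dk) ** (2.0 / (m - 1.0)) for dk in d)
        # centroid update: mean weighted by u_ij^m
        for j in range(c):
            den = sum(u[i][j] ** m for i in range(len(data)))
            centers[j] = sum((u[i][j] ** m) * x for i, x in enumerate(data)) / den
    return centers, u

# two intensity clumps, e.g. lesion vs. background grey levels
centers, u = fcm_1d([10, 12, 11, 80, 82, 79])
print(sorted(round(v, 1) for v in centers))
```

The kernelized variant described above replaces the distance `abs(x - v)` with a kernel-induced distance computed in feature space.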
SEGMENTATION OF MAGNETIC RESONANCE BRAIN TUMOR USING INTEGRATED FUZZY K-MEANS... (ijcsit)
Segmentation is the process of partitioning an image into several objects. It plays a vital role in many fields such as satellite and remote sensing, object identification, face tracking, and, most importantly, medicine. In radiology, magnetic resonance imaging (MRI) is used to investigate the processes and functions of the human body. In hospitals, this technique is widely used for medical diagnosis, staging of disease, and follow-up, without exposure to ionizing radiation. In this paper we propose a novel MR brain image segmentation method for detecting the tumor and measuring the tumor area, with improved performance over conventional segmentation techniques such as fuzzy c-means (FCM) and K-means, and even over manual segmentation, in terms of processing time and accuracy. Simulation results show that the proposed scheme outperforms the existing segmentation methods.
Segmentation and Classification of MRI Brain Tumor (IRJET Journal)
This document presents a study comparing two techniques for detecting brain tumors in MRI images: level set segmentation and K-means segmentation. Features are extracted from the segmented tumors using discrete wavelet transform and gray level co-occurrence matrix. The features are then classified as benign or malignant using a support vector machine. The level set method and K-means method are evaluated based on accuracy, sensitivity, and specificity on a dataset of 41 MRI brain images. The level set method achieved slightly higher accuracy of 94.12% compared to the K-means method.
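The gray-level co-occurrence matrix used here for feature extraction can be sketched briefly (a toy 2-level example with a single horizontal offset; real pipelines quantize to more levels and average several offsets):

```python
def glcm(img, levels, dx=1, dy=0):
    """Grey-level co-occurrence matrix for one offset (dx, dy): count how
    often level i occurs next to level j, then normalise to probabilities."""
    h, w = len(img), len(img[0])
    mat = [[0.0] * levels for _ in range(levels)]
    total = 0
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                mat[img[y][x]][img[ny][nx]] += 1
                total += 1
    return [[v / total for v in row] for row in mat]

def contrast(p):
    """A classic GLCM texture feature: sum over p(i, j) * (i - j)^2."""
    n = len(p)
    return sum(p[i][j] * (i - j) ** 2 for i in range(n) for j in range(n))

tile = [[0, 0, 1],
        [0, 1, 1],
        [1, 1, 1]]          # tiny 2-level 'image patch'
print(contrast(glcm(tile, levels=2)))
```

Features like contrast, energy, and homogeneity computed from this matrix would then feed the SVM classifier.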
MAGNETIC RESONANCE BRAIN IMAGE SEGMENTATION (VLSICS Design)
Segmentation of tissues and structures from medical images is the first step in many image analysis applications developed for medical diagnosis. With the growing research on medical image segmentation, it is essential to categorize the research outcomes and provide researchers with an overview of the existing segmentation techniques for medical images. In this paper, different image segmentation methods applied to magnetic resonance brain images are reviewed. The selection of methods includes sources from image processing journals, conferences, books, dissertations, and theses. The conceptual details of the methods are explained, and mathematical details are avoided for simplicity. Both broad and detailed categorizations of the reviewed segmentation techniques are provided, and the state-of-the-art research is presented with emphasis on the developed techniques and the image properties they use. The methods described are not always mutually independent, so their interrelationships are also stated. Finally, conclusions are drawn summarizing the commonly used techniques and their complexities in application.
IRJET - Clustering Algorithm for Brain Image Segmentation (IRJET Journal)
The document presents a clustering algorithm for brain image segmentation using fuzzy c-means clustering. It aims to optimize the segmentation process and achieve higher accuracy rates when segmenting human MRI brain images. The fuzzy c-means algorithm is combined with rough set theory for segmentation. The algorithm segments images into homogeneous regions where adjacent regions are heterogeneous. This approach is evaluated on a set of brain images and demonstrates effectiveness as well as a comparison to other related algorithms. The goal of the algorithm is to simplify images and extract useful information for detecting brain tumors.
The document proposes a semi-supervised learning method called transfer learning with dual-task consistency (TL-DTC) for 3D left atrium segmentation from MRI scans. The method uses a V-Net backbone with a hybrid dilated convolution module. It is trained in two phases: first pre-training on sub-images, then fine-tuning on full images. Dual-task consistency is enforced between segmentation and signed distance map regression tasks to improve predictions. On the left atrium dataset, TL-DTC outperforms other semi-supervised methods with a Dice score of 90.72% using 20% labeled data.
This document provides a survey of various image segmentation techniques used in image processing. It begins with an introduction to image segmentation and its importance in fields like pattern recognition and medical imaging. It then categorizes and describes different segmentation approaches like edge-based, threshold-based, region-based, etc. The literature survey section summarizes several papers on specific segmentation algorithms or applications. It concludes with a table comparing the advantages and disadvantages of different segmentation techniques. The overall document aims to provide an overview of segmentation methods and their uses in computer vision.
A Survey on Image Segmentation and its Applications in Image Processing (IJEEE)
As technology grows day by day, computer vision has become a vital field for understanding the behavior of an image. Image segmentation is a subfield of computer vision that deals with partitioning an image into a number of segments. It has found wide application in pattern recognition, texture analysis, and medical image processing. This paper focuses on the distinct kinds of image segmentation techniques used in computer vision. A survey of these techniques is presented, describing the importance of each, and a comparison and conclusion are given at the end of the paper.
Effective Multi-Stage Training Model for Edge Computing Devices in Intrusion ... (IJCNCJournal)
Intrusion detection poses a significant challenge within expansive and persistently interconnected environments. As malicious code continues to advance and sophisticated attack methodologies proliferate, various advanced deep learning-based detection approaches have been proposed. Nevertheless, the complexity and accuracy of intrusion detection models still need further enhancement to render them more adaptable to diverse system categories, particularly within resource-constrained devices, such as those embedded in edge computing systems. This research introduces a three-stage training paradigm, augmented by an enhanced pruning methodology and model compression techniques. The objective is to elevate the system's effectiveness, concurrently maintaining a high level of accuracy for intrusion detection. Empirical assessments conducted on the UNSW-NB15 dataset evince that this solution notably reduces the model's dimensions, while upholding accuracy levels equivalent to similar proposals.
Geometric Correction for Braille Document Images (csandit)
GAUSSIAN KERNEL BASED FUZZY C-MEANS CLUSTERING ALGORITHM FOR IMAGE SEGMENTATION (cscpconf)
Image processing is an important research area in computer vision, and clustering, an unsupervised learning method, can be used for image segmentation. Many methods exist for image segmentation, which plays an important role in image analysis; it is one of the first and most important tasks in image analysis and computer vision. The proposed system presents a variation of the fuzzy c-means algorithm for image clustering. The kernel fuzzy c-means clustering algorithm (KFCM) is derived from the fuzzy c-means algorithm (FCM) and improves clustering accuracy significantly over classical FCM. The new algorithm, called the Gaussian kernel based fuzzy c-means clustering algorithm (GKFCM), is characterized by a fuzzy clustering approach that aims to guarantee noise insensitivity and preservation of image detail. The objective of the work is to cluster the low-intensity inhomogeneous regions of noisy images using the clustering method, then segment that portion separately using a content-based level set approach. The system is designed to produce better segmentation results for images corrupted by noise, making it useful in fields such as medical image analysis, including tumor detection, study of anatomical structure, and treatment planning.
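For reference, the kernel-induced distance that distinguishes KFCM/GKFCM from plain FCM is commonly written as d² = 2(1 − K(x, v)) with a Gaussian kernel K; a small sketch of that distance (the paper's exact formulation may differ):

```python
import math

def kernel_dist2(x, v, sigma):
    """Kernel-induced squared distance d^2 = 2 * (1 - K(x, v)) with a
    Gaussian kernel K(x, v) = exp(-(x - v)^2 / sigma^2)."""
    return 2.0 * (1.0 - math.exp(-((x - v) ** 2) / (sigma ** 2)))

# nearby grey levels give a tiny distance; distant ones saturate near 2,
# which is what makes the kernelised update less sensitive to noise/outliers
print(kernel_dist2(10, 11, sigma=20))
print(kernel_dist2(10, 200, sigma=20))
```

Because the distance saturates, a single noisy pixel far from a centroid cannot drag that centroid arbitrarily far, which is the noise-insensitivity property the abstract refers to.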
Brain Tumor Detection using Clustering Algorithms in MRI Images (IRJET Journal)
This document presents a novel brain tumor detection system using k-means clustering integrated with fuzzy c-means clustering and artificial neural networks. The system takes advantage of both algorithms for minimal computation time and accuracy. It accurately extracts the tumor region and calculates the tumor area by comparing the results to ground truths of the MRI images. K-means performs initial segmentation, then fuzzy c-means locates the approximate segmented tumor based on membership and cluster selection criteria. Features are extracted and an artificial neural network classifies MRI images as normal or containing a tumor. The system achieves high accuracy, sensitivity and specificity when validated against ground truths.
Brain tumor segmentation using asymmetry based histogram thresholding and k m... (eSAT Publishing House)
This document presents a method for segmenting brain tumors from MRI images using asymmetry-based histogram thresholding and k-means clustering. The method involves 8 steps: 1) preprocessing the MRI image using sharpening and median filters, 2) computing histograms of the left and right halves of the image, 3) calculating a threshold value using the difference between left and right histograms, 4) applying thresholding and morphological operations to extract the tumor region, 5) applying k-means clustering and using the cluster centroids to refine the segmentation. The method is tested on 30 MRI images and results show the tumor region is accurately segmented. The segmented tumors can then be used for quantification, classification, and computer-assisted diagnosis of brain tumors.
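Steps 2-4 above (left/right histograms and an asymmetry-derived threshold) might be sketched as follows; this is a rough reading of the idea, not the paper's exact rule:

```python
def asymmetry_threshold(img, bins=256):
    """Histogram the left and right halves of the image separately and take
    the grey level where the two histograms disagree most as the threshold."""
    w = len(img[0])
    hl, hr = [0] * bins, [0] * bins
    for row in img:
        for x, v in enumerate(row):
            (hl if x < w // 2 else hr)[v] += 1
    # the biggest left/right asymmetry points at the one-sided tumour level
    return max(range(bins), key=lambda g: abs(hl[g] - hr[g]))

# toy scan: level-200 'tumour' pixels appear only in the right half
scan = [[50, 10, 200, 200],
        [10, 10, 200, 10],
        [50, 10, 200, 200],
        [10, 10, 10, 10]]
print(asymmetry_threshold(scan))   # → 200
```

The exploit here is that a healthy brain is roughly bilaterally symmetric, so a unilateral tumour shows up as a histogram difference between the two halves.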
Brain tumor visualization for magnetic resonance images using modified shape... (IJECEIAES)
3D visualization plays an essential role in medical diagnosis and treatment planning, especially for brain cancer. There have been many attempts at brain tumor reconstruction and visualization using various techniques, but the problem is still considered unsolved, as more accurate results are needed in this critical field. In this paper, a sequence of 2D slices of brain magnetic resonance images was used to reconstruct a 3D model of the brain tumor. The images were automatically segmented using a wavelet multi-resolution expectation-maximization algorithm, and the inter-slice gaps were then interpolated using the proposed modified shape-based interpolation method. The method involves three main steps: transforming the binary tumor images into distance images using a suitable distance function, interpolating the distance images using cubic spline interpolation, and thresholding the interpolated values to obtain the reconstructed slices. The final tumor is then visualized as a 3D isosurface. We evaluated the proposed method by removing an original slice from the input images and interpolating it; the results outperform the original shape-based interpolation method by an average of 3%, reaching 99% accuracy for some slice images.
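The shape-based interpolation pipeline (binary mask, signed distance, interpolate, re-threshold) can be sketched in 1-D; the paper uses cubic spline interpolation between many slices, while this sketch blends two slices linearly:

```python
def signed_dist(mask):
    """Signed distance of a 1-D binary mask: negative inside the shape,
    positive outside (brute force; fine for a sketch)."""
    inside = [i for i, v in enumerate(mask) if v]
    outside = [i for i, v in enumerate(mask) if not v]
    dist = lambda i, pts: min(abs(i - j) for j in pts) if pts else len(mask)
    return [-dist(i, outside) if v else dist(i, inside)
            for i, v in enumerate(mask)]

def interp_slice(mask_a, mask_b, t=0.5):
    """Blend the signed distances of two slices and re-threshold at zero to
    recover the in-between shape."""
    return [1 if (1 - t) * a + t * b <= 0 else 0
            for a, b in zip(signed_dist(mask_a), signed_dist(mask_b))]

a = [0, 1, 1, 1, 0, 0, 0, 0]   # tumour cross-section in slice k
b = [0, 0, 0, 1, 1, 1, 0, 0]   # cross-section in slice k+2
print(interp_slice(a, b))      # → [0, 0, 1, 1, 1, 0, 0, 0]
```

Interpolating distances rather than raw binary values is what lets the in-between shape slide smoothly between the two cross-sections instead of flickering on and off.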
AN EFFICIENT M-ARY QIM DATA HIDING ALGORITHM FOR THE APPLICATION TO IMAGE ERR... (IJNSA Journal)
Methods such as edge-directed interpolation and projection onto convex sets (POCS), widely used for image error concealment to produce better image quality, are complex and time consuming. Moreover, they are unsuitable for real-time error concealment where the decoder may lack sufficient computational power or must operate online. In this paper, we propose a data-hiding scheme for error concealment in digital images. Edge-direction information for each block is extracted at the encoder and embedded imperceptibly into the host media using quantization index modulation (QIM), reducing the workload of the decoder. System performance, in terms of fidelity and computational load, is improved using M-ary data modulation based on near-orthogonal QIM. The decoder extracts the embedded features (edge information), which are then used to recover the lost data. Experimental results support the effectiveness of the proposed scheme.
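Basic binary QIM, the building block behind the M-ary scheme described above, can be sketched as follows; the step size and lattice layout here are illustrative, not the paper's parameters:

```python
def qim_embed(x, bit, step=8):
    """Binary quantisation index modulation: snap x onto the lattice that
    encodes `bit` (multiples of step for 0, shifted by step/2 for 1)."""
    offset = bit * step / 2.0
    return step * round((x - offset) / step) + offset

def qim_extract(y, step=8):
    """Decode by checking which of the two lattices y lies closest to."""
    e0 = abs(y - qim_embed(y, 0, step))
    e1 = abs(y - qim_embed(y, 1, step))
    return 0 if e0 <= e1 else 1

marked = qim_embed(131, 1)           # pixel value 131 carrying bit 1
print(marked, qim_extract(marked))   # → 132.0 1
# small channel noise still decodes correctly
print(qim_extract(marked + 1.5))     # → 1
```

An M-ary variant uses M interleaved lattices instead of two, so each quantised sample carries log2(M) bits at the same embedding distortion.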
Fuzzy k c-means clustering algorithm for medical image (Alexander Decker)
This document summarizes and compares several algorithms used for medical image segmentation, including thresholding, classifiers, Markov random field models, artificial neural networks, atlas-guided approaches, deformable models, and clustering analysis methods like k-means and fuzzy c-means. It provides details on the fuzzy c-means and k-means clustering algorithms, including their process and flowcharts. A new fuzzy k-c-means algorithm is proposed that combines fuzzy c-means and k-means clustering to improve segmentation time. The algorithms are tested on MRI brain images and their results are analyzed and compared based on time, iterations, and accuracy.
A deep learning based stereo matching model for autonomous vehicle (IAESIJAI)
Autonomous vehicles are one of the prominent areas of research in computer vision. In today's AI world, the concept of autonomous vehicles has become popular largely as a way to avoid accidents caused by driver negligence. Accurately perceiving the depth of the surrounding region is a challenging task for autonomous vehicles. Sensors such as light detection and ranging (LiDAR) can be used for depth estimation, but they are expensive; stereo matching is an alternative solution for estimating depth. The main difficulty in stereo matching is minimizing mismatches in ill-posed regions, such as occluded, textureless, and discontinuous regions. This paper presents an efficient deep stereo matching technique for estimating the disparity map from stereo images in ill-posed regions. Images from the Middlebury stereo dataset are used to assess the efficacy of the proposed model. The experimental outcome depicts that the proposed model generates more reliable results in occluded, textureless, and discontinuous regions than existing techniques.
Brain Tumor Detection and Classification Using MRI Brain Images (IRJET Journal)
This document presents research on detecting and classifying brain tumors using MRI images. It discusses:
1) Using k-means clustering for pre-processing MRI images to reduce noise and increase detection accuracy. Marker-controlled watershed transformation and grey-level co-occurrence matrix are then used for tumor detection and feature extraction.
2) Two classification methods are employed: support vector machine (SVM) and artificial neural network (ANN). SVM and ANN have been shown to accurately classify tumors in an effective manner.
3) The paper proposes an algorithm to differentiate between benign and malignant tumors using watershed segmentation and extracting grey-level co-occurrence matrix features from MRI images, which are then classified using SVM and ANN.
DIRECTIONAL CLASSIFICATION OF BRAIN TUMOR IMAGES FROM MRI USING CNN-BASED DEE... (IRJET Journal)
This document presents research on using a convolutional neural network (CNN) model for the detection and classification of brain tumors from MRI images. The CNN model improves the accuracy of tumor detection and can serve as a useful tool for physicians. The researchers trained and tested several CNN architectures, including CNN, ResNet50, MobileNetV2, and VGG19 on an MRI brain image database. Their proposed model uses a modified Residual U-Net architecture with residual blocks and attention gates to better segment tumors and extract local features from MRI images. Evaluation results found their model achieved better accuracy than existing methods like U-Net and CNN for brain tumor segmentation tasks.
Review paper on segmentation methods for multiobject feature extraction (eSAT Journals)
Abstract: Feature extraction and representation play a vital role in multimedia processing. It is still a challenge for computer vision systems to extract ideal features that represent the intrinsic characteristics of an image. A multiobject feature extraction system is one that can extract the features and locations of multiple objects in an image. In this paper we discuss various methods for extracting the locations and features of multiple objects, and describe a system that does so by implementing the algorithm as hardware logic on a field-programmable gate array based platform. Many multiobject extraction methods can be used for image segmentation based on motion, color intensity, and texture. By calculating the zeroth- and first-order moments of objects, it is possible to obtain the locations and sizes of multiple objects in an image. Keywords: multiobject extraction, image segmentation
Illustration of Medical Image Segmentation based on Clustering Algorithms (rahulmonikasharma)
Image segmentation is the most basic and crucial process for facilitating the characterization and representation of structures of interest in medical or natural images. Despite intensive research, segmentation remains a challenging problem because of varying image content, cluttered objects, occlusion, non-uniform object surfaces, and other factors. Numerous algorithms and techniques are available for image segmentation, yet there is still a need to develop an efficient, fast method for medical image segmentation. This paper focuses on the K-means and fuzzy c-means clustering algorithms to segment malaria blood samples more accurately.
This document provides an overview of medical image segmentation using deep learning techniques. It discusses several deep learning architectures used for medical image segmentation, including U-Net, V-Net, GoogleNet, and ResNet. U-Net uses a symmetric encoder-decoder structure with skip connections to efficiently segment biomedical images. V-Net directly processes 3D MRI volumes for prostate segmentation. GoogleNet and ResNet employ inception modules and residual connections, respectively, to reduce parameters and enable training of very deep networks for medical image analysis tasks. The document aims to classify medical image segmentation approaches, discuss challenges, and outline future research directions using deep learning.
Supervised Blood Vessel Segmentation in Retinal Images Using Gray level and M...IJTET Journal
The segmentation of blood vessels within the retina is an essential step in the diagnosis of diabetic retinopathy. This paper presents a new method for automatically segmenting blood vessels in retinal images. Two techniques for segmenting retinal blood vessels, based on different image processing techniques, are described and their strengths and weaknesses compared. The method uses a neural network (NN) scheme for pixel classification, with gray-level and moment invariant-based features for pixel representation. The performance of each algorithm was tested on the STARE and DRIVE datasets, which are widely used for this purpose since they contain retinal images together with their vascular structures. Performance on both sets of test images is better than that of existing methods, and the approach proves especially accurate for vessel detection in STARE images. Its effectiveness and robustness under different image conditions, together with its simplicity and fast implementation, make it suitable for early detection of diabetic retinopathy (DR).
Because of the rapid growth in technology breakthroughs, including
multimedia and cell phones, Telugu character recognition (TCR) has recently
become a popular study area. It is still necessary to construct automated and
intelligent online TCR models, even if many studies have focused on offline
TCR models. This paper presents the construction and validation of a Telugu character dataset using an Inception- and ResNet-based model. The collection of 645
letters in the dataset includes 18 Achus, 38 Hallus, 35 Othulu, 34×16
Guninthamulu, and 10 Ankelu. The proposed technique aims to efficiently
recognize and identify distinctive Telugu characters online. This model's main
pre-processing steps to achieve its goals include normalization, smoothing,
and interpolation. Improved recognition performance can be attained by using
stochastic gradient descent (SGD) to optimize the model's hyperparameters.
Scientific workload execution on a distributed computing platform such as a
cloud environment is time-consuming and expensive. The scientific workload
has task dependencies with different service level agreement (SLA)
prerequisites at different levels. Existing workload scheduling (WS) designs are not efficient in assuring SLA at the task level and, alongside, induce higher costs, as the majority of scheduling mechanisms reduce either time or energy alone. To reduce cost, both energy and makespan must be optimized together when allocating resources. No prior work has considered optimizing energy and processing time together while meeting task-level SLA requirements. This paper
presents task level energy and performance assurance-workload scheduling
(TLEPA-WS) algorithm for the distributed computing environment. The
TLEPA-WS guarantees energy minimization with the performance
requirement of the parallel application under a distributed computational
environment. Experiment results show a significant reduction in using energy
and makespan; thereby reducing the cost of workload execution in comparison
with various standard workload execution models.
Similar to Custom administering attention module for segmentation of magnetic resonance imaging of the brain
This document provides a survey of various image segmentation techniques used in image processing. It begins with an introduction to image segmentation and its importance in fields like pattern recognition and medical imaging. It then categorizes and describes different segmentation approaches like edge-based, threshold-based, region-based, etc. The literature survey section summarizes several papers on specific segmentation algorithms or applications. It concludes with a table comparing the advantages and disadvantages of different segmentation techniques. The overall document aims to provide an overview of segmentation methods and their uses in computer vision.
A Survey on Image Segmentation and its Applications in Image Processing IJEEE
As technology grows day by day, computer vision has become a vital field for understanding the behavior of an image. Image segmentation is a subfield of computer vision that deals with partitioning an image into a number of segments. Image segmentation has found huge application in pattern recognition, texture analysis, and medical image processing. This paper focuses on the distinct sorts of image segmentation techniques that are utilized in computer vision. Thus a survey has been created of various image segmentation techniques that describes the importance of each. A comparison and conclusion are provided at the end of this paper.
Effective Multi-Stage Training Model for Edge Computing Devices in Intrusion ...IJCNCJournal
Intrusion detection poses a significant challenge within expansive and persistently interconnected environments. As malicious code continues to advance and sophisticated attack methodologies proliferate, various advanced deep learning-based detection approaches have been proposed. Nevertheless, the complexity and accuracy of intrusion detection models still need further enhancement to render them more adaptable to diverse system categories, particularly within resource-constrained devices, such as those embedded in edge computing systems. This research introduces a three-stage training paradigm, augmented by an enhanced pruning methodology and model compression techniques. The objective is to elevate the system's effectiveness, concurrently maintaining a high level of accuracy for intrusion detection. Empirical assessments conducted on the UNSW-NB15 dataset evince that this solution notably reduces the model's dimensions, while upholding accuracy levels equivalent to similar proposals.
Geometric Correction for Braille Document Images csandit
GAUSSIAN KERNEL BASED FUZZY C-MEANS CLUSTERING ALGORITHM FOR IMAGE SEGMENTATIONcscpconf
Image processing is an important research area in computer vision, and clustering, an unsupervised learning technique, can be used for image segmentation. Many methods exist for image segmentation, which plays an important role in image analysis: it is one of the first and most important tasks in image analysis and computer vision. The proposed system presents a variation of the fuzzy c-means algorithm for image clustering. The kernel fuzzy c-means clustering algorithm (KFCM) is derived from the fuzzy c-means clustering algorithm (FCM) and improves clustering accuracy significantly compared with classical FCM. The new algorithm, called the Gaussian kernel-based fuzzy c-means clustering algorithm (GKFCM), is characterized by a fuzzy clustering approach that aims to guarantee noise insensitivity and preservation of image detail. The objective of the work is to cluster the low-intensity inhomogeneity area from noisy images, and to segment that portion separately using a content level set approach. The purpose of the system is to produce better segmentation results for images corrupted by noise, making it useful in fields like medical image analysis, such as tumor detection, study of anatomical structure, and treatment planning.
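To illustrate the kernel-induced distance at the heart of KFCM/GKFCM, the sketch below runs Gaussian-kernel fuzzy c-means on synthetic 1-D intensities. The percentile initialization, `sigma`, and iteration count are arbitrary choices for the demo, not values from the paper.

```python
import numpy as np

def gkfcm(x, c=2, m=2.0, sigma=2.0, iters=40):
    """Gaussian-kernel fuzzy c-means on 1-D intensities (minimal sketch)."""
    v = np.percentile(x, np.linspace(25, 75, c)).astype(float)  # deterministic init
    for _ in range(iters):
        k = np.exp(-((x[None, :] - v[:, None]) ** 2) / (sigma ** 2))  # K(x, v_i)
        d = np.clip(1.0 - k, 1e-12, None)            # kernel-induced distance 1 - K
        inv = d ** (-1.0 / (m - 1.0))
        u = inv / inv.sum(axis=0, keepdims=True)     # fuzzy membership update
        w = (u ** m) * k
        v = (w * x[None, :]).sum(axis=1) / w.sum(axis=1)  # prototype update
    return u, v

x = np.concatenate([np.full(50, 1.0), np.full(50, 9.0)])  # two intensity modes
u, v = gkfcm(x)
# the two prototypes settle near the two intensity modes (about 1 and 9)
```

The kernel weighting in the prototype update is what gives GKFCM its robustness to noise relative to classical FCM: distant outliers receive near-zero kernel values and barely move the cluster centers.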
Brain Tumor Detection using Clustering Algorithms in MRI ImagesIRJET Journal
This document presents a novel brain tumor detection system using k-means clustering integrated with fuzzy c-means clustering and artificial neural networks. The system takes advantage of both algorithms for minimal computation time and accuracy. It accurately extracts the tumor region and calculates the tumor area by comparing the results to ground truths of the MRI images. K-means performs initial segmentation, then fuzzy c-means locates the approximate segmented tumor based on membership and cluster selection criteria. Features are extracted and an artificial neural network classifies MRI images as normal or containing a tumor. The system achieves high accuracy, sensitivity and specificity when validated against ground truths.
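The initial k-means segmentation step can be sketched directly on pixel intensities; the toy image and deterministic initialization below are illustrative only (the paper combines k-means with fuzzy c-means and an ANN, which this sketch omits).

```python
import numpy as np

def kmeans_segment(img, k=2, iters=20):
    """Lloyd's k-means on raw pixel intensities; returns a label image (a sketch)."""
    x = img.reshape(-1).astype(float)
    centers = np.linspace(x.min(), x.max(), k)          # deterministic init
    for _ in range(iters):
        # assign each pixel to the nearest intensity center
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()      # recompute cluster means
    return labels.reshape(img.shape), centers

img = np.array([[10, 12, 11, 200],
                [11, 10, 198, 201],
                [12, 199, 200, 202]], dtype=float)
seg, centers = kmeans_segment(img, k=2)
# bright "tumor-like" pixels get one label, dark background the other
```

In the pipeline described above, the resulting label image would then seed the fuzzy c-means refinement and feature extraction stages.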
Brain tumor segmentation using asymmetry based histogram thresholding and k m...eSAT Publishing House
This document presents a method for segmenting brain tumors from MRI images using asymmetry-based histogram thresholding and k-means clustering. The method involves 8 steps: 1) preprocessing the MRI image using sharpening and median filters, 2) computing histograms of the left and right halves of the image, 3) calculating a threshold value using the difference between left and right histograms, 4) applying thresholding and morphological operations to extract the tumor region, 5) applying k-means clustering and using the cluster centroids to refine the segmentation. The method is tested on 30 MRI images and results show the tumor region is accurately segmented. The segmented tumors can then be used for quantification, classification, and computer-assisted diagnosis of brain tumors.
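Steps 2-4 (hemisphere histograms and their difference) can be sketched as below. The synthetic image and the tie-break toward the brighter bin (tumors are typically hyperintense) are my assumptions, not details from the paper.

```python
import numpy as np

def asymmetry_threshold(img, bins=256):
    """Pick the intensity where left/right-half histograms differ most (a sketch)."""
    h, w = img.shape
    left, right = img[:, : w // 2], img[:, w // 2:]
    hl, edges = np.histogram(left, bins=bins, range=(0, 256))
    hr, _ = np.histogram(right, bins=bins, range=(0, 256))
    diff = np.abs(hl - hr)                       # asymmetry between hemispheres
    # break ties toward the brightest bin, since lesions are hyperintense
    t = edges[len(diff) - 1 - np.argmax(diff[::-1])]
    return t, img >= t                           # candidate tumor mask

img = np.full((4, 8), 50, dtype=np.uint8)
img[1:3, 6:8] = 220                              # bright lesion in the right half only
t, mask = asymmetry_threshold(img)
# t = 220; mask marks the 4 lesion pixels
```

In the full method, morphological operations and k-means refinement would then clean up this raw thresholded mask.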
Brain tumor visualization for magnetic resonance images using modified shape...IJECEIAES
3D visualization plays an essential role in medical diagnosis and in setting treatment plans, especially for brain cancer. There have been many attempts at brain tumor reconstruction and visualization using various techniques. However, this problem is still considered unsolved, as more accurate results are needed in this critical field. In this paper, a sequence of 2D slices of brain magnetic resonance images was used to reconstruct a 3D model of the brain tumor. The images were automatically segmented using a wavelet multi-resolution expectation maximization algorithm. Then, the inter-slice gaps were interpolated using the proposed modified shape-based interpolation method. The method involves three main steps: transferring the binary tumor images to distance images using a suitable distance function, interpolating the distance images using cubic spline interpolation, and thresholding the interpolated values to get the reconstructed slices. The final tumor is then visualized as a 3D isosurface. We evaluated the proposed method by removing an original slice from the input images and interpolating it; the results outperform the original shape-based interpolation method by an average of 3%, reaching 99% accuracy for some slice images.
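The shape-based interpolation idea can be sketched for a single missing slice. The paper interpolates the distance images with cubic splines across many slices; this sketch simplifies to a brute-force Euclidean signed distance and a linear average of two neighbors, which is my own simplification.

```python
import numpy as np

def signed_distance(mask):
    """Brute-force signed distance: positive inside the shape, negative outside."""
    ys, xs = np.indices(mask.shape)
    pts = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    inside = mask.ravel().astype(bool)
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    to_out = np.where(~inside[None, :], d, np.inf).min(axis=1)   # dist to outside
    to_in = np.where(inside[None, :], d, np.inf).min(axis=1)     # dist to inside
    return np.where(inside, to_out, -to_in).reshape(mask.shape)

# two binary tumor slices with a shrinking square cross-section
a = np.zeros((9, 9), dtype=np.uint8); a[1:8, 1:8] = 1
b = np.zeros((9, 9), dtype=np.uint8); b[3:6, 3:6] = 1
# interpolate the missing slice halfway between them, then threshold back
mid = (0.5 * signed_distance(a) + 0.5 * signed_distance(b)) > 0
```

The thresholded midpoint slice has a cross-section intermediate between its two neighbors, which is exactly the behavior the evaluation in the abstract (removing and re-interpolating a slice) measures.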
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
AN EFFICIENT M-ARY QIM DATA HIDING ALGORITHM FOR THE APPLICATION TO IMAGE ERR...IJNSA Journal
Methods like edge-directed interpolation and projection onto convex sets (POCS), which are widely used for image error concealment and produce better image quality, are complex in nature and time consuming. Moreover, those methods are not suitable for real-time error concealment where the decoder may not have sufficient computation power or where decoding must be done online. In this paper, we propose a data-hiding scheme for error concealment of digital images. Edge direction information of a block is extracted in the encoder and embedded imperceptibly into the host media using quantization index modulation (QIM), which reduces the workload of the decoder. The system performance in terms of fidelity and computational load is improved using M-ary data modulation based on near-orthogonal QIM. The decoder extracts the embedded features (edge information), which are then used for recovery of the lost data. Experimental results duly support the effectiveness of the proposed scheme.
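Scalar (binary) QIM can be sketched as two interleaved quantizers, one per bit value; the embedder snaps the host sample onto the lattice for its bit, and the extractor picks the lattice whose point is nearest. The step size `delta` and host values below are arbitrary, and real systems such as the one described embed M-ary symbols in transform coefficients.

```python
import numpy as np

def qim_embed(x, bit, delta=8.0):
    """Quantize x onto the lattice associated with `bit` (two interleaved quantizers)."""
    offset = (delta / 2.0) * bit
    return np.round((x - offset) / delta) * delta + offset

def qim_extract(y, delta=8.0):
    """Decode by choosing the quantizer whose lattice point is nearest to y."""
    d0 = np.abs(y - qim_embed(y, 0, delta))
    d1 = np.abs(y - qim_embed(y, 1, delta))
    return (d1 < d0).astype(int)

host = np.array([13.2, 40.7, 77.1, 120.4])
bits = np.array([1, 0, 1, 0])
marked = np.array([qim_embed(v, b) for v, b in zip(host, bits)])
recovered = qim_extract(marked)
# recovered equals bits as long as channel distortion stays below delta/4
```

The distortion/robustness trade-off is governed entirely by `delta`: larger steps survive more noise but perturb the host more, which is why the paper moves to near-orthogonal M-ary QIM to improve fidelity at a given payload.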
Fuzzy k c-means clustering algorithm for medical imageAlexander Decker
This document summarizes and compares several algorithms used for medical image segmentation, including thresholding, classifiers, Markov random field models, artificial neural networks, atlas-guided approaches, deformable models, and clustering analysis methods like k-means and fuzzy c-means. It provides details on the fuzzy c-means and k-means clustering algorithms, including their process and flowcharts. A new fuzzy k-c-means algorithm is proposed that combines fuzzy c-means and k-means clustering to improve segmentation time. The algorithms are tested on MRI brain images and their results are analyzed and compared based on time, iterations, and accuracy.
A deep learning based stereo matching model for autonomous vehicleIAESIJAI
Autonomous vehicles are one of the prominent areas of research in computer vision. In today's AI world, the concept of autonomous vehicles has become popular largely as a way to avoid accidents due to driver negligence. Accurately perceiving the depth of the surrounding region is a challenging task for autonomous vehicles. Sensors like light detection and ranging (LiDAR) can be used for depth estimation, but these sensors are expensive; stereo matching is an alternate solution for estimating depth. The main difficulties in stereo matching are minimizing mismatches in ill-posed regions, such as occluded, textureless, and discontinuous regions. This paper presents an efficient deep stereo matching technique for estimating the disparity map from stereo images in ill-posed regions. Images from the Middlebury stereo dataset are used to assess the efficacy of the proposed model. The experimental outcome depicts that the proposed model generates reliable results in occluded, textureless, and discontinuous regions compared to existing techniques.
Brain Tumor Detection and Classification Using MRI Brain ImagesIRJET Journal
This document presents research on detecting and classifying brain tumors using MRI images. It discusses:
1) Using k-means clustering for pre-processing MRI images to reduce noise and increase detection accuracy. Marker-controlled watershed transformation and grey-level co-occurrence matrix are then used for tumor detection and feature extraction.
2) Two classification methods are employed: support vector machine (SVM) and artificial neural network (ANN). SVM and ANN have been shown to accurately classify tumors in an effective manner.
3) The paper proposes an algorithm to differentiate between benign and malignant tumors using watershed segmentation and extracting grey-level co-occurrence matrix features from MRI images, which are then classified using SVM and ANN.
DIRECTIONAL CLASSIFICATION OF BRAIN TUMOR IMAGES FROM MRI USING CNN-BASED DEE...IRJET Journal
Predicting human emotions in real-world scenarios requires investigating human subjects. A significant number of psychological affects (feelings) must be produced to directly release human emotions. The development of affect theory leads one to believe that one must be aware of one's sentiments and emotions to forecast one's behavior. The proposed line of inquiry focuses on developing a reliable model that maps neurophysiological data onto actual feelings. Any change in emotional affect directly elicits a response in the body's physiological systems. The approach is built on the notion of Gaussian mixture models (GMM). The statistical reaction following data processing, quantitative findings on emotion labels, and coincident responses with training samples all directly impact the outcomes that are accomplished. The suggested method is evaluated against a state-of-the-art technique in terms of statistical parameters such as population mean and standard deviation. The proposed system determines an individual's emotional state after a minimum of six learning iterations of the Gaussian expectation-maximization (GEM) statistical model, with the iterations converging toward zero error. Each of these iterations improves predictions while simultaneously increasing the amount of information extracted.
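As a generic illustration of the Gaussian expectation-maximization loop the abstract refers to, here is EM for a two-component 1-D mixture on synthetic data; the data, initialization, and iteration count are my own choices, not the paper's.

```python
import numpy as np

def gem_1d(x, iters=30):
    """EM for a two-component 1-D Gaussian mixture (minimal sketch)."""
    mu = np.percentile(x, [25, 75]).astype(float)   # deterministic init
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        p = pi[:, None] * np.exp(-((x[None, :] - mu[:, None]) ** 2)
                                 / (2 * var[:, None])) / np.sqrt(2 * np.pi * var[:, None])
        r = p / p.sum(axis=0, keepdims=True)
        # M-step: update mixture weights, means, and variances
        n = r.sum(axis=1)
        pi = n / n.sum()
        mu = (r * x[None, :]).sum(axis=1) / n
        var = (r * (x[None, :] - mu[:, None]) ** 2).sum(axis=1) / n + 1e-9
    return pi, mu, var

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3, 0.5, 200), rng.normal(3, 0.5, 200)])
pi, mu, var = gem_1d(x)
# the two component means converge near -3 and 3
```

In an emotion-recognition setting, each recovered component would correspond to a candidate emotion label for the physiological measurements, with responsibilities giving soft assignments.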
Early diagnosis of cancers is a major requirement for patients and a complicated job for the oncologist; if a cancer is diagnosed early, the patient is far more likely to survive. For a few decades, fuzzy logic has been an emphatic technique in the identification of diseases such as different types of cancers, where recognition must often operate with inexactness, inaccuracy, and vagueness. This paper aims to design a fuzzy expert system (FES) and its implementation for the detection of prostate cancer. Specifically,
prostate-specific antigen (PSA), prostate volume (PV), age, and percentage
free PSA (%FPSA) are used to determine prostate cancer risk (PCR), while
PCR serves as an output parameter. Mamdani fuzzy inference method is used
to calculate a range of PCR. The system provides a scale of risk of prostate
cancer and clears the path for the oncologist to determine whether their
patients need a biopsy. This system is fast as it requires minimum calculation
and hence comparatively less time which reduces mortality and morbidity and
is more reliable than other economic systems and can be frequently used by
doctors.
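The flavor of such a fuzzy risk system can be sketched with triangular membership functions and a weighted-average defuzzification. This is a simplification, not the paper's Mamdani inference: the membership ranges, risk centroids, and the single-input rule base below are hypothetical, whereas the actual FES combines PSA, PV, age, and %FPSA.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def pcr_risk(psa):
    """Weighted-average defuzzification over two hypothetical PSA rules."""
    low = tri(psa, 0.0, 2.0, 6.0)      # rule: PSA low  -> low risk (centroid 0.2)
    high = tri(psa, 4.0, 12.0, 20.0)   # rule: PSA high -> high risk (centroid 0.8)
    den = low + high
    return (low * 0.2 + high * 0.8) / den if den else 0.0

low_risk = pcr_risk(1.0)    # a low PSA reading maps to the low-risk centroid
high_risk = pcr_risk(15.0)  # a high PSA reading maps to the high-risk centroid
```

A full Mamdani system would instead clip or scale output membership functions per rule and take the centroid of their union, but the monotone low-to-high risk behavior is the same.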
The biomedical profession has gained importance due to the rapid and accurate diagnosis of clinical patients using computer-aided diagnosis (CAD) tools.
The diagnosis and treatment of Alzheimer’s disease (AD) using complementary multimodalities can improve the quality of life and mental state of patients.
In this study, we integrated a lightweight custom convolutional neural network
(CNN) model and nature-inspired optimization techniques to enhance the performance, robustness, and stability of progress detection in AD. A multi-modal
fusion database approach was implemented, including positron emission tomography (PET) and magnetic resonance imaging (MRI) datasets, to create a fused
database. We compared the performance of custom and pre-trained deep learning models with and without optimization and found that employing natureinspired algorithms like the particle swarm optimization algorithm (PSO) algorithm significantly improved system performance. The proposed methodology,
which includes a fused multimodality database and optimization strategy, improved performance metrics such as training, validation, test accuracy, precision, and recall. Furthermore, PSO was found to improve the performance of
pre-trained models by 3-5% and custom models by up to 22%. Combining different medical imaging modalities improved the overall model performance by
2-5%. In conclusion, a customized lightweight CNN model and nature-inspired
optimization techniques can significantly enhance progress detection, leading to
better biomedical research and patient care.
Class imbalance is a pervasive issue in the field of disease classification from
medical images. It is necessary to balance out the class distribution while training a model. However, in the case of rare medical diseases, images from affected
patients are much harder to come by compared to images from non-affected
patients, resulting in unwanted class imbalance. Various processes of tackling
class imbalance issues have been explored so far, each having its fair share of
drawbacks. In this research, we propose an outlier detection based image classification technique which can handle even the most extreme case of class imbalance. We have utilized a dataset of malaria parasitized and uninfected cells. An
autoencoder model titled AnoMalNet is trained with only the uninfected cell images at the beginning and then used to classify both the affected and non-affected
cell images by thresholding a loss value. We have achieved an accuracy, precision, recall, and F1 score of 98.49%, 97.07%, 100%, and 98.52% respectively,
performing better than large deep learning models and other published works.
As our proposed approach can provide competitive results without needing the
disease-positive samples during training, it should prove to be useful in binary
disease classification on imbalanced datasets.
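The thresholding step that turns the one-class autoencoder into a binary classifier can be sketched as below. The reconstruction errors here are hypothetical stand-ins for AnoMalNet's losses, and the 3-sigma threshold rule is my assumption rather than the paper's chosen cutoff.

```python
import numpy as np

def classify_by_loss(errors, threshold):
    """Outlier-style classification: high reconstruction error => disease-positive."""
    return (errors > threshold).astype(int)

# hypothetical reconstruction errors from an autoencoder trained on healthy cells only
healthy_err = np.array([0.01, 0.02, 0.015, 0.03])
infected_err = np.array([0.4, 0.55, 0.31])

# derive the threshold from training (healthy) losses alone, e.g. mean + 3*std,
# so no disease-positive samples are ever needed during training
t = healthy_err.mean() + 3 * healthy_err.std()
preds = classify_by_loss(np.concatenate([healthy_err, infected_err]), t)
# healthy samples map to 0, infected samples to 1
```

Because the threshold is fit on uninfected data only, the scheme remains usable even under the extreme class imbalance the paragraph describes, where positive samples are scarce or absent at training time.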
Recently, plant identification has become an active research trend due to the encouraging
results achieved in plant species detection and plant classification
among numerous plant varieties using deep learning methods. Therefore,
plant classification analysis is performed in this work to address the problem
of accurate plant species detection in the presence of multiple leaves together,
flowers, and noise. Thus, a convolutional neural network based deep feature
learning and classification (CNN-DFLC) model is designed to analyze
patterns of plant leaves and perform classification using generated fine-grained feature weights. The proposed CNN-DFLC model precisely estimates
which plant species a given image belongs to. Several layers and
blocks are utilized to design the proposed CNN-DFLC model. Fine-grained
feature weights are obtained using convolutional and pooling layers. The
obtained feature maps in training are utilized to predict labels and model
performance is tested on the Vietnam plant image (VPN-200) dataset. This
dataset consists of a total number of 20,000 images and testing results are
achieved in terms of classification accuracy, precision, recall, and other
performance metrics. The mean classification accuracy obtained using the
proposed CNN-DFLC model is 96.42% considering all 200 classes from the
VPN-200 dataset.
Big data as a service (BDaaS) platform is widely used by various
organizations for handling and processing the high volume of data generated
from different internet of things (IoT) devices. Data generated from these IoT
devices are kept in the form of big data with the help of cloud computing
technology. Researchers are putting efforts into providing a more secure and
protected access environment for the data available on the cloud. In order to
create a safe, distributed, and decentralised environment in the cloud,
blockchain technology has emerged as a useful tool. In this research paper, we
have proposed a system that uses blockchain technology as a tool to regulate
data access that is provided by BDaaS platforms. We are securing the access
policy of data by using a modified form of ciphertext policy-attribute based
encryption (CP-ABE) technique with the help of blockchain technology. For
secure data access in BDaaS, algorithms have been created using a mix of CP-ABE and blockchain technology. The proposed smart contract algorithms are
implemented using Eclipse 7.0 IDE and the cloud environment has been
simulated on the CloudSim tool. Results for key generation time, encryption time,
and decryption time have been calculated and compared with an access control
mechanism without blockchain technology.
Internet of things (IoT) has become one of the eminent phenomena in human
life along with its collaboration with wireless sensor networks (WSNs), due
to enormous growth in the domain; there has been a demand to address the
various issues regarding it such as energy consumption, redundancy, and
overhead. Data aggregation (DA) is considered the basic mechanism to
improve energy efficiency and reduce communication overhead; however,
security plays an important role where node security is essential due to the
volatile nature of WSN. Thus, we design and develop proximate node aware
secure data aggregation (PNA-SDA). In the PNA-SDA mechanism, additional
data is used to secure the original data, and further information is shared with
the proximate node; moreover, further security is achieved by updating the
state each time. Moreover, the node that does not have updated information is
considered as the compromised node and discarded. PNA-SDA is evaluated
considering the different parameters like average energy consumption, and
average deceased node; also, comparative analysis is carried out with the
existing model in terms of throughput and correct packet identification.
Drones offer a promising direction for security applications, since
they are capable of conducting autonomous investigations. A recent
advancement in unmanned aerial vehicle (UAV) communication is the internet
of drones combined with 5G networks. Because of the rapid adoption of
highly advanced computing frameworks alongside 5G infrastructure, user
information is continuously refreshed and pooled. Safety and
confidentiality are therefore vital for clients, and an efficient
authentication methodology using a robust security key is required.
Conventional procedures suffer from several limitations when handling
attack patterns in data transmission over internet of drones (IoD)
environments. A unique hyperelliptic curve (HEC) cryptography-based
authentication system is proposed to provide protected data services among
drones. The proposed method has been compared with existing methods
in terms of packet loss rate, computational cost, and delay, and thereby
provides better insight into efficient and secure communication. Finally, the
simulation results show that our strategy is efficient in both computation and
communication.
Monitoring behavior, numerous actions, or any such information is considered
as surveillance and is done for information gathering, influencing, managing,
or directing purposes. Citizens employ surveillance to safeguard their
communities. Governments do this for the purposes of intelligence collection,
including espionage, crime prevention, the defense of a method, a person, a
group, or an item; or the investigation of criminal activity. Securing an area
with an internet of things (IoT) rover instead of humans provides better secrecy
and efficiency, and adds an extra layer of safety. In this
paper, there is a discussion about an IoT rover for remote surveillance based
around a Raspberry Pi microprocessor which will be able to monitor a
closed/open space. This rover will allow safer survey operations and would
help to reduce the risks involved with it.
In a world where climate change looms large the spotlight often shines on
greenhouse gases, but the shadow of man-made aerosols should not be
underestimated. These tiny particles play a pivotal role in disrupting Earth's
radiative equilibrium, yet many mysteries surround their influence on various
physical aspects of our planet. The root of these mysteries lies in the limited
data we have on aerosol sources, formation processes, conversion dynamics,
and collection methods. Aerosols, composed of particulate matter (PM),
sulfates, and nitrates, hold significant sway across the hemisphere. Accurate
measurement demands the refinement of in-situ, satellite, and ground-based
techniques. As aerosols interact intricately with the environment, their full
impact remains an enigma. Enter a groundbreaking study in Morocco that
dared to compare an internet of things (IoT) system with satellite-based
atmospheric models, with a focus on fine particles below 10 and 2.5
micrometers in diameter. The initial results, particularly in regions abundant
with extraction pits, shed light on the IoT system's potential to decode
aerosols' role in the grand narrative of climate change. These findings inspire
hope as we confront the formidable global challenge of climate change.
The use of technology has a significant impact in reducing the consequences of
accidents. Sensors, small components that detect interactions experienced by
various objects, play a crucial role in this regard. This study focuses on
how the MPU6050 sensor module can be used to detect the movement of
people who are falling, defined as the inability of the lower body, including
the hips and feet, to support the body effectively. An airbag system is
proposed to reduce the impact of a fall. The data processing method in this
study involves the use of a threshold value to identify falling motion. The
results of the study have identified a threshold value for falling motion,
including an acceleration relative (AR) value of less than or equal to 0.38 g,
an angle slope of more than or equal to 40 degrees, and an angular velocity
of more than or equal to 30 °/s. The airbag system is designed to inflate
faster than the time of impact, with a gas flow rate of 0.04876 m³/s and an
inflating time of 0.05 s. The overall system has a specificity value of 100%,
a sensitivity of 85%, and an accuracy of 94%.
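The detection rule above can be sketched directly from the reported thresholds. The function and variable names are illustrative (sensor acquisition and airbag actuation are omitted), but the numeric thresholds are the ones the study reports:

```python
# Threshold-based fall detection, per the reported MPU6050 study values.
AR_MAX = 0.38       # relative acceleration threshold, in g
ANGLE_MIN = 40.0    # slope angle threshold, in degrees
OMEGA_MIN = 30.0    # angular velocity threshold, in degrees/second

def is_fall(accel_relative_g, angle_deg, angular_velocity_dps):
    """A fall is flagged only when all three readings cross their
    thresholds simultaneously: near-free-fall acceleration, a large
    tilt angle, and a fast rotation."""
    return (accel_relative_g <= AR_MAX
            and angle_deg >= ANGLE_MIN
            and angular_velocity_dps >= OMEGA_MIN)

print(is_fall(0.30, 55.0, 45.0))  # True: free-fall-like reading
print(is_fall(0.95, 10.0, 5.0))   # False: normal upright movement
```

Requiring all three conditions at once is what drives the reported 100% specificity: any single noisy reading cannot trigger the airbag on its own.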
The fundamental principle of the paper is that the soil moisture sensor obtains
the moisture content level of the soil sample. The water pump is automatically
activated if the moisture content is insufficient, which causes water to flow
into the soil. The water pump is immediately turned off when the moisture
content is high enough. Smart home, smart city, smart transportation, and
smart farming are just a few of the new intelligent ideas that internet of things
(IoT) includes. The goal of this method is to increase productivity and
decrease manual labour among farmers. In this paper, we present a system for
monitoring and regulating water flow that employs a soil moisture sensor to
keep track of soil moisture content as well as the land’s water level to keep
track of and regulate the amount of water supplied to the plant. The device
also includes an automated LED lighting system.
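The watering logic described above can be sketched as a simple control loop. The moisture setpoints and the hysteresis band below are illustrative assumptions, not values from the paper, and the sensor/pump I/O is omitted:

```python
# Hedged sketch of the soil-moisture pump control described above.
# Setpoints are hypothetical; a real deployment would calibrate them.
MOISTURE_LOW = 30   # % — below this, soil is too dry (assumed setpoint)
MOISTURE_HIGH = 60  # % — above this, soil is wet enough (assumed setpoint)

def pump_command(moisture_pct, pump_on):
    """Hysteresis control: switch the pump on when the soil is dry and
    off once the moisture content is high enough. Between the two
    thresholds the pump keeps its current state, avoiding rapid cycling."""
    if moisture_pct < MOISTURE_LOW:
        return True
    if moisture_pct > MOISTURE_HIGH:
        return False
    return pump_on

print(pump_command(20, pump_on=False))  # True  -> start watering
print(pump_command(70, pump_on=True))   # False -> stop watering
```

The hysteresis band is a small addition over a single on/off threshold: it prevents the pump from chattering when readings hover near the setpoint.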
In order to provide sensing services to low-powered IoT devices, wireless sensor networks (WSNs) organize specialized transducers into networks. Energy usage is one of the most important design concerns in WSN because it is very hard to replace or recharge the batteries in sensor nodes. For an energy-constrained network, the clustering technique is crucial in preserving battery life. By strategically selecting a cluster head (CH), a network's load can be balanced, resulting in decreased energy usage and extended system life. Although clustering has been predominantly used in the literature, the concept of chain-based clustering has not yet been explored. As a result, in this paper, we employ a chain-based clustering architecture for data dissemination in the network. Furthermore, for CH selection, we employ the coati optimisation algorithm, which was recently proposed and has demonstrated significant improvement over other optimization algorithms. In this method, the parameters considered for selecting the CH are energy, node density, distance, and the network’s average energy. The simulation results show tremendous improvement over the competitive cluster-based routing algorithms in the context of network lifetime, stability period (first node dead), transmission rate, and the network's power reserves.
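The cluster-head scoring idea might be sketched as below. The fitness formula and its weights are illustrative assumptions built from the four parameters the abstract names (energy, node density, distance, average network energy); the actual coati optimisation loop is not reproduced here:

```python
# Illustrative CH-selection fitness over the four parameters named above.
# Weights are hypothetical; the coati optimiser would search over
# candidate CH assignments rather than rank a fixed score.

def ch_fitness(node_energy, node_density, dist_to_sink, avg_energy,
               w=(0.4, 0.2, 0.2, 0.2)):
    """Higher is better: favour energetic, well-connected nodes that are
    close to the sink and above the network's average residual energy."""
    w1, w2, w3, w4 = w
    return (w1 * node_energy
            + w2 * node_density
            - w3 * dist_to_sink
            + w4 * (node_energy - avg_energy))

nodes = {
    "n1": ch_fitness(0.9, 0.6, 0.3, 0.5),  # high energy, moderate distance
    "n2": ch_fitness(0.4, 0.8, 0.1, 0.5),  # dense neighbourhood, low energy
}
cluster_head = max(nodes, key=nodes.get)
print(cluster_head)  # n1: residual energy dominates under these weights
```

Rotating the CH role by re-scoring each round is what balances the network load and extends the stability period (time until the first node dies).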
The construction industry is always surrounded by uncertainties and
risks. It is an industry with a complex, tedious layout and techniques,
characterized by unpredictable circumstances. It comprises a variety of
human talents and the coordination of the different areas and activities
associated with it. In this
competitive era of the construction industry, delays and cost overruns of the
project are often common in every project and the causes of that are also
common. One of the problems which we are trying to cater to is the improper
handling of materials at the construction site. In this paper, we propose
developing a system that is capable of tracking construction material on site
that would benefit the contractor and client for better control over inventory
on-site and to minimize loss of material that occurs due to theft and misplacing
of materials.
Today, health monitoring relies heavily on technological advancements. This
study proposes a low-power wide-area network (LPWAN) based, multi-node
health monitoring system to monitor vital physiological data. The suggested
system consists of two nodes, an indoor node, and an outdoor node, and the
nodes communicate via long range (LoRa) transceivers. Outdoor nodes use an
MPU6050 module, heart rate, oxygen pulse, temperature, and skin resistance
sensors and transmit sensed values to the indoor node. We transferred the data
received by the master node to the cloud using the Adafruit cloud service. The
system can operate with a coverage of 4.5 km, where the optimal distance
between outdoor sensor nodes and the indoor master node is 4 km. To further
predict fall detection, various machine learning classification techniques have
been applied. Upon comparing various classifier techniques, the decision tree
method achieved an accuracy of 0.99864 with a training and testing ratio of
70:30. By developing accurate prediction models, we can identify high-risk
individuals and implement preventative measures to reduce the likelihood of
a fall occurring. Remote monitoring of the health and physical status of elderly
people has proven to be the most beneficial application of this technology.
The effectiveness of adaptive filters is mainly dependent on the design
techniques and the adaptation algorithm. The most common adaptation
technique used is least mean square (LMS), due to its computational simplicity.
The application depends on the adaptive filter configuration used; such
filters are well known for system identification and real-time applications. In this work, a
modified delayed μ-law proportionate normalized least mean square
(DMPNLMS) algorithm has been proposed. It is an improved version of the
µ-law proportionate normalized least mean square (MPNLMS) algorithm.
The algorithm is realized using Ladner-Fischer type of parallel prefix
logarithmic adder to reduce the silicon area. The simulation and
implementation of very large-scale integration (VLSI) architecture are done
using MATLAB, Vivado suite and complementary metal–oxide–
semiconductor (CMOS) 90 nm technology node using Cadence RTL and
Genus Compiler respectively. The DMPNLMS method exhibits a reduction
in mean square error, a higher rate of convergence, and more stability. The
synthesis results demonstrate that it is area and delay effective, making it
practical for applications where a faster operating speed is required.
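For context, here is a minimal plain-LMS system-identification sketch. The proposed DMPNLMS adds μ-law proportionate gains, normalization, and delayed updates, none of which are reproduced here; this only shows the baseline adaptation rule the paper builds on:

```python
# Baseline LMS adaptation: identify an unknown 2-tap FIR system by
# driving the weight vector w with the instantaneous error e = d - y.

def lms_identify(x, d, taps=2, mu=0.1, epochs=200):
    """Adapt FIR weights w so that w . x_window tracks the desired d."""
    w = [0.0] * taps
    for _ in range(epochs):
        for n in range(taps - 1, len(x)):
            window = x[n - taps + 1:n + 1][::-1]   # newest sample first
            y = sum(wi * xi for wi, xi in zip(w, window))
            e = d[n] - y                           # instantaneous error
            w = [wi + mu * e * xi for wi, xi in zip(w, window)]
    return w

# Unknown system: h = [0.5, -0.3]; its noiseless output is the target.
x = [1.0, -0.5, 0.3, 0.8, -1.0, 0.2, 0.6, -0.4]
d = [0.5 * x[n] + (-0.3 * x[n - 1] if n > 0 else 0.0) for n in range(len(x))]
w = lms_identify(x, d)
print([round(wi, 2) for wi in w])  # converges toward [0.5, -0.3]
```

Proportionate variants such as MPNLMS assign each tap its own step size in proportion to its magnitude, which speeds convergence for sparse impulse responses; the delayed form pipelines the update to suit VLSI implementation.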
The increasing demand for faster, more robust, and more efficient devices, and for technology enabling mass production, confronts industrial circuit design research with challenges like size, efficiency, power, and scalability. This paper presents the design and analysis of a low-power, high-speed full adder using negative capacitance field-effect transistors. A comprehensive study is performed with adiabatic logic and reversible logic. The performance of the full adder is studied with the metal-oxide-semiconductor field-effect transistor (MOSFET) and the negative capacitance field-effect transistor (NCFET). The NCFET-based full adder offers low power and high speed compared with the conventional MOSFET. The complete design and analysis are performed using Cadence Virtuoso. The adiabatic logic offers a low delay of 0.023 ns, and the reversible logic offers a low power of 7.19 mW.
The global agriculture system faces significant challenges in meeting the
growing demand for food production, which is projected to rise by about
70% by 2050. Hydroponic farming is an
increasingly popular technique in this field, offering a promising solution to
these challenges. This paper presents an improvement to the current
traditional hydroponic method: a system that can be used to
monitor and control the key element in order to help plants grow
smoothly. The proposed system is efficient and user-friendly and can
be used by anyone. It combines a traditional hydroponic system,
an automatic control system and a smartphone. The primary objective is to
develop a smart system capable of monitoring and controlling potential
hydrogen (pH) levels, a key factor that affects hydroponic plant growth.
Ultimately, this paper offers an alternative approach to address the challenges
of the existing agricultural system and promote the production of clean,
disease-free, and healthy food for a better future.
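The pH regulation objective might be sketched as a simple band controller. The band limits and the dosing actions below are illustrative assumptions (typical hydroponic ranges), not values from the paper:

```python
# Hedged sketch of hydroponic pH regulation: dose pH-up or pH-down
# solution when readings leave the target band. Values are assumed.
PH_LOW, PH_HIGH = 5.5, 6.5   # assumed hydroponic target band

def ph_action(ph_reading):
    """Return which dosing pump, if any, should run for this reading."""
    if ph_reading < PH_LOW:
        return "dose pH-up"
    if ph_reading > PH_HIGH:
        return "dose pH-down"
    return "no action"

print(ph_action(5.0))  # dose pH-up
print(ph_action(6.0))  # no action
print(ph_action(7.2))  # dose pH-down
```

In the proposed system this loop would run on the controller, with the smartphone app reporting readings and allowing the band to be adjusted per crop.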
Custom administering attention module for segmentation of magnetic resonance imaging of the brain
International Journal of Reconfigurable and Embedded Systems (IJRES)
Vol. 12, No. 3, November 2023, pp. 376~383
ISSN: 2089-4864, DOI: 10.11591/ijres.v12.i3.pp376-383
Journal homepage: http://ijres.iaescore.com
Custom administering attention module for segmentation of
magnetic resonance imaging of the brain
Nagaveni B. Sangolgi1, Sasikala2
1Department of Electronics and Communication, Faculty of Engineering and Technology, Sharnbasva University, Kalaburagi, India
2Godutai Engineering College, Osmania University, Kalaburagi, India
Article Info
Article history: Received Jul 27, 2022; Revised Feb 24, 2023; Accepted Mar 24, 2023
ABSTRACT
Taking into account how brain tumors and gliomas are notorious forms of
cancer, the medical field has found several methods to diagnose these
diseases, with many algorithms that can segment out the cancer cells in the
magnetic resonance imaging (MRI) scans of the brain. This paper has
proposed a similar segmenting algorithm called a custom administering
attention module. This solution uses a custom U-Net model along with a
custom administering attention module that uses an attention mechanism to
classify and segment the glioma cells using long-range dependency of the
feature maps. The customizations lead to a reduction in code complexity and
memory cost. The final model has been tested on the BraTS 2019 dataset and
compared with other state-of-the-art methods to show how well the proposed
model performs on enhancing, non-enhancing, and peritumoral gliomas.
Keywords:
Custom administering attention module
Feature maps
Long-range dependency
Magnetic resonance imaging
U-Net model
This is an open access article under the CC BY-SA license.
Corresponding Author:
Nagaveni B. Sangolgi
Department of Electronics and Communication, Faculty of Engineering and Technology
Sharnbasva University
Kalaburagi, India
Email: nagvenibs@rediffmail.com
1. INTRODUCTION
Gliomas are a type of tumor that arises as an outgrowth of glial cells in the spinal cord and the brain. In the
beginning stages, the glial cells from which the glioma originates surround the nerves and assist their functioning.
This is why this type of cancer is so hard to diagnose early. A physical or medical exam is
required to check for loss of coordination, impaired reflexes and speech, and other behavioral
patterns, followed by tests such as magnetic resonance imaging (MRI), computed tomography (CT), or biopsy. The
problems faced by state-of-the-art algorithms for MRI image segmentation are background noise,
overly pixelated lesions, and gliomas so fused into the surrounding brain tissue that segmentation is not
possible. Many algorithms have found solutions to these problems; most of them employ convolutional
neural networks and variations of the same. One line of research is the U-Net encoder-decoder dense predictor,
a fully convolutional neural network that employs an end-to-end semantic segmentation mechanism [1]-[4].
Other similar methods [5]-[12] use the 3-D U-Net encoder-decoder architecture, with [13]
using skip connections. Another problem arises from these methods: small-sized kernels
cannot capture the long-distance connections between voxels, which leads to under- or over-
segmentation. One way to solve this is to stack up convolutional layers, but computational and memory
costs then increase exponentially. Yu and Koltun [14] solved this by implementing dilated
convolution, which expands the receptive field at the local feature scale. The non-local neural network
implements a double matrix multiplication that provides the pixel relations in feature and other maps [15].
From that, the weight coefficients are found that help in obtaining global information. A 3-D U-Net
was combined with a self-attention method that provides global information through an attention-gating
module embedded in the skip connections [16]. Even with so many methods, the matrix
multiplication framework retains two problems: instability of the attention map and increased code
complexity. The code complexity reaches O(n²) for 2-D images, with huge GPU memory consumption; for 3-D
images the stakes are even higher, as a context prior layer references the attention mechanism.
Now the proposed solution of this paper aims to solve the problems posed. Hence the model, named the
custom administering attention module, uses a convolution that forms an attention map from the MRI
scans as input, followed by serial convolution along with truth-accuracy-guided administering. Figure 1
shows a visual representation of the proposed model. Then, an endo-class updating method is introduced that
reworks the feature map and self-categorizes it without undergoing a second matrix multiplication,
which helps in reducing code complexity and computational costs.
Figure 1. Visual representation of the proposed model
In Figure 1, the purple arrows depict matrix convolution, with the leftmost block being
the dropout and the red block being the softmax. This paper is divided into five sections. Section 1 is the
introduction; section 2 covers the related work and introduces the proposed model. Section 3 is
the mathematical analysis of the model, while section 4 presents the results and comparisons with other state-
of-the-art methods. Section 5 gives the conclusion and future scope, which is then followed by the reference
section.
2. RELATED WORK
The MRI scans obtained in hospitals are filled with noise and diffuse pixels that only trained professionals
can demarcate. For algorithms to do the same, methods such as clustering [17] and thresholding [15]
have been applied. Ronneberger et al. [3] handle 2-D and 3-D images using U-Net-based skip connections; the features are
extracted and classified per pixel. The researchers of [18]-[20] recover these features by modifying the
basic unit, and [21], [22] extract richer features.
One survey examines the large-scale training samples that carry high medical labeling
costs and little supervision, which is an issue in segmenting and classifying images [23]. Another survey
covers the different types of gliomas and the machine-learning models that have effectively
segmented scans of these lesions [24]. A further work on peritumoral tumors uses an algorithm
that segments those tumors within a specified radius and has been tested on 285 mpMRI scans [25]. Another
paper surveys different deep-learning methods for brain lesion scan classification and
compares different state-of-the-art methods [26]. One more work uses a multimodal disentangled variational
autoencoder (MMD-VAE) that extracts features from preoperative multimodal MRI scans [27]. It quantifies
the radiomics, and the latent representations of the variational autoencoder are disentangled into distinctive
and common representations to obtain the shared and complementary information among modalities. To increase
effectiveness, it uses cross-modality reconstruction and a common-distinctive loss. Shapley additive
explanations (SHAP) are then employed to quantitatively interpret and analyze the important features in order
to grade them. The method was tested on two public databases and provided good accuracy results. CCNet [28] is
effective at reducing complexity by decreasing the two dimensions of the attention map, including for 3-D
images. The local feature map M ∈ R^(X×Y×Z) is chosen such that X, Y, and Z denote the
spatial height, spatial width, and channel number respectively. Then, the attention model applies the
1×1 convolutions Yp, Yq, and Yo and a reshape operation R that provides the matrices
{P, Q, O} ∈ R^(Z′×X×Y), where Z′ is the channel number after the convolution. The three matrices
can be written as (1):
P = R(Yp(M)), Q = R(Yq(M)), O = R(Yo(M)) (1)
then, the SoftMax normalization function f is used and matrix multiplication to obtain the attention map A ∈
RN×N
,
A = f(P × HT
) (2)
then, using reformation and multiplication function, a similarity matrix B will be constructed and the result
comes out to be.
FR = R(B × A) + X (3)
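The attention pipeline of (1)-(3) can be sketched in NumPy. This is a minimal illustration, assuming the 1 × 1 convolutions reduce to per-pixel channel projections (the weight matrices Wp, Wq, and Wo are illustrative stand-ins, not values from the paper) and that the residual addition of the input applies when Z′ = Z:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable SoftMax normalization f.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_block(M, Wp, Wq, Wo):
    """Sketch of (1)-(3): M is X x Y x Z; Wp/Wq/Wo emulate 1x1 convolutions
    (Z -> Z' channel projections); R reshapes to Z' x N with N = X*Y."""
    X, Y, Z = M.shape
    N = X * Y
    flat = M.reshape(N, Z)              # N x Z pixel vectors
    P = (flat @ Wp).T                   # Z' x N, i.e. R(Yp(M))
    Q = (flat @ Wq).T                   # Z' x N, i.e. R(Yq(M))
    O = (flat @ Wo).T                   # Z' x N, i.e. R(Yo(M))
    A = softmax(P.T @ Q, axis=-1)       # N x N attention map, eq. (2)
    B = O                               # similarity matrix built from O
    out = (B @ A).T.reshape(X, Y, -1)   # R(B x A), eq. (3)
    return out + M if out.shape == M.shape else out
```

Each row of A sums to 1, so every output pixel is a convex combination of all N input positions, which is what makes the N × N map expensive for large inputs.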
There are some drawbacks to this formulation: if the input is larger than expected, the complexity
skyrockets and the attention map's memory cost is heavy. Reducing the size of the attention map is therefore
very important, as it increases effectiveness and reduces code complexity. Jiang et al. [29] is useful for solving
these issues, as it applies a context prior layer and uses a convolution to remake the map Q so that it has the
same size as A:

Q = R(conv(M)) (4)
The ground truth GT is now encoded with one-hot encoding to obtain D ∈ R^(X×Y×Cl), with Cl denoting the
number of classes. The affinity map S is then obtained by (5):

D = onehot(GT), S = D × D^T (5)

where D is flattened to R^(N×Cl) with N = X × Y, so that S ∈ R^(N×N).
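As a sketch, the one-hot encoding and affinity construction of (5) can be written as follows, assuming the spatial dimensions are flattened to N = X × Y before the multiplication:

```python
import numpy as np

def affinity_map(GT, num_classes):
    """Sketch of (5): one-hot encode the ground truth GT (an X x Y array of
    class ids), flatten to D in R^{N x Cl}, and form S = D x D^T.
    S[i, j] is 1 when pixels i and j share a class, else 0."""
    X, Y = GT.shape
    D = np.eye(num_classes)[GT.reshape(-1)]   # N x Cl one-hot matrix
    S = D @ D.T                               # N x N affinity map
    return D.reshape(X, Y, num_classes), S
```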
The context prior (CP) layer is used for supervising the affinity map, which increases accuracy and reduces
instability. To obtain the leftover information, the size is reduced using the CP map while similar pixels are
grouped together, but this increases computational complexity and cost. The CP layer therefore turns the
feature map M into Q using a convolution layer; this changes the physical layers, and the problems that arise
from this conversion make the first dimension of the CP map the channel dimension, which is not the same as
the spatial relationship between two points. The remaining problem is that the variables are shared before the
update has even taken place. Several attention encoder-decoder maps have been proposed, such as an attention
U-Net that employs an attention gate module to look out for specific shapes and sizes, as well as self-attention
and multi-scale attention maps [29]–[32]. With the help of attention weights, the supervision information is
used to reduce the code complexity of the matrix multiplication. To maintain the consistency of a section, an
endo-class distance is needed, while exo-class optimization helps amplify the differences between the classes.
Ming et al. [33] present a triplet loss based on endo- and exo-classes. The endo-class consistency between
pixels, as well as the reworked exo-class distance, is managed using semantic segmentation [34], which has
been of great help in resolving this inconsistency and obtaining boundary knowledge. The remaining problems
are solved using global semantic information to obtain a prototype vector for each class, after which the
re-weighting and auto-correlation of the attention mechanism assist in optimization.
Custom administering attention module for segmentation of magnetic … (Nagaveni B. Sangolgi)
3. MATHEMATICAL ANALYSIS
3.1. Overview
The proposed model uses the category-guided attention U-Net as a base reference to formulate the
solution. The input is sent through an attention convolution path to generate an attention map, and this map
helps update the endo-class process and optimize the exo-class distance. Three losses occur in the entire
process, namely the main loss (ML), attention map loss (AL), and inter-class loss (IL); the main loss is
corrected using a SoftMax dice loss, while mean squared error (MSE) loss is adopted for attention map
supervision. Reconstruction, reformation, and an unconventional attention convolution path are then involved
to avoid any problems that may result from the aforementioned optimizations. There are two stages of loss
optimization: the first 20 passes add AL and ML, while the next 20 passes compute the total loss sum. A weight
is added to keep the value of IL on par with the others, while the weights added to the other losses are constant
and equal. The feature map is updated based on categories, which further helps simplify the code.
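The two-stage loss schedule described above can be sketched as follows; the IL weight w_il is an illustrative value, not one taken from the paper:

```python
def total_loss(ml, al, il, epoch, w_il=0.1):
    """Sketch of the two-stage optimization: the first 20 passes use
    ML + AL only; later passes compute the total loss sum, with the
    inter-class loss weighted to keep it on par with the other terms."""
    if epoch < 20:
        return ml + al            # stage 1: main loss + attention map loss
    return ml + al + w_il * il    # stage 2: total sum with weighted IL
```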
3.2. Updating the endo-class
The endo-class updating method is a feature of the proposed model that works on the feature map
M ∈ R^(X×Y×Z), which contains X × Y pixel vectors of size 1 × Z, together with the veil Vi ∈ R^(X×Y) that
references each part of the attention map when it is dismantled. Qi is the prototype vector obtained by veiled
average pooling, which is shown as (6):

Qi = (1 / ∑_{i=1}^{X×Y} mi) ∑_{i=1}^{X×Y} vi × wi (6)

here, wi is the weight in veil i and vi is the feature map's vector. Then the prototype vector is mapped back to
the size of the lower feature map according to the veil as (7):

mi = map(Qi, Vi) (7)

here, mi is the mapped feature map vector. Finally, all section maps are added to aggregate pixels of the same
type.
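A minimal NumPy sketch of the veiled average pooling of (6) and the mapping-back of (7), assuming a binary veil whose entries serve as the weights wi and whose sum stands in for the denominator ∑ mi:

```python
import numpy as np

def veiled_average_pool(M, V):
    """Sketch of (6)-(7): M is the X x Y x Z feature map, V is a binary veil
    (mask) of shape X x Y for one category. Q is the prototype vector from
    veiled average pooling; it is then mapped back over the veil to rebuild
    the section map for that category."""
    w = V.astype(float)
    Q = (M * w[..., None]).sum(axis=(0, 1)) / max(w.sum(), 1.0)  # eq. (6)
    m = w[..., None] * Q                                          # eq. (7)
    return Q, m
```

Summing the per-category maps m then aggregates pixels of the same type, as the text describes.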
3.3. Updating the exo-class
To categorize and segregate pixels of the same type, this method has been introduced to optimize the
space between the class prototypes. The Euclidean distance between the prototype vectors is calculated, and
all of the distances are summed up to represent the total exo-class feature map distance, which is shown as (8):

L = ∑_{0<i,j<C} E(Qi, Qj) (8)

here, E is the Euclidean distance between two vectors. To increase this value, an exo-class loss is also
introduced as (9):

lexo = log(1 / (L + 1)) (9)

the loss lexo can be further reduced in training for distance optimization, since decreasing lexo increases the
total distance L.
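The exo-class loss of (8) and (9) can be sketched as:

```python
import numpy as np

def exo_class_loss(prototypes):
    """Sketch of (8)-(9): sum the pairwise Euclidean distances E(Qi, Qj)
    between the class prototype vectors to get L, then form
    lexo = log(1 / (L + 1)). Minimizing lexo during training pushes the
    class prototypes further apart."""
    L = 0.0
    C = len(prototypes)
    for i in range(C):
        for j in range(i + 1, C):
            L += np.linalg.norm(prototypes[i] - prototypes[j])  # eq. (8)
    return np.log(1.0 / (L + 1.0))                              # eq. (9)
```

Note that lexo = -log(L + 1) is monotonically decreasing in L, which is why reducing the loss amplifies the inter-class distances.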
3.4. Attention map update
This path automatically produces the attention map from the given input by using serial convolutions,
as shown in Figure 2. The size of the map is reduced from N × N to X × Y × C, where C is the number of
classes. The section-guided map (SGM) is formulated by one-hot encoding the down-sampled ground truth and
is the same size as A. In each 1 × C pixel vector, the entry for the matching class is valued high (at 1) while
the rest are valued low (at 0), and each stream of the SGM corresponds to a category of pixel distributions. The
section-guided map then trains the attention map. The attention map representation is given by (10):

A = conv(J) (10)
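The section-guided map construction described above can be sketched as follows, where gt_small is the down-sampled ground truth of class indices and the function name is illustrative:

```python
import numpy as np

def section_guided_map(gt_small, num_classes):
    """Sketch of the SGM: one-hot encode the down-sampled ground truth so
    that each 1 x C pixel vector is 1 for its class and 0 elsewhere. Each
    channel ("stream") of the SGM then holds the pixel distribution of one
    category and can be used to supervise the attention map."""
    return np.eye(num_classes)[gt_small]   # X x Y x C one-hot map
```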
4. RESULTS AND EVALUATION
For testing purposes, the model has been evaluated on the BraTS 2019 [3] dataset. The results were
then compared with the results of [35], the U-Net++ model; [36], the U-Net architecture model; and [3],
[37]–[39], the models inspired by U-Net. The BraTS 2019 [3] dataset evaluates state-of-the-art methods for
image segmentation of MRI scans of the brain. It is a multi-institutional collection of pre-operative scans and
focuses on the segmentation of intrinsically heterogeneous gliomas. Table 1 compares the sensitivity,
specificity, dice score, and 95% Hausdorff distance for the enhancing, non-enhancing, and peritumoral gliomas
(abbreviated ETA, NTA, and PTA respectively) across the various state-of-the-art methods, the base reference
paper, and the proposed model of this paper; the best value attained in each category for each parameter is
marked in bold. Figure 2 shows a sensitivity comparison of the various state-of-the-art methods, the base paper,
and our proposed model, and Figure 3 shows the corresponding specificity comparison. The proposed model
of this paper has performed the best in almost all the categories and parameters.
Table 1. This table compares the values of sensitivity and specificity of the various state-of-the-art methods,
the reference paper used, and then the proposed model of this paper
Sensitivity Specificity
Method ETA NTA PTA ETA NTA PTA
Amian and Soltaninejad [37] 0.68 0.82 0.74 1 0.99 1
Wang et al. [38] 0.766 0.897 0.826 0.998 0.995 0.996
Hamghalam et al. [39] 0.769 0.913 0.777 0.999 0.994 0.998
Micallef et al. [35] 0.723 0.867 0.763 0.998 0.994 0.997
Proposed solution 0.922 0.88 0.888 0.999 0.999 0.999
Figure 2. Sensitivity comparison of various state-of-the-art methods, the base paper, and our proposed model
Figure 3. Specificity comparison of various state-of-the-art methods, the base paper, and our proposed model
Hamghalam et al. [39] have shown that glioma pixels form more than half of the population in the
pathological sample of pixels, which gives the NTA the maximum specificity value. ETA has performed poorly
because low-grade glioma cells are not clearly demarcated from the surrounding brain cells, making
segmentation nearly impossible; high-grade gliomas, on the other hand, have well-defined contour shapes and
therefore yield higher values. The dice coefficient values show that the segmentation accuracy of the proposed
model is quite high. The Hausdorff distance for NTA and PTA shows large deviations and strong outliers;
hence, the 95th percentile of the largest segmentation error is taken. Table 2 shows 10 images from the BraTS
2019 dataset that the proposed solution has worked on. As can be seen, the results of the model and the ground
truth are quite close, making the model highly accurate.
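As an illustration of the 95% Hausdorff distance reported in Table 1, here is a plain-NumPy sketch over two boundary point sets; this follows the metric's standard definition under that assumption and is not code from the paper:

```python
import numpy as np

def hd95(pts_a, pts_b):
    """Sketch of the 95% Hausdorff distance: instead of the maximum
    surface-to-surface distance, take the 95th percentile of the directed
    nearest-neighbor distances in both directions, which suppresses the
    extreme outliers mentioned in the text."""
    a = np.asarray(pts_a, float)
    b = np.asarray(pts_b, float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    d_ab = d.min(axis=1)   # each point of A to its nearest point of B
    d_ba = d.min(axis=0)   # each point of B to its nearest point of A
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))
```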
Table 2. 10 images from the BraTS 2019 dataset that the proposed solution has worked on. Rows: (a) the
images from the BraTS 2019 dataset (MRI scans); (b) the ground truth of the glioma areas; (c) the results of
the base reference paper; (d) the final results obtained by using the images of row 'a' as input.
5. CONCLUSION AND FUTURE SCOPE
The existing system used a two-way pipeline with a full-resolution input image. Its Hausdorff
distance values have not been the best, especially for non-enhancing tumors, due to their diffuse pixels; it has
performed much better in those areas where the proposed solution has not done its best. As can be concluded,
the custom administering attention module has outperformed several state-of-the-art algorithms in terms of
sensitivity, specificity, dice coefficient, accuracy, and efficiency. The BraTS 2019 database is a good choice
for training due to its extensive samples, and the proposed model has provided nearly accurate results in
comparison with the ground truth. There are some ways to improve this model: one is to involve pipelining
and more intensive pixel clustering, thereby helping to segment the more diffuse NTA and peritumoral cells.
REFERENCES
[1] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Computer Society
Conference on Computer Vision and Pattern Recognition, Jun. 2016, vol. 2016-December, pp. 770–778, doi:
10.1109/CVPR.2016.90.
[2] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic
segmentation,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Jun. 2014,
pp. 580–587, doi: 10.1109/CVPR.2014.81.
[3] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” in Lecture Notes in
Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 9351,
2015, pp. 234–241, doi: 10.1007/978-3-319-24574-4_28.
[4] J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation,” in Proceedings of the IEEE
Computer Society Conference on Computer Vision and Pattern Recognition, Jun. 2015, vol. 07-12-June-2015, pp. 431–440, doi:
10.1109/CVPR.2015.7298965.
[5] N. Nuechterlein and S. Mehta, “3D-ESPNet with pyramidal refinement for volumetric brain tumor image segmentation,” in Lecture
Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol.
11384 LNCS, 2019, pp. 245–253, doi: 10.1007/978-3-030-11726-9_22.
[6] W. Chen, B. Liu, S. Peng, J. Sun, and X. Qiao, “S3-D U-Net: Separable 3-D U-Net for brain tumor segmentation,” in Lecture Notes
in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 11384
LNCS, 2019, pp. 358–368, doi: 10.1007/978-3-030-11726-9_32.
[7] F. Isensee, P. Kickingereder, W. Wick, M. Bendszus, and K. H. Maier-Hein, “No new-net,” BrainLes 2018: Brainlesion: Glioma,
Multiple Sclerosis, Stroke and Traumatic Brain Injuries, 2018, pp. 234–244, doi: 10.1007/978-3-030-11726-9_21.
[8] A. Myronenko, “3D MRI brain tumor segmentation using autoencoder regularization,” in Lecture Notes in Computer Science
(including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 11384 LNCS, 2019, pp. 311–
320, doi: 10.1007/978-3-030-11726-9_28.
[9] Z. Jiang, C. Ding, M. Liu, and D. Tao, “Two-stage cascaded U-Net: 1st place solution to brats challenge 2019 segmentation task,”
in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in
Bioinformatics), vol. 11992 LNCS, 2020, pp. 231–241, doi: 10.1007/978-3-030-46640-4_22.
[10] M. Islam, V. S. Vibashan, V. J. M. Jose, N. Wijethilake, U. Utkarsh, and H. Ren, “Brain tumor segmentation and survival prediction
using 3D attention U-Net,” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and
Lecture Notes in Bioinformatics), vol. 11992 LNCS, 2020, pp. 262–272, doi: 10.1007/978-3-030-46640-4_25.
[11] C. Chen, X. Liu, M. Ding, J. Zheng, and J. Li, “3D dilated multi-fiber network for real-time brain tumor segmentation in MRI,” in
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in
Bioinformatics), vol. 11766 LNCS, 2019, pp. 184–192, doi: 10.1007/978-3-030-32248-9_21.
[12] H. Xu, H. Xie, Y. Liu, C. Cheng, C. Niu, and Y. Zhang, “Deep cascaded attention network for multi-task brain tumor segmentation,”
in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in
Bioinformatics), vol. 11766 LNCS, 2019, pp. 420–428, doi: 10.1007/978-3-030-32248-9_47.
[13] Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, “3-D U-net: learning dense volumetric segmentation from
sparse annotation,” MICCAI 2016: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016, 2016, pp. 424–
432, doi: 10.1007/978-3-319-46723-8_49.
[14] F. Yu and V. Koltun, “Multi-scale context aggregation by dilated convolutions,” arXiv preprint arXiv:1511.07122, 2016, doi:
10.48550/arXiv.1511.07122.
[15] X. Wang, R. Girshick, A. Gupta, and K. He, “Non-local neural networks,” in Proceedings of the IEEE Computer Society Conference
on Computer Vision and Pattern Recognition, Jun. 2018, pp. 7794–7803, doi: 10.1109/CVPR.2018.00813.
[16] O. Oktay et al., “Attention U-Net: Learning where to look for the pancreas,” arXiv preprint arXiv:1804.03999, 2018, doi:
10.48550/arXiv.1804.03999.
[17] S. Manikandan, K. Ramar, M. W. Iruthayarajan, and K. G. Srinivasagan, “Multilevel thresholding for segmentation of medical
brain images using real coded genetic algorithm,” Measurement: Journal of the International Measurement Confederation, vol. 47,
no. 1, pp. 558–568, Jan. 2014, doi: 10.1016/j.measurement.2013.09.031.
[18] X. Xiao, S. Lian, Z. Luo, and S. Li, “Weighted Res-U-Net for high-quality retina vessel segmentation,” in Proceedings - 9th
International Conference on Information Technology in Medicine and Education, ITME 2018, Oct. 2018, pp. 327–331, doi:
10.1109/ITME.2018.00080.
[19] S. Cai, Y. Tian, H. Lui, H. Zeng, Y. Wu, and G. Chen, “Dense-U-Net: a novel multiphoton in vivo cellular image segmentation
model based on a convolutional neural network,” Quantitative Imaging in Medicine and Surgery, vol. 10, no. 6, pp. 1275–1285,
Jun. 2020, doi: 10.21037/QIMS-19-1090.
[20] N. Ibtehaz and M. S. Rahman, “MultiRes U-Net: rethinking the U-Net architecture for multimodal biomedical image segmentation,”
Neural Networks, vol. 121, pp. 74–87, Jan. 2020, doi: 10.1016/j.neunet.2019.08.025.
[21] Z. Zhou, M. M. R. Siddiquee, N. Tajbakhsh, and J. Liang, “U-Net++: a nested U-Net architecture for medical image segmentation,”
in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in
Bioinformatics), vol. 11045 LNCS, 2018, pp. 3–11, doi: 10.1007/978-3-030-00889-5_1.
[22] H. Huang et al., “U-Net 3+: A Full-scale connected U-Net for medical image segmentation,” in ICASSP, IEEE International
Conference on Acoustics, Speech and Signal Processing - Proceedings, May 2020, vol. 2020-May, pp. 1055–1059, doi:
10.1109/ICASSP40776.2020.9053405.
[23] J. Peng and Y. Wang, “Medical image segmentation with limited supervision: A review of deep network models,” IEEE Access,
vol. 9, pp. 36827–36851, 2021, doi: 10.1109/ACCESS.2021.3062380.
[24] R. Yousef, G. Gupta, C. H. Vanipriya, and N. Yousef, “A comparative study of different machine learning techniques for brain
tumor analysis,” Materials Today: Proceedings, Apr. 2021, doi: 10.1016/j.matpr.2021.03.303.
[25] J. Cheng, J. Liu, H. Yue, H. Bai, Y. Pan, and J. Wang, “Prediction of glioma grade using intratumoral and peritumoral radiomic
features from multiparametric MRI images,” IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 19, no.
2, pp. 1084–1095, 2022, doi: 10.1109/TCBB.2020.3033538.
[26] K. Muhammad, S. Khan, J. D. Ser, and V. H. C. D. Albuquerque, “Deep learning for multigrade brain tumor classification in smart
healthcare systems: A prospective survey,” IEEE Transactions on Neural Networks and Learning Systems, vol. 32, no. 2, pp. 507–
522, Feb. 2021, doi: 10.1109/TNNLS.2020.2995800.
[27] J. Cheng et al., “Multimodal disentangled variational autoencoder with game theoretic interpretability for glioma grading,” IEEE
Journal of Biomedical and Health Informatics, vol. 26, no. 2, pp. 673–684, Feb. 2022, doi: 10.1109/JBHI.2021.3095476.
[28] C. Yu, J. Wang, C. Gao, G. Yu, C. Shen, and N. Sang, “Context prior for scene segmentation,” in Proceedings of the IEEE Computer
Society Conference on Computer Vision and Pattern Recognition, 2020, pp. 12413–12422, doi: 10.1109/CVPR42600.2020.01243.
[29] H. Jiang, T. Shi, Z. Bai, and L. Huang, “AHCNet: An application of attention mechanism and hybrid connection for liver tumor
segmentation in CT volumes,” IEEE Access, vol. 7, pp. 24898–24909, 2019, doi: 10.1109/ACCESS.2019.2899608.
[30] N. Abraham and N. M. Khan, “A novel focal tversky loss function with improved attention U-Net for lesion segmentation,” in
Proceedings - International Symposium on Biomedical Imaging, Apr. 2019, vol. 2019-April, pp. 683–687, doi:
10.1109/ISBI.2019.8759329.
[31] Z. L. Ni et al., “RAUNet: Residual attention U-Net for semantic segmentation of cataract surgical instruments,” in Lecture Notes
in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 11954
LNCS, 2019, pp. 139–149, doi: 10.1007/978-3-030-36711-4_13.
[32] A. Sinha and J. Dolz, “Multi-scale self-guided attention for medical image segmentation,” IEEE Journal of Biomedical and Health
Informatics, vol. 25, no. 1, pp. 121–130, Jan. 2021, doi: 10.1109/JBHI.2020.2986926.
[33] Z. Ming, J. Chazalon, M. M. Luqman, M. Visani, and J. C. Burie, “Simple triplet loss based on intra/inter-class metric learning for
face verification,” in Proceedings - 2017 IEEE International Conference on Computer Vision Workshops, ICCVW 2017, Oct. 2017,
vol. 2018-Janua, pp. 1656–1664, doi: 10.1109/ICCVW.2017.194.
[34] C. Yu, J. Wang, C. Peng, C. Gao, G. Yu, and N. Sang, “Learning a discriminative feature network for semantic segmentation,” in
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2018, pp. 1857–1866, doi:
10.1109/CVPR.2018.00199.
[35] N. Micallef, D. Seychell, and C. J. Bajada, “Exploring the U-Net++ model for automatic brain tumor segmentation,” IEEE Access,
vol. 9, pp. 125523–125539, 2021, doi: 10.1109/ACCESS.2021.3111131.
[36] F. Isensee, P. Kickingereder, W. Wick, M. Bendszus, and K. H. Maier-Hein, “Brain tumor segmentation and radiomics survival
prediction: Contribution to the BRATS 2017 challenge,” in Lecture Notes in Computer Science (including subseries Lecture Notes
in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 10670 LNCS, 2018, pp. 287–297, doi: 10.1007/978-3-319-
75238-9_25.
[37] M. Amian and M. Soltaninejad, “Multi-resolution 3D CNN for MRI brain tumor segmentation and survival prediction,” in Lecture
Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol.
11992 LNCS, 2020, pp. 221–230, doi: 10.1007/978-3-030-46640-4_21.
[38] F. Wang, R. Jiang, L. Zheng, C. Meng, and B. Biswal, “3-D U-Net based brain tumor segmentation and survival days prediction,”
in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in
Bioinformatics), vol. 11992 LNCS, 2020, pp. 131–141, doi: 10.1007/978-3-030-46640-4_13.
[39] M. Hamghalam, B. Lei, and T. Wang, “Brain tumor synthetic segmentation in 3D multimodal MRI scans,” in Lecture Notes in
Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 11992
LNCS, 2020, pp. 153–162, doi: 10.1007/978-3-030-46640-4_15.
BIOGRAPHIES OF AUTHORS
Nagaveni B. Sangolgi is working as an assistant professor in the Department of
Electronics and Communications Engineering at Sharnbasva University, Kalaburagi. She
has received her M.Tech. She has published eight papers in international conferences.
Her area of interest is image processing. She can be contacted at email:
nagvenimk289@gmail.com.
Dr. Sasikala completed her bachelor's degree in electrical engineering from
Osmania University, Hyderabad, in 1985 with first-class distinction. She obtained her ME
in power systems from Osmania University in 1987 and was awarded a gold medal for
standing first in the university. She completed her Ph.D. in electrical engineering from
JNTU Hyderabad in 2008. She has worked as a professor/principal in various colleges
since 2008 and has been the principal of Godutai Engineering College for Women,
Sharnbasva University, Kalaburagi, since 2014. She has three decades (30 years) of
teaching experience, has published about 50 papers in various national and international
journals, and has presented about 26 papers in national and international conferences.
She can be contacted at email: sasi_mun@rediffmail.com.