Abstract
This paper presents a new image processing technique for brain tissue segmentation, aimed at the recognition of brain diseases. The automatic level set (ALS) is a powerful method for segmenting brain tissues in MR images; it uses spatial Fuzzy C-Means (SFCM) to place the initial contour near the object's boundaries in order to increase the speed of the algorithm. The method's efficiency depends on selecting optimal values for its controlling parameters. In this paper, ALS is improved by optimally tuning these controlling parameters. The proposed method has two phases. In the first phase, the initial contour of ALS is determined via SFCM and image features are extracted; the optimal controlling parameters of ALS are then determined by a genetic algorithm. By feeding the image features and optimal controlling parameters to a generalized regression neural network (GRNN), a neural system is trained. In the second phase, the initial contour is specified and image features are extracted as inputs to the neural network trained in phase 1, whose outputs serve as the ALS controlling parameters. The results show that the accuracy of the proposed ALS is improved by about 1.4% with respect to the original ALS method: the proposed ALS not only retains the speed but also achieves higher accuracy.
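The SFCM initialization step above is at heart a fuzzy clustering of image intensities. Below is a minimal (non-spatial) Fuzzy C-Means sketch in NumPy, assuming a toy 1-D intensity feature; it is a generic illustration, not the authors' SFCM implementation:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal Fuzzy C-Means: returns cluster centers and membership matrix.
    X: (n_samples, n_features); m > 1 is the fuzzifier."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per sample
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))     # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# Toy 1-D "intensity" data: two well-separated tissue classes.
X = np.array([[0.1], [0.15], [0.2], [0.8], [0.85], [0.9]])
centers, U = fuzzy_c_means(X, c=2)
labels = U.argmax(axis=1)
```

In the segmentation setting, each pixel's intensity (plus, for SFCM, neighborhood information) would play the role of a row of `X`, and the resulting hard labels seed the initial contour.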
Telecardiology and Teletreatment System Design for Heart Failures Using Type-... (Waqas Tariq)
Proper diagnosis of heart failure is critical, since the appropriate treatment strongly depends on the underlying cause. Rapid diagnosis is also critical, since the effectiveness of some treatments depends on rapid initiation. In this paper, a new web-based telecardiology system is proposed for diagnosis, consultation, and treatment. The aim of the implemented system is to help the practitioner doctor when a patient's clinical findings suggest heart failure. The model consists of three subsystems. The first subsystem covers the recording and preprocessing phase: an electrocardiography (ECG) signal is recorded from the emergency patient and preprocessed to detect the RR intervals. The second subsystem classifies the RR intervals, i.e., it performs the actual diagnosis of heart failure. For this, a combined classification system was designed using the type-2 fuzzy c-means clustering (T2FCM) algorithm and neural networks; T2FCM was used to improve the performance of the neural networks, which achieved very high accuracy in classifying the RR intervals of the ECG signals. The training and testing data for this diagnostic system comprise five ECG signal classes. T2FCM is applied to the training data to select the best segments, and the new training set formed from these segments is classified by a neural network trained with the well-known backpropagation algorithm and the generalized delta learning rule. The third subsystem provides consultation and teletreatment between the practitioner (or family) doctor and a cardiologist at a research hospital through a prepared web page (www.telekardiyoloji.com), with interfaces that allow both the practitioner and the expert doctor to evaluate the signals.
The recognition accuracy of the proposed Type-2 Fuzzy Clustering Neural Network (T2FCNN) method was found to be 99%.
Classification of Abnormalities in Brain MRI Images Using PCA and SVM (IJERA Editor)
The impact of digital image processing grows by the day through its use in medical and research areas. Medical image classification schemes are increasingly used to help physicians and medical practitioners in their evaluation and analysis of diseases. Several classification schemes, such as Artificial Neural Networks (ANN), Bayes classification, Support Vector Machines (SVM), and K-Nearest Neighbors, have been used. In this paper, we evaluate and compare the performance of SVM and PCA by analyzing diseased (Alzheimer's) and normal brain MRI images. The results show that Principal Component Analysis outperforms the Support Vector Machine in terms of training time and recognition time.
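PCA by itself is a dimensionality-reduction step rather than a classifier; a common way to use it for recognition (a generic sketch on synthetic data, not necessarily this paper's exact pipeline) is to project images onto the principal subspace and classify by the nearest class centroid:

```python
import numpy as np

def pca_fit(X, k):
    """Return the mean and the top-k principal axes of X (rows = samples)."""
    mu = X.mean(axis=0)
    # SVD of the centered data; rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def project(X, mu, W):
    return (X - mu) @ W.T

# Toy "images": two classes separated along the first feature.
rng = np.random.default_rng(0)
A = rng.normal([0.0, 0.0, 0.0, 0.0], 0.1, size=(20, 4))
B = rng.normal([3.0, 0.0, 0.0, 0.0], 0.1, size=(20, 4))
X = np.vstack([A, B])
mu, W = pca_fit(X, k=1)
Z = project(X, mu, W)
cA, cB = Z[:20].mean(axis=0), Z[20:].mean(axis=0)   # class centroids in PCA space

def classify(x):
    z = project(x[None, :], mu, W)[0]
    return 0 if np.linalg.norm(z - cA) < np.linalg.norm(z - cB) else 1
```

Training here amounts to one SVD plus two centroids, which is consistent with the abstract's observation that a PCA-based scheme can be much cheaper to train than an SVM.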
Failure prediction of e-banking application system using adaptive neuro fuzzy... (IJECEIAES)
A problem often faced by IT operations units is determining the cause of an incident such as slow access to the internet banking URL, failure of some m-banking features, or even a complete outage of the e-banking service. The proposed method, which modifies ANFIS with a Fuzzy C-Means (FCM) clustering approach, is applied to detect four typical kinds of faults that may occur in an e-banking system, characterized by application response time, transactions per second, server utilization, and network performance. The input data, which serve as training and testing data, were obtained from e-banking monitoring results throughout 2017. The study shows that ANFIS modeling with FCM-optimized input has an RMSE of 0.006 and improves accuracy by 1.27% compared with ANFIS without FCM optimization.
BEARINGS PROGNOSTIC USING MIXTURE OF GAUSSIANS HIDDEN MARKOV MODEL AND SUPPOR... (IJNSA Journal)
Prognostics of the future health state relies on estimating the Remaining Useful Life (RUL) of physical systems or components based on their current health state. RUL can be estimated using three main approaches: model-based, experience-based, and data-driven. This paper deals with a data-driven prognostics method based on transforming the data provided by the sensors into models able to characterize the degradation behavior of bearings. For this purpose, we used the Support Vector Machine (SVM) as the modeling tool. Experiments on the recently published database from the PRONOSTIA platform clearly show the superiority of the proposed approach compared to well-established methods in the literature such as Mixture of Gaussians Hidden Markov Models (MoG-HMMs).
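As a rough illustration of the SVM machinery underlying such a data-driven model (a toy classification sketch on synthetic data, not the authors' RUL regression setup), here is a linear soft-margin SVM trained with Pegasos-style stochastic sub-gradient descent:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style sub-gradient training of a linear SVM.
    y must be in {-1, +1}; returns the weight vector w (no bias, for brevity)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)              # decreasing step size
            if y[i] * (w @ X[i]) < 1:          # hinge loss is active
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                              # only the regularizer acts
                w = (1 - eta * lam) * w
    return w

# Linearly separable toy data standing in for "degraded" vs "healthy" features.
X = np.array([[2.0, 1.0], [1.5, 2.0], [2.5, 2.5],
              [-2.0, -1.0], [-1.5, -2.0], [-2.5, -2.5]])
y = np.array([1, 1, 1, -1, -1, -1])
w = train_linear_svm(X, y)
preds = np.sign(X @ w)
```

A real RUL pipeline would instead use support vector regression on degradation indicators, but the max-margin training principle is the same.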
Comparison of Neural Network Training Functions for Hematoma Classification i... (IOSR Journals)
Classification is one of the most important tasks in the application areas of artificial neural networks (ANN). Training neural networks is a complex task in the supervised learning field of research; the main difficulty in adopting an ANN is finding the most appropriate combination of learning, transfer, and training functions for the classification task. We compared the performance of three families of training algorithms in a feed-forward neural network for brain hematoma classification. Under gradient-descent-based algorithms, we selected gradient descent backpropagation, gradient descent with momentum, and resilient backpropagation. Under conjugate-gradient-based algorithms, we selected scaled conjugate gradient backpropagation, conjugate gradient backpropagation with Polak-Ribière updates (CGP), and conjugate gradient backpropagation with Fletcher-Reeves updates (CGF). The last category is quasi-Newton algorithms, from which the BFGS and Levenberg-Marquardt algorithms were selected. The proposed work compares the training algorithms on the basis of mean squared error, accuracy, rate of convergence, and correctness of the classification. Our conclusions about the training functions are based on the simulation results.
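The difference between plain gradient descent and gradient descent with momentum, two of the compared training rules, can be seen on a toy quadratic loss (an illustrative sketch, not the paper's hematoma networks):

```python
import numpy as np

def gd(grad, w0, lr=0.05, steps=200):
    """Plain gradient descent: w <- w - lr * grad(w)."""
    w = np.array(w0, dtype=float)
    for _ in range(steps):
        w -= lr * grad(w)
    return w

def gd_momentum(grad, w0, lr=0.05, beta=0.9, steps=200):
    """Gradient descent with a momentum (velocity) term, as in
    'gradient descent with momentum' backpropagation."""
    w = np.array(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(steps):
        v = beta * v - lr * grad(w)
        w += v
    return w

# Quadratic bowl: loss(w) = 0.5 * w^T A w, minimized at the origin.
A = np.diag([1.0, 10.0])        # ill-conditioned curvature, like a hard loss surface
grad = lambda w: A @ w
w_gd = gd(grad, [1.0, 1.0])
w_mom = gd_momentum(grad, [1.0, 1.0])
```

Both rules reach the minimum here; the practical differences the paper measures (convergence rate, final error) come from how such update rules behave on real, non-quadratic loss surfaces.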
Multistage Classification of Alzheimer’s Disease (IJLT EMAS)
Alzheimer’s disease is a type of dementia that destroys memory and other mental functions. During the progression of the disease, certain protein deposits called plaques and tangles accumulate in the hippocampus, which is located in the temporal lobe of the brain. The disease is not a normal part of aging and worsens over time. Medical imaging techniques such as Magnetic Resonance Imaging (MRI), Computed Tomography (CT), and Positron Emission Tomography (PET) play a significant role in the diagnosis of the disease. In this paper, we propose a method for classifying MRI scans into Normal Control (NC), Mild Cognitive Impairment (MCI), and Alzheimer’s Disease (AD). The methodology comprises textural feature extraction, a feature reduction step, and classification of the images into the various stages. Classification has been performed with three classifiers, namely Support Vector Machine (SVM), Artificial Neural Network (ANN), and k-Nearest Neighbours (k-NN).
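Of the three classifiers compared, k-NN is the simplest to sketch. Below is a minimal NumPy version using hypothetical 2-D features for the three stages (a generic illustration, not the paper's textural-feature pipeline):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training samples."""
    d = np.linalg.norm(X_train - x, axis=1)      # Euclidean distances
    nearest = np.argsort(d)[:k]                  # indices of the k closest
    values, counts = np.unique(y_train[nearest], return_counts=True)
    return values[counts.argmax()]               # majority label

# Hypothetical 2-D feature clusters for the stages: NC=0, MCI=1, AD=2.
X_train = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.2],
                    [2.0, 2.0], [2.1, 1.9], [1.9, 2.1],
                    [4.0, 0.0], [4.1, 0.2], [3.9, 0.1]])
y_train = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2])
pred = knn_predict(X_train, y_train, np.array([2.05, 2.0]), k=3)
```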
Chronic kidney disease prediction is one of the most important issues in healthcare analytics, and prediction in the medical field is among the most interesting and challenging tasks. In this paper, we employ machine learning techniques to predict chronic kidney disease from clinical data, using algorithms such as Decision Tree (DT) and Naive Bayes (NB). The performance of these models is compared in order to select the best classifier for predicting chronic kidney disease on the given dataset.
STRATEGY FOR ELECTROMYOGRAPHY BASED DIAGNOSIS OF NEUROMUSCULAR DISEASES FOR A... (ijbbjournal)
Assistive rehabilitation aims at developing procedures and therapies that reinstate lost body functions for individuals with disabilities. Researchers have monitored the electrophysiological activity of muscles using biofeedback obtained from electromyogram (EMG) signals collected at appropriate innervation points. In this paper, we present a comprehensive technique for detecting neuromuscular disease in a subject and a strategy for continuous therapeutic assessment using the Rehabilitation Assessment Matrix. The decision-making tool has been trained on a wide spectrum of synthetic physiological data incorporating varying degrees of myopathy and neuropathy, from beginning stages to acute. The statistical, spectral, and cepstral features extracted from the EMG have been used to train a cascade correlation neural network classifier for disease assessment. The diagnostic yield of the classifier is 91.2% accuracy, 85.3% specificity, and 91.35% sensitivity. The strategy has also been extended to include isotonic contractions in addition to static isometric contractions. This comprehensive strategy is proposed to aid physicians in planning and scheduling treatment procedures to maximize the therapeutic value of the rehabilitation process.
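Statistical time-domain EMG features of the kind mentioned are computed per analysis window. A sketch with common choices (mean absolute value, RMS, zero crossings, waveform length), which may differ from the authors' exact feature set:

```python
import numpy as np

def emg_features(signal):
    """Common time-domain EMG features for one analysis window."""
    x = np.asarray(signal, dtype=float)
    mav = float(np.mean(np.abs(x)))                  # mean absolute value
    rms = float(np.sqrt(np.mean(x ** 2)))            # root mean square
    # Zero crossings: count sign changes between consecutive samples.
    zc = int(np.sum(np.signbit(x[:-1]) != np.signbit(x[1:])))
    wl = float(np.sum(np.abs(np.diff(x))))           # waveform length
    return {"mav": mav, "rms": rms, "zc": zc, "wl": wl}

feats = emg_features([0.5, -0.5, 0.5, -0.5])
```

Such per-window feature vectors, together with spectral and cepstral features, would form the classifier's input.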
Automated segmentation and classification technique for brain stroke (IJECEIAES)
Diffusion-Weighted Imaging (DWI) plays an important role in the diagnosis of brain stroke by providing detailed information on soft-tissue contrast in the brain. Conventionally, the differential diagnosis of brain stroke lesions is performed manually by professional neuroradiologists in a highly subjective and time-consuming process. This study proposes a segmentation and classification technique to detect brain stroke lesions on DWI. The stroke lesion types are acute ischemic, sub-acute ischemic, chronic ischemic, and acute hemorrhage. For segmentation, fuzzy c-means (FCM) combined with an active contour is proposed to delineate the lesion region; FCM is implemented together with the active contour to separate the cerebrospinal fluid (CSF) from the hypointense lesion. Pre-processing is applied to the DWI for image normalization, background removal, and image enhancement. The algorithm's performance has been evaluated using the Jaccard index, the Dice coefficient (DC), and both the false positive rate (FPR) and false negative rate (FNR); the average results are 0.55, 0.68, 0.23, and 0.23, respectively. First-order statistical features are then computed from the segmentation result as inputs to the classifier. For classification, a bagged tree classifier is proposed to identify the type of stroke, achieving an accuracy of 90.8%. Based on these results, the proposed technique has the potential to segment and classify brain stroke lesions from DWI images.
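The Jaccard index and Dice coefficient used for evaluation are computed directly from binary masks; a generic sketch:

```python
import numpy as np

def jaccard(a, b):
    """Jaccard index |A ∩ B| / |A ∪ B| for boolean masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def dice(a, b):
    """Dice coefficient 2|A ∩ B| / (|A| + |B|); equals 2J / (1 + J)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    total = a.sum() + b.sum()
    return 2 * np.logical_and(a, b).sum() / total if total else 1.0

# Toy predicted vs ground-truth lesion masks.
pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
```

The relation D = 2J / (1 + J) explains why the reported Dice (0.68) exceeds the reported Jaccard (0.55) on the same segmentations.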
A Learning Linguistic Teaching Control for a Multi-Area Electric Power System (CSCJournals)
This paper presents a new methodology for designing a neuro-fuzzy control for complex physical systems by developing a neural-fuzzy system that learns from linguistic teaching signals. The advantage of this technique is that it produces a simple, well-performing system, because it selects the fuzzy sets and the numerical values itself and can handle both kinds of input: the approach is able to process and learn numerical as well as linguistic information. The proposed control scheme is applied to a multi-area power system with hydraulic and thermal turbines.
fMRI Segmentation Using Echo State Neural Network (CSCJournals)
This research work proposes a new intelligent segmentation technique for functional Magnetic Resonance Imaging (fMRI), implemented using an Echo State Neural Network (ESN). Segmentation is an important process that helps in identifying objects in the image, yet existing methods are unable to segment the complicated profile of fMRI accurately. Correctly segmenting every pixel of the fMRI helps in properly locating a tumor, while the presence of noise and artifacts makes proper segmentation challenging. The proposed ESN acts as an estimation method with energy minimization; this estimation property yields better segmentation of the complicated fMRI profile. The performance of the new segmentation method is found to be better, with a higher peak signal-to-noise ratio (PSNR) of 61, compared to the PSNR of 57 for the existing back-propagation algorithm (BPA) segmentation method.
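PSNR, the figure of merit quoted above, is defined as 10·log10(MAX²/MSE); a generic sketch of the computation:

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE)."""
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        return float("inf")               # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy 8-bit image with one corrupted pixel.
ref = np.full((4, 4), 100.0)
noisy = ref.copy()
noisy[0, 0] += 16.0
value = psnr(ref, noisy)
```

Higher PSNR means the segmented output deviates less from the reference, which is the sense in which the ESN's 61 dB beats the BPA's 57 dB.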
A NOVEL APPROACH FOR FEATURE EXTRACTION AND SELECTION ON MRI IMAGES FOR BRAIN... (cscpconf)
Feature extraction is a method of capturing the visual content of an image: the process of representing the raw image in a reduced form to facilitate decision making such as pattern classification. The objective of this paper is to present a novel method of feature selection and extraction. The approach combines intensity-, texture-, and shape-based features and classifies the tissue as white matter, gray matter, CSF, abnormal, or normal area. The experiment is performed on 140 tumor-containing brain MR images from the Internet Brain Segmentation Repository. PCA and Linear Discriminant Analysis (LDA) were applied to the training sets, and the Support Vector Machine (SVM) classifier served as a comparison of nonlinear versus linear techniques. PCA and LDA are used to reduce the number of features. Feature selection using the proposed technique is more beneficial, as it analyzes the data according to the grouping class variable and yields a reduced feature set with high classification accuracy.
Cerebral infarction classification using multiple support vector machine with... (journalBEEI)
Stroke is the third leading cause of death in the world after heart disease and cancer, and it ranks first among diseases causing both mild and severe disability. The most common type of stroke is cerebral infarction, whose incidence increases every year in Indonesia. The disease does not only occur in the elderly but also in young, productive people, which makes early detection very important. Although a variety of medical methods are used to classify cerebral infarction, this study uses a multiple support vector machine with information gain feature selection (MSVM-IG). MSVM-IG combines IG feature selection with SVM, where the SVM is applied twice in the classification process, using the support vectors from the first stage as a new dataset. The data were obtained from Cipto Mangunkusumo Hospital, Jakarta. The proposed method was able to achieve an accuracy of 81%; therefore, it can be considered for obtaining better classification results.
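Information gain scores a feature by how much it reduces class-label entropy; a minimal pure-Python version for discrete features (a sketch of the selection idea only, not the full MSVM-IG pipeline):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """H(labels) minus the expected entropy after splitting on the feature."""
    n = len(labels)
    groups = {}
    for f, y in zip(feature, labels):
        groups.setdefault(f, []).append(y)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - remainder

labels  = [1, 1, 0, 0]
perfect = ["a", "a", "b", "b"]   # splits the classes exactly: IG = 1 bit
useless = ["a", "b", "a", "b"]   # carries no class information: IG = 0
```

Features are ranked by this score and only the top-scoring ones are passed to the (two-stage) SVM.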
Multilayer extreme learning machine for hand movement prediction based on ele... (journalBEEI)
Brain-computer interface (BCI) technology connects humans with machines via electroencephalography (EEG). The mechanism of BCI is pattern recognition, which proceeds by feature extraction and classification. Various feature extraction and classification methods can differentiate human motor movements, especially those of the hand, and combinations of these methods can greatly improve the accuracy of the results. This article explores the performance of nine feature-extraction types fed to a multilayer extreme learning machine (ML-ELM). The proposed method was tested on different numbers of EEG channels and different ML-ELM structures. Moreover, the performance of ML-ELM was compared with those of ELM, Support Vector Machine, and Naive Bayes in classifying real and imaginary hand movements in offline mode. The ML-ELM with the discrete wavelet transform (DWT) as feature extraction outperformed the other classification methods, with a highest accuracy of 0.98. The authors also found that the network structure influenced the accuracy of ML-ELM for the different tasks, feature extractions, and channels used.
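DWT feature extraction can be illustrated with a single-level Haar transform, which splits a signal into approximation (low-pass) and detail (high-pass) coefficients; this is a hand-rolled sketch, whereas real pipelines typically use a wavelet library:

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar DWT: scaled pairwise sums and differences."""
    x = np.asarray(signal, dtype=float)
    assert len(x) % 2 == 0, "even-length signal required"
    s = np.sqrt(2.0)
    approx = (x[0::2] + x[1::2]) / s    # low-pass: local averages (trend)
    detail = (x[0::2] - x[1::2]) / s    # high-pass: local differences (detail)
    return approx, detail

x = np.array([4.0, 2.0, 5.0, 5.0])
approx, detail = haar_dwt(x)
# The transform is orthogonal, so signal energy is preserved:
# ||x||^2 == ||approx||^2 + ||detail||^2
```

Statistics of the sub-band coefficients (energy, mean, variance per level) are a typical choice of EEG features to feed a classifier such as the ML-ELM.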
AN EFFICIENT PSO BASED ENSEMBLE CLASSIFICATION MODEL ON HIGH DIMENSIONAL DATA... (ijsc)
As the sizes of biomedical databases grow day by day, finding the essential features for disease prediction has become more complex due to high dimensionality and sparsity. Moreover, with the large number of micro-array datasets available in biomedical repositories, it is difficult to analyze, predict, and interpret the feature information using traditional feature-selection-based classification models, most of which suffer from computational issues such as dimension reduction, uncertainty, and class imbalance on microarray datasets. The ensemble classifier is a scalable model for the extreme learning machine thanks to its high efficiency and fast processing speed in real-time applications. The main objective of feature-selection-based ensemble learning models is to classify high-dimensional data with high computational efficiency and a high true positive rate. In the proposed model, an optimized Particle Swarm Optimization (PSO) based ensemble classification model was developed for high-dimensional microarray datasets. Experimental results show that the proposed model has high computational efficiency compared with traditional feature-selection-based classification models as far as accuracy, true positive rate, and error rate are concerned.
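A minimal particle swarm optimization loop minimizing a toy objective (a generic sketch of the optimizer itself, not the paper's ensemble wrapper, where the objective would score candidate feature subsets or model parameters):

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=200, seed=0,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Basic PSO: each particle tracks its personal best position;
    the swarm shares a global best that pulls all particles toward it."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Inertia + cognitive (personal-best) + social (global-best) terms.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, float(pbest_val.min())

# Sphere function: global minimum 0 at the origin.
best, best_val = pso(lambda p: float(np.sum(p ** 2)), dim=3)
```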
STRATEGY FOR ELECTROMYOGRAPHY BASED DIAGNOSIS OF NEUROMUSCULAR DISEASES FOR A...ijbbjournal
Assistive Rehabilitation aims at developing procedures and therapies which reinstate lost body functions
for individuals with disabilities. Researchers have monitored electrophysiological activity of muscles using
biofeedback obtained from Electromyogram signals collected at appropriate innervation points. In this
paper, we present a comprehensive technique for detection of neuromuscular disease in a subject and a
strategy for continuous therapeutic assessment using the Rehabilitation Assessment Matrix. The decision
making tool has been trained using a wide spectrum of synthetic physiological data incorporating varying
degrees of myopathy and neuropathy from beginning stages to acute. The statistical, spectral and cepstral
features extracted from EMG have been used to train a Cascade Correlation Neural Network Classifier
for disease assessment. The diagnostic yield of the classifier is 91.2% accuracy, 85.3% specificity and
91.35% sensitivity. The strategy has also been extended to include isotonic contractions in addition to
static isometric contractions. This comprehensive strategy is proposed to aid physicians plan and schedule
treatment procedures to maximize the therapeutic value of the rehabilitation process.
Automated segmentation and classification technique for brain strokeIJECEIAES
Difussion-Weighted Imaging (DWI) plays an important role in the diagnosis of brain stroke by providing detailed information regarding the soft tissue contrast in the brain organ. Conventionally, the differential diagnosis of brain stroke lesions is performed manually by professional neuroradiologists during a highly subjective and time- consuming process. This study proposes a segmentation and classification technique to detect brain stroke lesions based on diffusion-weighted imaging (DWI). The type of stroke lesions consists of acute ischemic, sub-acute ischemic, chronic ischemic and acute hemorrhage. For segmentation, fuzzy c-Means (FCM) and active contour is proposed to segment the lesion’s region. FCM is implemented with active contour to separate the cerebral spinal fluid (CSF) with the hypointense lesion. Pre-processing is applied to the DWI for image normalization, background removal and image enhancement. The algorithm performance has been evaluated using Jaccard Index, Dice Coefficient (DC) and both false positive rate (FPR) and false negative rate (FNR). The average results for the Jaccard index, DC, FPR and FNR are 0.55, 0.68, 0.23 and 0.23, respectively. First statistical order method is applied to the segmentation result to obtain the features for the classifier input. For classification technique, bagged tree classifier is proposed to classify the type of stroke. The accuracy results for the classification is 90.8%. Based on the results, the proposed technique has potential to segment and classify brain stroke lesion from DWI images.
A Learning Linguistic Teaching Control for a Multi-Area Electric Power SystemCSCJournals
This paper presents a new methodology for designing a neuro-fuzzy control for complex physical systems. By developing a Neural -Fuzzy system learning with linguistic teaching signals. The advantage of this technique is that, produce a simple and well-performing system because it selects the fuzzy sets and the numerical numbers and process both numerical and linguistic information. This approach is able to process and learn numerical information as well as linguistic information. The proposed control scheme is applied to a multi-area power system with hydraulic and thermal turbines.
fMRI Segmentation Using Echo State Neural NetworkCSCJournals
This research work proposes a new intelligent segmentation technique for functional Magnetic Resonance Imaging (fMRI). It has been implemented using an Echostate Neural Network (ESN). Segmentation is an important process that helps in identifying objects of the image. Existing segmentation methods are not able to exactly segment the complicated profile of the fMRI accurately. Segmentation of every pixel in the fMRI correctly helps in proper location of tumor. The presence of noise and artifacts poses a challenging problem in proper segmentation. The proposed ESN is an estimation method with energy minimization. The estimation property helps in better segmentation of the complicated profile of the fMRI. The performance of the new segmentation method is found to be better with higher peak signal to noise ratio (PSNR) of 61 when compared to the PSNR of the existing back-propagation algorithm (BPA) segmentation method which is 57.
A NOVEL APPROACH FOR FEATURE EXTRACTION AND SELECTION ON MRI IMAGES FOR BRAIN...cscpconf
Feature extraction is a method of capturing visual content of an image. The feature extraction is
the process to represent raw image in its reduced form to facilitate decision making such as
pattern classification. The objective of this paper is to present a novel method of feature
selection and extraction. This approach combines the Intensity, Texture, shape based features
and classifies the tumor as white matter, Gray matter, CSF, abnormal and normal area. The
experiment is performed on 140 tumor contained brain MR images from the Internet Brain
Segmentation Repository. PCA and Linear Discriminant Analysis (LDA) were applied on the
training sets. The Support Vector Machine (SVM) classifier served as a comparison of
nonlinear techniques Vs linear ones. PCA and LDA methods are used to reduce the number of
features used. The feature selection using the proposed technique is more beneficial as it
analyses the data according to grouping class variable and gives reduced feature set with high classification accuracy.
Cerebral infarction classification using multiple support vector machine with...journalBEEI
Stroke ranks as the third leading cause of death in the world after heart disease and cancer. It also occupies the first position among diseases causing both mild and severe disability. The most common type of stroke is cerebral infarction, whose incidence increases every year in Indonesia. The disease does not only occur in the elderly but also in young, productive people, which makes early detection very important. Although a variety of medical methods are used to classify cerebral infarction, this study uses a multiple support vector machine with information gain feature selection (MSVM-IG). MSVM-IG combines IG feature selection with an SVM applied twice in the classification process, using the support vectors from the first pass as a new dataset for the second. The data were obtained from Cipto Mangunkusumo Hospital, Jakarta. Based on the results, the proposed method achieved an accuracy of 81%; therefore, it can be considered for obtaining better classification results.
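The information-gain criterion used by MSVM-IG scores a feature by how much it reduces class entropy, IG(Y; X) = H(Y) − H(Y | X). A minimal sketch on hypothetical toy data (not the hospital dataset):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """IG(Y; X) = H(Y) - H(Y | X) for one discrete feature column."""
    n = len(labels)
    groups = {}
    for x, y in zip(feature, labels):
        groups.setdefault(x, []).append(y)
    cond = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - cond

# A feature that splits the classes perfectly has IG = H(Y) = 1 bit here.
x = [0, 0, 1, 1]
y = ["infarct", "infarct", "normal", "normal"]
print(information_gain(x, y))  # 1.0
```

Features are ranked by this score and the top-ranked subset is passed to the SVM stage.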
Multilayer extreme learning machine for hand movement prediction based on ele...journalBEEI
Brain-computer interface (BCI) technology connects humans with machines via electroencephalography (EEG). The mechanism of BCI is pattern recognition, which proceeds by feature extraction and classification. Various feature extraction and classification methods can differentiate human motor movements, especially those of the hand, and combinations of these methods can greatly improve the accuracy of the results. This article explores the performance of nine feature-extraction types computed by a multilayer extreme learning machine (ML-ELM). The proposed method was tested on different numbers of EEG channels and different ML-ELM structures. Moreover, the performance of ML-ELM was compared with those of ELM, Support Vector Machine, and Naive Bayes in classifying real and imaginary hand movements in offline mode. ML-ELM with the discrete wavelet transform (DWT) as feature extraction outperformed the other classification methods, with a highest accuracy of 0.98. The authors also found that the network structure influences the accuracy of ML-ELM depending on the task, the feature extraction used, and the channels used.
AN EFFICIENT PSO BASED ENSEMBLE CLASSIFICATION MODEL ON HIGH DIMENSIONAL DATA...ijsc
As biomedical databases grow day by day, finding essential features for disease prediction has become more complex due to high dimensionality and sparsity. Also, given the large number of micro-array datasets in biomedical repositories, it is difficult to analyze, predict, and interpret feature information using traditional feature-selection-based classification models, most of which suffer from computational issues such as dimension reduction, uncertainty, and class imbalance on microarray datasets. The ensemble classifier is one of the scalable models for the extreme learning machine because of its high efficiency and fast processing speed in real-time applications. The main objective of feature-selection-based ensemble learning models is to classify high-dimensional data with high computational efficiency and a high true positive rate. In this work, an optimized Particle Swarm Optimization (PSO) based ensemble classification model was developed on high-dimensional microarray datasets. Experimental results proved that the proposed model has high computational efficiency compared to traditional feature-selection-based classification models as far as accuracy, true positive rate, and error rate are concerned.
Comparison of Image Segmentation Algorithms for Brain Tumor DetectionIJMTST Journal
This paper deals with the implementation of simple algorithms for detecting the size and shape of a brain tumor in MRI images. Generally, a CT scan or MRI directed into the intracranial cavity produces a complete image of the brain, which is visually examined by the physician for detection and diagnosis of brain tumors. However, this method of detection resists accurate determination of the stage and size of the tumor. To avoid that, this project uses a computer-aided method for segmentation (detection) of brain tumors by applying Fuzzy C-Means, K-Means, Gaussian kernel, and Pillar K-means algorithms. The segmentation process includes a new mechanism for clustering the elements of high-resolution images in order to improve precision and reduce computation time. The system applies FCM, Gaussian kernel, and K-means clustering to the image, later optimized by the Pillar algorithm. It designates the initial centroid positions by calculating the Euclidean distance between each data point and all previous centroids, then selects the data points with the maximum distance as new initial centroids; all initial centroids are thus distributed according to the maximum accumulated distance metric. This also reduces the time for analysis. At the end of the process the tumor is extracted from the MRI image, and its exact position and shape are determined. The proposed approach is evaluated by comparison with K-means, Fuzzy C-Means, Gaussian kernel, and manually segmented algorithms; the experimental results clarify its effectiveness in improving segmentation quality in terms of precision and computational time.
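The centroid-seeding rule described here, picking each new centroid as the point with the maximum accumulated distance to those already chosen, can be sketched as follows. The choice of the first seed (the point farthest from the grand mean) and the toy 2-D points are assumptions for illustration, not taken from the paper:

```python
import math

def pillar_init(points, k):
    """Pillar-style seeding: the first centroid is taken as the point
    farthest from the grand mean; each subsequent centroid is the point
    with the maximum accumulated Euclidean distance to all previously
    chosen centroids."""
    dim = len(points[0])
    mean = tuple(sum(p[d] for p in points) / len(points) for d in range(dim))
    centroids = [max(points, key=lambda p: math.dist(p, mean))]
    while len(centroids) < k:
        centroids.append(
            max(points, key=lambda p: sum(math.dist(p, c) for c in centroids)))
    return centroids

pts = [(0, 0), (0, 1), (10, 0), (10, 1), (5, 10)]
print(pillar_init(pts, 3))
```

The resulting seeds are spread across the data, which is what lets the subsequent K-means/FCM passes converge faster and more precisely.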
FUZZY SEGMENTATION OF MRI CEREBRAL TISSUE USING LEVEL SET ALGORITHMAM Publications
The current study investigated a median filter with the fuzzy level set method to propose fuzzy segmentation of magnetic resonance imaging (MRI) cerebral tissue images. An MRI image was used as an input image. A median filter and fuzzy c-means (FCM) clustering were utilized to remove image noise and create image clusters, respectively. The image clusters showed initial and final cluster centers. The level set method was then used for segmentation after separating and extracting white matter from gray matter. Fuzzy c-means was sensitive to the choice of the initial cluster center. Improper center selection caused the method to produce suboptimal solutions. The proposed algorithm was successfully utilized to segment MRI cerebral tissue images. The algorithm efficiently performed segmentation of test MRI cerebral tissue images compared with algorithms proposed in previous studies.
Geometric Correction for Braille Document Images csandit
GAUSSIAN KERNEL BASED FUZZY C-MEANS CLUSTERING ALGORITHM FOR IMAGE SEGMENTATIONcscpconf
Image processing is an important research area in computer vision. Clustering is an unsupervised form of learning and can also be used for image segmentation. Many methods exist for image segmentation, which plays an important role in image analysis: it is one of the first and most important tasks in image analysis and computer vision. The proposed system presents a variation of the fuzzy c-means algorithm that provides image clustering. The kernel fuzzy c-means clustering algorithm (KFCM) is derived from the fuzzy c-means clustering algorithm (FCM) and improves clustering accuracy significantly compared with the classical FCM. The new algorithm is called the Gaussian kernel based fuzzy c-means clustering algorithm (GKFCM). The major characteristic of GKFCM is the use of a fuzzy clustering approach that aims to guarantee noise insensitivity and preservation of image detail. The objective of the work is to cluster the low-intensity inhomogeneity areas of noisy images using the clustering method, segmenting those portions separately using a content level set approach. The purpose of designing this system is to produce better segmentation results for images corrupted by noise, so that it can be useful in various fields of medical image analysis, such as tumor detection, study of anatomical structure, and treatment planning.
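A minimal one-dimensional sketch of the Gaussian-kernel fuzzy c-means idea: memberships use the kernel-induced distance 1 − K(x, v), and centers use the standard kernel-weighted fuzzy mean update. The intensity data and parameter values are illustrative assumptions, not the paper's:

```python
import math, random

def gaussian_kernel(x, v, sigma):
    """Gaussian (RBF) kernel between two scalar intensities."""
    return math.exp(-((x - v) ** 2) / (sigma ** 2))

def gkfcm(data, c=2, m=2.0, sigma=50.0, iters=30):
    """1-D Gaussian-kernel fuzzy c-means sketch."""
    lo, hi = min(data), max(data)
    # deterministic init: centers spread evenly over the intensity range
    centers = [lo + (j + 0.5) * (hi - lo) / c for j in range(c)]
    for _ in range(iters):
        # membership update: u_ij proportional to (1 - K(x_i, v_j))^(-1/(m-1))
        u = []
        for x in data:
            w = [(1.0 - gaussian_kernel(x, v, sigma) + 1e-12) ** (-1.0 / (m - 1))
                 for v in centers]
            s = sum(w)
            u.append([wi / s for wi in w])
        # center update: kernel-weighted fuzzy mean
        for j in range(c):
            num = sum((u[i][j] ** m) * gaussian_kernel(x, centers[j], sigma) * x
                      for i, x in enumerate(data))
            den = sum((u[i][j] ** m) * gaussian_kernel(x, centers[j], sigma)
                      for i, x in enumerate(data))
            centers[j] = num / den
    return sorted(centers)

# Illustrative two-mode intensity data (e.g. two tissue classes), not real MRI.
random.seed(0)
data = [random.gauss(60, 5) for _ in range(100)] + [random.gauss(180, 5) for _ in range(100)]
print(gkfcm(data))
```

The kernel down-weights far-away (noisy) samples in the center update, which is the mechanism behind the noise insensitivity claimed for GKFCM.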
Segmentation of Brain MR Images for Tumor Extraction by Combining Kmeans Clus...CSCJournals
Segmentation of images holds an important position in the area of image processing. It becomes more important when dealing with medical images, where pre- and post-surgery decisions are required to initiate and speed up the recovery process [5]. Computer-aided detection of abnormal tissue growth is primarily motivated by the necessity of achieving the maximum possible accuracy: manual segmentation of these abnormal tissues cannot compete with modern high-speed computing machines, which enable us to visually observe the volume and location of unwanted tissues. A well-known segmentation problem within MRI is the task of labeling voxels according to their tissue type, including White Matter (WM), Grey Matter (GM), Cerebrospinal Fluid (CSF), and sometimes pathological tissues such as tumor. This paper describes an efficient method for automatic brain tumor segmentation that extracts tumor tissue from MR images by combining the Perona and Malik anisotropic diffusion model for image enhancement with K-means clustering for grouping tissues belonging to a specific class. The proposed method uses T1-, T2-, and PD-weighted gray-level intensity images and produced appreciable results.
Some (though not all) of these are abstracts of good research papers published in reputable journals, showing a variety of innovative research ideas in which computer science serves real-life applications.
Quantitative Comparison of Artificial Honey Bee Colony Clustering and Enhance...idescitation
This paper introduces a comparison of two popular clustering algorithms for breast DCE-MRI segmentation. Magnetic resonance imaging (MRI) is an advanced medical imaging technique providing rich information about human soft-tissue anatomy, and the goal of breast MR image segmentation is to accurately identify the principal mass or lesion structures in these image volumes. Many methods exist to segment breast DCE-MR images. Among these, the K-means clustering procedure provides effective solutions in many science and engineering fields; it is especially popular in pattern classification and signal processing and can segment breast DCE-MRI with high precision. The artificial bee colony (ABC) algorithm is a new, very simple, and robust population-based optimization algorithm inspired by the intelligent behavior of honey bee swarms. This paper compares the performance of the two segmentation techniques on real dynamic contrast-enhanced magnetic resonance images (DCE-MRI) of the breast. Results show that the artificial bee colony algorithm performs better in terms of segmentation accuracy, robustness, and speed of computation.
Automatic Diagnosis of Abnormal Tumor Region from Brain Computed Tomography I...ijcseit
The research work presented in this paper achieves tissue classification and automatically diagnoses the abnormal tumor region present in Computed Tomography (CT) images using a wavelet-based statistical texture analysis method. Comparative studies are performed between the proposed wavelet-based texture analysis method and the Spatial Gray Level Dependence Method (SGLDM). Our proposed system consists of four phases: (i) discrete wavelet decomposition, (ii) feature extraction, (iii) feature selection, and (iv) analysis of the extracted texture features by a classifier. A wavelet-based statistical texture feature set is derived from normal and tumor regions, and a Genetic Algorithm (GA) is used to select the optimal texture features from the set of extracted features. We construct a Support Vector Machine (SVM) based classifier and evaluate its performance by comparing its classification results with those of a Back-Propagation Neural network classifier (BPN). The results of the SVM and BPN classifiers for the texture analysis methods are evaluated using Receiver Operating Characteristic (ROC) analysis. Experimental results show that the classification accuracy of the SVM is 96% under 10-fold cross-validation. The system has been tested on a number of real CT brain images and has achieved satisfactory results.
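The GA-based feature-selection step can be sketched as a plain genetic algorithm over feature bitmasks. The fitness function below is a toy stand-in for the SVM classification accuracy used in the paper, and all names and parameter values are illustrative assumptions:

```python
import random

def ga_select(n_features, fitness, pop_size=30, gens=40, p_mut=0.02):
    """Plain GA over feature bitmasks: tournament selection, one-point
    crossover, bit-flip mutation. `fitness` stands in for classifier
    accuracy on the selected feature subset."""
    pop = [[random.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(gens):
        def tourney():
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tourney(), tourney()
            cut = random.randrange(1, n_features)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [bit ^ (random.random() < p_mut) for bit in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Toy stand-in fitness: features 0-4 are "informative", the rest add noise.
random.seed(1)
informative = set(range(5))
fit = lambda mask: sum(1 for i, b in enumerate(mask) if b and i in informative) \
                   - 0.2 * sum(1 for i, b in enumerate(mask) if b and i not in informative)
best = ga_select(20, fit)
print(best)
```

In the paper the fitness would be cross-validated SVM accuracy on the texture features encoded by the bitmask, which is far more expensive but structurally identical.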
Similar to Brain Tissues Segmentation in MR Images based on Level Set Parameters Improvement
OPTIMIZING SIMILARITY THRESHOLD FOR ABSTRACT SIMILARITY METRIC IN SPEECH DIAR...mathsjournal
Speaker diarization is a critical task in speech processing that aims to identify "who spoke when?" in an audio or video recording containing unknown amounts of speech from an unknown number of unknown speakers. Diarization has numerous applications in speech recognition, speaker identification, and automatic captioning. Supervised and unsupervised algorithms are used to address speaker diarization problems, but providing exhaustive labeling for the training dataset can become costly in supervised learning, while accuracy can be compromised when using unsupervised approaches. This paper presents a novel approach to speaker diarization that defines loosely labeled data and employs x-vector embeddings together with a formalized approach to threshold searching, given an abstract similarity metric, to cluster temporal segments into unique user segments. The proposed algorithm uses concepts from graph theory, matrix algebra, and genetic algorithms to formulate and solve the optimization problem. The algorithm is applied to English, Spanish, and Chinese audio, and its performance is evaluated using well-known similarity metrics; the results demonstrate the robustness of the proposed approach. The findings have significant implications for speech processing and speaker identification, including for languages with tonal differences. The proposed method offers a practical and efficient solution for speaker diarization in real-world scenarios with labeling time and cost constraints.
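The core clustering step, grouping temporal segments whose pairwise similarity meets a threshold, can be sketched with union-find over the thresholded similarity graph (one speaker per connected component). The similarity matrix below is a made-up toy, not an x-vector output:

```python
def cluster_segments(sim, threshold):
    """Group segments whose pairwise similarity >= threshold, using
    union-find on the thresholded similarity graph; returns one cluster
    label per segment, numbered in order of first appearance."""
    n = len(sim)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if sim[i][j] >= threshold:
                parent[find(i)] = find(j)
    labels = [find(i) for i in range(n)]
    order = {}
    return [order.setdefault(l, len(order)) for l in labels]

# Toy similarity matrix for four segments from two speakers.
sim = [[1.0, 0.9, 0.2, 0.1],
       [0.9, 1.0, 0.3, 0.2],
       [0.2, 0.3, 1.0, 0.8],
       [0.1, 0.2, 0.8, 1.0]]
print(cluster_segments(sim, 0.5))  # → [0, 0, 1, 1]
```

Searching over the threshold value (by the genetic algorithm in the paper) then amounts to scoring the clusterings this function produces at different thresholds.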
A POSSIBLE RESOLUTION TO HILBERT’S FIRST PROBLEM BY APPLYING CANTOR’S DIAGONA...mathsjournal
We present herein a new approach to the Continuum Hypothesis (CH). We employ string conditioning, a technique that limits the range of a string over some of its sub-domains, to form subsets K of R. We prove that these are well defined and in fact proper subsets of R by using Cantor's diagonal argument in its original form to establish that the cardinality of K lies between those of N and R.
A Positive Integer N Such That p_n + p_{n+3} ~ p_{n+1} + p_{n+2} For All n ≥ N mathsjournal
According to Bertrand's postulate, we have p_n + p_n ≥ p_{n+1}. Is it true that p_{n-1} + p_n ≥ p_{n+1} for all n > 1? And is it true that p_n + p_{n+3} > p_{n+1} + p_{n+2} for all n ≥ N, where N is a large enough value?
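The strict inequality asked about is easy to probe numerically: it is equivalent to the prime gap p_{n+3} − p_{n+2} exceeding the gap p_{n+1} − p_n, which fails frequently in small ranges. A quick check with a simple sieve:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

p = primes_up_to(10000)  # p[0] = 2, so the n-th prime p_n is p[n - 1]

# 1-based indices n where the strict inequality p_n + p_{n+3} > p_{n+1} + p_{n+2} FAILS
failures = [n + 1 for n in range(len(p) - 3)
            if p[n] + p[n + 3] <= p[n + 1] + p[n + 2]]
print(len(p), len(failures), failures[:5])
```

For example n = 3 gives 5 + 13 = 7 + 11, already an equality rather than a strict inequality, which is why the question is whether the relation holds only beyond some large N (or merely asymptotically, as the tilde in the title suggests).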
Moving Target Detection Using CA, SO and GO-CFAR detectors in Nonhomogeneous ...mathsjournal
systems in complex situations. A fundamental problem in radar systems is to automatically detect targets while maintaining a
desired constant false alarm probability. This work studies two detection approaches, the first with a fixed threshold and the
other with an adaptive one. In the latter, we have learned the three types of detectors CA, SO, and GO-CFAR. This research
aims to apply intelligent techniques to improve detection performance in a nonhomogeneous environment using standard
CFAR detectors. The objective is to maintain the false alarm probability and enhance target detection by combining
intelligent techniques. With these objectives in mind, implementing standard CFAR detectors is applied to nonhomogeneous
environment data. The primary focus is understanding the reason for the false detection when applying standard CFAR
detectors in a nonhomogeneous environment and how to avoid it using intelligent approaches.
The Impact of Allee Effect on a Predator-Prey Model with Holling Type II Func...mathsjournal
There is currently much interest in predator–prey models across a variety of bioscientific disciplines. The focus is on quantifying predator–prey interactions, a quantification being formulated especially as regards climate change. In this article, a stability analysis is used to analyse the behaviour of a general two-species model with respect to the Allee effect (on the growth rate and nutrient limitation level of the prey population). We present a description of the local and non-local interaction stability of the model and detail the types of bifurcation that arise, proving that there is a Hopf bifurcation in the Allee effect module. A stable periodic oscillation was encountered, due to the Allee effect on the prey species. As a result, the positive equilibrium of the model could change from stable to unstable and then back to stable as the strength of the Allee effect (or the 'handling' time taken by predators when predating) increased continuously from zero. The Hopf bifurcation gives rise to complex patterns not observed previously in predator–prey models, which at the same time reflect long-term behaviours. These findings have significant implications for ecological studies, not least with respect to examining the mobility of the two species in the non-local domain using Turing instability. A spiral generated by local interaction (reflecting the instability that forms even when an infinitely large carrying capacity is assumed) is used in the model.
Modified Alpha-Rooting Color Image Enhancement Method on the Two Side 2-D Qua...mathsjournal
Color in an image is resolved into 3 or 4 color components, and the 2-D images of these components are stored in separate channels. Most color image enhancement algorithms are applied channel by channel to each image, but such a system of color image processing does not process the original color. When a color image is represented as a quaternion image, processing is done in the original colors. This paper proposes an implementation of the quaternion approach to enhancement for color images, referred to as modified alpha-rooting by the two-dimensional quaternion discrete Fourier transform (2-D QDFT). Enhancement results of the proposed method are compared with channel-by-channel image enhancement by the 2-D DFT. Enhancement of color images is quantitatively measured by the color enhancement measure estimation (CEME), which allows optimum processing parameters to be selected by a genetic algorithm. Enhancement of color images by the quaternion-based method yields images that are closer to the genuine representation of the real original color.
An Application of Assignment Problem in Laptop Selection Problem Using MATLABmathsjournal
The assignment–selection problem is used to find a one-to-one match of given users to laptops; the main objective is to minimize the cost as per user requirements. This paper presents a satisfactory solution for a real assignment problem, laptop selection, using MATLAB code.
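For small instances the assignment model can be checked by exhaustive search over all user→laptop permutations; a Python sketch (the cost matrix is hypothetical, and a real solver would use the Hungarian method, as MATLAB's optimization routines do, for larger n):

```python
from itertools import permutations

def best_assignment(cost):
    """Exhaustive search over user -> laptop assignments minimizing total
    cost; cost[i][j] is the cost of assigning user i to laptop j.
    O(n!) — fine only for small n."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return best, sum(cost[i][best[i]] for i in range(n))

# Hypothetical 3-user, 3-laptop cost matrix.
cost = [[9, 2, 7],
        [6, 4, 3],
        [5, 8, 1]]
assignment, total = best_assignment(cost)
print(assignment, total)  # → (1, 0, 2) with total cost 2 + 6 + 1 = 9
```

The returned tuple maps each user (index) to a laptop, with every laptop used exactly once.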
The aim of this paper is to study the class of β-normal spaces. The relationships among s-normal spaces, p-normal spaces, and β-normal spaces are investigated. Moreover, we study the forms of generalized β-closed functions, obtaining characterizations of β-normal spaces, properties of the forms of generalized β-closed functions, and preservation theorems.
Cubic Response Surface Designs Using Bibd in Four Dimensionsmathsjournal
Response Surface Methodology (RSM) has applications in the chemical, physical, meteorological, industrial, and biological fields. The estimation of the slope of a response surface occurs frequently in practical situations: the rates of change of the response, such as the rate of change in crop yield with respect to various fertilizers, or rates of change in chemical experiments, are of interest to the experimenter. If a second-order fit is inadequate for the design points, the experiment is continued so as to fit a third-order response surface; higher-order response surface designs are sometimes needed in industrial and meteorological applications. Gardiner et al. (1959) introduced third-order rotatable designs for exploring response surfaces. Anjaneyulu et al. (1994-1995) constructed third-order slope rotatable designs using doubly balanced incomplete block designs, and Anjaneyulu et al. (2001) introduced third-order slope rotatable designs using central composite type design points. Seshu Babu et al. (2011) studied a modified construction of third-order slope rotatable designs using central composite designs, and Seshu Babu et al. (2014) constructed TOSRD using BIBD. In view of the wide applicability of third-order models in RSM and the importance of slope rotatability, we introduce cubic slope rotatable designs using BIBD in four dimensions.
The caustics that occur in geodesics in space-times which are solutions of the gravitational field equations, with the energy-momentum tensor satisfying the dominant energy condition, can be circumvented if quantum variations are allowed. An action is developed such that its variation yields the field equations and the geodesic condition, and its quantization provides a method for determining the extent of the wave packet around the classical path.
Approximate Analytical Solution of Non-Linear Boussinesq Equation for the Uns...mathsjournal
For a one-dimensional, homogeneous, isotropic aquifer without accretion, the governing Boussinesq equation under the Dupuit assumptions is a nonlinear partial differential equation. In the present paper an approximate analytical solution of the nonlinear Boussinesq equation is obtained using the homotopy perturbation transform method (HPTM). The solution is compared with the exact solution; the comparison shows that the HPTM is efficient, accurate, and reliable. The effects on the height of the water table of two important aquifer parameters, viz. specific yield and hydraulic conductivity, are analysed, and the results agree well with the physical phenomena.
Common Fixed Point Theorems in Compatible Mappings of Type (P*) of Generalize...mathsjournal
In this paper, we give new definitions of compatible mappings of type (P), type (P-1), and type (P-2) in intuitionistic generalized fuzzy metric spaces, and prove common fixed point theorems for six mappings under the conditions of compatible mappings of type (P-1) and type (P-2) in complete intuitionistic fuzzy metric spaces. Our results intuitionistically fuzzify the result of Muthuraj and Pandiselvi [15].
Mathematics subject classifications: 45H10, 54H25
A Probabilistic Algorithm for Computation of Polynomial Greatest Common with ...mathsjournal
In earlier work, Knuth presented an algorithm, called the subresultant algorithm, to decrease the coefficient growth in the Euclidean algorithm for polynomials. However, the output polynomials may contain a small removable factor. Later, Brown of Bell Telephone Laboratories presented the subresultant algorithm in another way by adding a variant called τ and gave a way to compute it; nevertheless, that method fails to determine every τ correctly.
In this paper, we give a probabilistic algorithm that determines the variant τ correctly in most cases by adding a few steps instead of computing t(x), where, given f(x) and g(x) ∈ Z[x], t(x) satisfies s(x)f(x) + t(x)g(x) = r(x) with t(x), s(x) ∈ Z[x].
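The Bézout identity s(x)f(x) + t(x)g(x) = r(x) can be computed directly, without the subresultant machinery, by the extended Euclidean algorithm over the rationals. A sketch with exact Fraction arithmetic (illustrative background, not the paper's probabilistic algorithm; polynomials are coefficient lists, highest degree first):

```python
from fractions import Fraction

def pdiv(a, b):
    """Long division a = q*b + r over Q; returns (quotient, remainder)."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    q = []
    while len(a) >= len(b):
        coef = a[0] / b[0]
        q.append(coef)
        for i in range(len(b)):
            a[i] -= coef * b[i]
        a.pop(0)  # leading coefficient is now zero
    return q or [Fraction(0)], a or [Fraction(0)]

def psub(a, b):
    """a - b with left zero-padding to equal length."""
    n = max(len(a), len(b))
    a = [Fraction(0)] * (n - len(a)) + [Fraction(c) for c in a]
    b = [Fraction(0)] * (n - len(b)) + [Fraction(c) for c in b]
    return [x - y for x, y in zip(a, b)]

def pmul(a, b):
    """Polynomial product by convolution of coefficient lists."""
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += Fraction(x) * y
    return out

def gcdex_poly(f, g):
    """Extended Euclid: returns (s, t, r) with s*f + t*g = r, where r is a
    greatest common divisor of f and g up to a rational scalar."""
    r0, r1 = f, g
    s0, s1 = [Fraction(1)], [Fraction(0)]
    t0, t1 = [Fraction(0)], [Fraction(1)]
    while any(r1):
        q, r = pdiv(r0, r1)
        r0, r1 = r1, r
        s0, s1 = s1, psub(s0, pmul(q, s1))
        t0, t1 = t1, psub(t0, pmul(q, t1))
    return s0, t0, r0

# f = x^2 + 1, g = x: gcd is 1, with 1*(x^2 + 1) + (-x)*x = 1.
s, t, r = gcdex_poly([1, 0, 1], [1, 0])
print(s, t, r)
```

Working over Q sidesteps the coefficient-growth problem that the subresultant algorithm and the paper's τ variant address for computations kept in Z[x].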
Table of Contents - September 2022, Volume 9, Number 2/3mathsjournal
Applied Mathematics and Sciences: An International Journal (MathSJ) aims to publish original research papers and survey articles in all areas of pure mathematics, theoretical applied mathematics, mathematical physics, theoretical mechanics, probability and mathematical statistics, and theoretical biology. All articles are fully refereed and are judged by their contribution to advancing the state of the science of mathematics.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if sometime changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Welocme to ViralQR, your best QR code generator.ViralQR
Welcome to ViralQR, your best QR code generator available on the market!
At ViralQR, we design static and dynamic QR codes. Our mission is to make business operations easier and customer engagement more powerful through the use of QR technology. Be it a small-scale business or a huge enterprise, our easy-to-use platform provides multiple choices that can be tailored according to your company's branding and marketing strategies.
Our Vision
We are here to make the process of creating QR codes easy and smooth, thus enhancing customer interaction and making business more fluid. We very strongly believe in the ability of QR codes to change the world for businesses in their interaction with customers and are set on making that technology accessible and usable far and wide.
Our Achievements
Ever since its inception, we have successfully served many clients by offering QR codes in their marketing, service delivery, and collection of feedback across various industries. Our platform has been recognized for its ease of use and amazing features, which helped a business to make QR codes.
Our Services
At ViralQR, here is a comprehensive suite of services that caters to your very needs:
Static QR Codes: Create free static QR codes. These QR codes are able to store significant information such as URLs, vCards, plain text, emails and SMS, Wi-Fi credentials, and Bitcoin addresses.
Dynamic QR codes: These also have all the advanced features but are subscription-based. They can directly link to PDF files, images, micro-landing pages, social accounts, review forms, business pages, and applications. In addition, they can be branded with CTAs, frames, patterns, colors, and logos to enhance your branding.
Pricing and Packages
Additionally, there is a 14-day free offer to ViralQR, which is an exceptional opportunity for new users to take a feel of this platform. One can easily subscribe from there and experience the full dynamic of using QR codes. The subscription plans are not only meant for business; they are priced very flexibly so that literally every business could afford to benefit from our service.
Why choose us?
ViralQR will provide services for marketing, advertising, catering, retail, and the like. The QR codes can be posted on fliers, packaging, merchandise, and banners, as well as to substitute for cash and cards in a restaurant or coffee shop. With QR codes integrated into your business, improve customer engagement and streamline operations.
Comprehensive Analytics
Subscribers of ViralQR receive detailed analytics and tracking tools in light of having a view of the core values of QR code performance. Our analytics dashboard shows aggregate views and unique views, as well as detailed information about each impression, including time, device, browser, and estimated location by city and country.
So, thank you for choosing ViralQR; we have an offer of nothing but the best in terms of QR code services to meet business diversity!
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Brain Tissues Segmentation in MR Images based on Level Set Parameters Improvement
Applied Mathematics and Sciences: An International Journal (MathSJ), Vol. 1, No. 3, December 2014
Brain Tissues Segmentation in MR Images based on
Level Set Parameters Improvement
Mohammad Masoud Javidi1, Majid Hajizade2, and Mehdi Jafari Shahbazi3
1,2 Department of Computer Science, Faculty of Mathematics and Computer, Shahid Bahonar University of Kerman, Kerman, Iran
3 Department of Electronic Engineering, College of Technical and Engineering, Kerman Branch, Islamic Azad University, Kerman, Iran
Abstract
This paper presents a new image processing technique for brain tissue segmentation, specifically for the recognition of brain diseases. The automatic level set (ALS) is a powerful method for segmenting brain tissues in MR images; it uses spatial Fuzzy C-Means (SFCM) to place the initial contour near the object's boundaries in order to increase the speed of the algorithm. The method's efficiency depends on selecting optimal values of the controlling parameters. In this paper, the ALS is improved by optimal regulation of its controlling parameters. The proposed method contains two phases. In the first phase, the initial contour of the ALS is determined via SFCM and image features are extracted. Then, the optimal controlling parameters of the ALS are determined by a genetic algorithm. By applying the image features and optimal controlling parameters to a generalized regression neural network (GRNN), a neural system is trained. In the second phase, the initial contour is specified and image features are extracted as inputs to the neural network trained in phase 1; the outputs of the network are then used as the ALS controlling parameters. The results show that the accuracy of the proposed ALS is improved by about 1.4% with respect to the original ALS method. The proposed ALS not only retains the speed but also has a higher accuracy.
Keywords
Automatic level set, spatial FCM clustering, genetic algorithm, generalized regression neural network.
1. INTRODUCTION
The brain is the body's control center and one of its most important organs, so the health of this organ is essential. Many diseases threaten brain health and cause irreparable damage, such as multiple sclerosis (MS). Recognition of brain diseases is therefore vital for remedy and medical treatment. Progress in technology has produced various imaging modalities (such as CT, X-ray, MRI, US, PET, and SPECT) for imaging different organs of the body. These imaging techniques play important roles in recognizing illness. Magnetic resonance imaging (MRI) is used to study brain tissues because it offers higher resolution between different tissues and higher safety in contrast with other modalities. Extracting the gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) regions is the most important and challenging phase in analyzing brain MRI. After extracting the considered tissues, their structural features (such as size, appearance, and shape) are analyzed and finally different diseases can be recognized.
There are a lot of methods available for MRI brain image segmentation [1]–[3]. The presented methods are divided into three classes: manual, semi-automatic, and automatic [4]. Due to the complexity of brain tissues, noise, and the poor contrast of brain MRI, manual segmentation of brain tissues is time consuming and has a high error rate. Therefore, most of the presented methods are interactive and semi-automatic [5]. In semi-automatic algorithms, the initial segmentation and the controlling parameters are specified by the radiologist. Semi-automatic methods depend on the observation and interaction of experts during implementation. Thus, automatic methods are preferable for brain tissue segmentation.

Since in medical images the transitional regions between different tissues are not crisp [6], fuzzy inference systems (FIS) have been used for image segmentation in [7]–[9]. FCM clustering is an unsupervised technique, first proposed by Bezdek [10]. In traditional FCM, only pixel intensity is used for image segmentation, but in SFCM, to exploit the correlation between adjacent pixels, neighborhood pixel intensities are also used. This strategy reduces the effect of noise [11]–[13].

Active models, or deformable models, are popular in many applications. Deformable models are divided into two groups: parametric and nonparametric. The level set is a nonparametric deformable model, unlike the snake, which is parametric. Level set methods were first introduced by Osher and Sethian [14]. They are effective and efficient for medical image segmentation [15], [16]. Level set methods have advantages and disadvantages: the topology is changeable, but the computational cost is high and the speed is low. One of the biggest challenges in implementing level set methods is keeping the level set function close to the signed distance function. To overcome this challenge, the level set function needs to be re-initialized to the signed distance function periodically. This procedure leads to a stable level set function, but it increases the computational complexity, which reduces the speed. In [17], a variational level set formulation is used whose speed is increased by eliminating the costly re-initialization procedure. However, this variational formulation has some controlling parameters that are obtained by trial and error. Also, the initial segmentation is determined by the user, and therefore the method is semi-automatic. To solve this problem, integrated techniques were presented in [4], [15]. In the method proposed in [15], the initial segmentation is done by fuzzy clustering, its results are enhanced by morphological operations, and then the variational level set is applied for the final segmentation. An automatic segmentation method was also proposed in [4], in which the initial segmentation and the regulation of the controlling parameters are done by SFCM and the variational level set is used for the final segmentation. Nevertheless, the exact and optimal determination of the controlling parameter values affects the algorithm's efficiency. In this paper, an automatic method is proposed in which the initial segmentation is done by SFCM and the variational level set is applied for the final segmentation, with controlling parameters that are regulated precisely and optimally. The proposed method for optimally adjusting the controlling parameters has two phases. In the first phase, the image features are extracted and the optimal values of the level set controlling parameters are obtained by a genetic algorithm. Then, a learning process is run using a generalized regression neural network. In the second phase, the performance of the automatic level set, whose controlling parameters are regulated by the neural network from the first phase, is evaluated. The obtained results show that besides retaining the speed, the accuracy of the proposed automatic level set method is also improved.

The remaining sections of this paper are organized as follows. The proposed method is explained in Section 2. Experimental results are reported in Section 3, and finally the conclusion is given in Section 4.
2. PROPOSED METHOD
The block diagram of the proposed algorithm is shown in Figure 1. The method is implemented in two phases. In the first phase, the GRNN is trained by applying the extracted features as input and the optimal ALS controlling parameters (yielded by the genetic algorithm) as the target. The aim of the second phase is to evaluate the performance of the automatic level set whose controlling parameters are adjusted optimally. Therefore, by extracting features and applying them to the trained neural network, the values of the ALS controlling parameters are determined. Finally, the level set method is implemented with the optimal parameters and its performance is evaluated. The results show that the accuracy of the automatic level set is improved with respect to other existing methods.

The spatial FCM clustering process is explained in Part A. The automatic level set is expressed in Part B. In Part C, the extracted features are introduced, and finally in Parts D and E the genetic algorithm and the generalized regression neural network are explained, respectively.
Figure 1. Proposed method.
A. Spatial FCM Clustering
The main idea of FCM clustering is to distribute data among the clusters in such a way that data within the same cluster are sufficiently similar and data in different clusters are sufficiently different. In standard FCM clustering, the center of each cluster v_i and the membership function u_ij are computed from (1) and (2), subject to condition (3), in order to optimize (4) [11]:

v_i = ( Σ_{j=1}^n u_ij^m x_j ) / ( Σ_{j=1}^n u_ij^m )    (1)

u_ij = 1 / Σ_{k=1}^c ( ‖x_j − v_i‖ / ‖x_j − v_k‖ )^(2/(m−1))    (2)

Σ_{i=1}^c u_ij = 1,  ∀ j = 1, …, n    (3)

J = Σ_{i=1}^c Σ_{j=1}^n u_ij^m d_ij² = Σ_{i=1}^c Σ_{j=1}^n u_ij^m ‖x_j − v_i‖²    (4)

where x_j is a specific image pixel, u_ij represents the membership of the jth pixel in the ith cluster, v_i is the centroid of the ith cluster, ‖·‖ denotes a norm metric, and m is a parameter that controls the fuzziness of the obtained segmentation results. In this paper, SFCM is used to perform the initial segmentation and to set the initial contour of the automatic level set. In SFCM, unlike FCM, spatial information is used in the membership function computation as

h_ij = Σ_{k ∈ NB(x_j)} u_ik    (5)

where NB(x_j) is a 5×5 square window in the spatial domain centered on x_j, and h_ij expresses the membership of the jth pixel in the ith cluster v_i based on its neighborhood: if a large number of the neighbors of a pixel belong to one cluster, the spatial function of that pixel for that cluster will be larger. Finally, the spatial function is used to compute the membership function as [13]

u′_ij = ( u_ij^p h_ij^q ) / ( Σ_{k=1}^c u_kj^p h_kj^q )    (6)

where p and q are parameters that control the relative importance of u and h, respectively.
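To make the update rules concrete, the following is a minimal NumPy sketch of the FCM/SFCM computations in (1)–(2) and (5)–(6). It is an illustration under our own function names, not the implementation used in the paper.

```python
import numpy as np

def fcm_memberships(x, v, m=2.0):
    """Eq. (2): membership u[i, j] of pixel x_j in cluster i."""
    d = np.abs(x[None, :] - v[:, None]) + 1e-9          # d[i, j] = |x_j - v_i|
    ratio = (d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=1)                      # sum over clusters k

def fcm_centers(u, x, m=2.0):
    """Eq. (1): fuzzily weighted cluster centers."""
    um = u ** m
    return (um * x[None, :]).sum(axis=1) / um.sum(axis=1)

def spatial_function(u_img):
    """Eq. (5): h = sum of memberships over a 5x5 window around each pixel."""
    c, H, W = u_img.shape                               # u_img: (clusters, H, W)
    padded = np.pad(u_img, ((0, 0), (2, 2), (2, 2)))
    h = np.zeros_like(u_img)
    for dy in range(5):
        for dx in range(5):
            h += padded[:, dy:dy + H, dx:dx + W]
    return h

def sfcm_memberships(u_img, p=1, q=1):
    """Eq. (6): combine intensity memberships with the spatial function."""
    h = spatial_function(u_img)
    num = (u_img ** p) * (h ** q)
    return num / num.sum(axis=0, keepdims=True)
```

Iterating `fcm_memberships` and `fcm_centers` until the centers stabilize, then applying `sfcm_memberships` once per iteration, reproduces the SFCM scheme described above.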
B. Level Set

The automatic level set method, whose controlling parameter values are optimized, is used for the final brain MRI segmentation. The level set starts with a closed boundary (the initial contour) and deforms it step by step with operations such as shrinking and expanding, according to the image constraints. The level set is defined by a Lipschitz function φ(x, y): Ω → R on an image, where φ(x, y) is called the level set function and is defined by the boundary Γ as

φ(t, x, y) < 0,  (x, y) is inside Γ(t)
φ(t, x, y) = 0,  (x, y) is at Γ(t)
φ(t, x, y) > 0,  (x, y) is outside Γ(t)    (7)

In other words, Γ(t) is characterized by a particular level, which is usually the zero level of the function φ(t, x, y) at time t. In general, Γ(t) evolves according to the following nonlinear partial differential equation (PDE):

∂φ/∂t + F‖∇φ‖ = 0,  φ(0, x, y) = φ₀(x, y)    (8)

In the standard level set, the initial contour φ₀ is determined by the user. In addition, that method has some disadvantages. For example, since the standard level set function converts the two-dimensional segmentation into a three-dimensional one, the computational complexity is increased. Also, because of the periodic re-initialization of the level set function during evolution, the speed of the method is low. Here, a variational level set formulation is used in which the speed of the algorithm is increased by eliminating the re-initialization procedure. The evolution equation of the level set is [17]

∂φ/∂t = µζ(φ) + ξ(g, φ)    (9)

where the first term ζ(φ) is called the penalty term, which keeps the level set function close to the signed distance function and prevents its deviation:

ζ(φ) = ∆φ − div( ∇φ/‖∇φ‖ )    (10)

The second term ξ(g, φ), as in the standard level set method, attracts φ towards the variational boundary:

ξ(g, φ) = λδ(φ) div( g ∇φ/‖∇φ‖ ) + νg δ(φ)    (11)

where g is the edge detection function that stops the level set evolution near the optimal solution. It is defined as

g = 1 / ( 1 + ‖∇(G_σ * I)‖² )    (12)

In the proposed method, user interaction for determining the initial contour of the level set is eliminated; the contour is determined by the SFCM method explained in the previous part. Therefore, φ₀ is initialized by

φ₀(x, y) = −4ε(0.5 − B_k),  B_k = R_k ≥ b₀,  b₀ ∈ (0, 1)    (13)

where R_k is the image resulting from the SFCM method and ε is a constant regulating the Dirac function, which is defined as

δ_ε(x) = 0,  |x| > ε
δ_ε(x) = (1/(2ε)) ( 1 + cos(πx/ε) ),  |x| ≤ ε    (14)

Therefore, the level set function starts from an arbitrary binary region Ω₀:

φ₀(x, y) = −C,  (x, y) ∈ Ω₀
φ₀(x, y) = C,  otherwise    (15)

Finally, the evolution of the level set is

φ_{k+1}(x, y) = φ_k(x, y) + τ ( µζ(φ_k) + ξ(g, φ_k) )    (16)
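The discrete evolution of (13)–(16) can be sketched as follows. The finite-difference scheme and the default parameter values are illustrative assumptions on our part, not the paper's exact settings.

```python
import numpy as np

def init_phi(binary_region, eps=1.5):
    """Eq. (13): initial level set from the binary SFCM result."""
    return -4.0 * eps * (0.5 - binary_region.astype(float))

def dirac(phi, eps=1.5):
    """Eq. (14): regularized Dirac delta."""
    d = np.zeros_like(phi)
    mask = np.abs(phi) <= eps
    d[mask] = (1.0 / (2.0 * eps)) * (1.0 + np.cos(np.pi * phi[mask] / eps))
    return d

def div(fx, fy):
    """Divergence of a 2-D vector field via central differences."""
    return np.gradient(fx, axis=0) + np.gradient(fy, axis=1)

def evolve_once(phi, g, mu=0.2, lam=5.0, nu=1.5, tau=5.0, eps=1.5):
    """One step of eq. (16): phi <- phi + tau*(mu*zeta(phi) + xi(g, phi))."""
    gx, gy = np.gradient(phi)
    mag = np.sqrt(gx ** 2 + gy ** 2) + 1e-10         # avoid division by zero
    nx, ny = gx / mag, gy / mag                      # unit normal field
    zeta = div(gx, gy) - div(nx, ny)                 # penalty term, eq. (10)
    d = dirac(phi, eps)
    xi = lam * d * div(g * nx, g * ny) + nu * g * d  # edge term, eq. (11)
    return phi + tau * (mu * zeta + xi)
```

In use, `g` would be computed from the Gaussian-smoothed image gradient as in (12), and `evolve_once` iterated until the contour stabilizes.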
There are some controlling parameters in (16); they are listed in Table I.

TABLE I. AUTOMATIC LEVEL SET CONTROLLING PARAMETERS

σ:  controls the spread of the Gaussian smoothing function
C:  controls the gradient strength of the initial level set function
ε:  regulating constant for the Dirac function
µ:  weighting coefficient of the penalty term
λ:  coefficient of the contour length for smoothness regulation
ν:  artificial balloon force
τ:  time step of the level set evolution
b₀: threshold for converting fuzzy to crisp
The final segmentation results depend strongly on these parameter values. The method for optimally adjusting these controlling parameters is shown in Figure 1. The exact and optimal values of the level set controlling parameters are obtained by the generalized regression neural network. This network is trained in the first phase by applying the extracted features of the images as the input and the optimal values of the controlling parameters, obtained by the genetic algorithm, as the output.
C. Feature Extraction
In the proposed method, the extracted features serve as the input of the neural network. They are divided into three categories:
• Pixel value measurement features.
• Shape measurement features.
• Zernike features.
The pixel and shape features used are listed in Table II.

TABLE II. PIXEL AND SHAPE MEASUREMENT FEATURES

Pixel value measurement features: max intensity, min intensity, mean intensity.

Shape measurement features: area, centroid, bounding box, major axis length, minor axis length, eccentricity, orientation, convex area, filled area, Euler number, equivalent diameter, solidity, extent, perimeter.
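Several of the Table II measurements follow directly from a binary region mask and the underlying image. The sketch below computes an illustrative subset (the function name and dictionary keys are ours):

```python
import numpy as np

def region_features(img, mask):
    """A few of the Table II measurements for one binary region (illustrative)."""
    ys, xs = np.nonzero(mask)          # coordinates of the region's pixels
    vals = img[mask]                   # intensities inside the region
    area = int(mask.sum())
    return {
        "area": area,
        "centroid": (float(ys.mean()), float(xs.mean())),
        "bounding_box": (int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())),
        "max_intensity": float(vals.max()),
        "min_intensity": float(vals.min()),
        "mean_intensity": float(vals.mean()),
        # diameter of the circle with the same area as the region
        "equiv_diameter": float(np.sqrt(4.0 * area / np.pi)),
        # fraction of the bounding box covered by the region
        "extent": area / ((np.ptp(ys) + 1) * (np.ptp(xs) + 1)),
    }
```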
For Zernike feature extraction, the radial polynomials, the Zernike basis functions, and finally the Zernike moments are computed [18]. For an image of size N × N, the discrete form of the Zernike moments is expressed as

Z_{n,m} = ( (n + 1) / λ_N ) Σ_{c=0}^{N−1} Σ_{r=0}^{N−1} f(x, y) · V*_{n,m}(x, y)    (17)

where n is the order of the radial polynomial and m is the repetition of the azimuthal angle, satisfying the following constraints:

n ≥ 0,  |m| ≤ n,  n − |m| = 2k    (18)

In this work the following Zernike moments are used:

{ Z_{n,m} | n, m : 1 ≤ n ≤ 20 }    (19)

More details on computing Zernike moments can be found in [18], [19].
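As a sketch of how the discrete moments in (17) can be computed: the unit-disk mapping and the choice of λ_N as the number of pixels inside the disk are our assumptions (normalizations vary in the literature); see [18], [19] for the exact conventions.

```python
import math
import numpy as np

def radial_poly(n, m, rho):
    """Zernike radial polynomial R_{n,|m|}(rho)."""
    m = abs(m)
    R = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * math.factorial(n - s)
             / (math.factorial(s)
                * math.factorial((n + m) // 2 - s)
                * math.factorial((n - m) // 2 - s)))
        R = R + c * rho ** (n - 2 * s)
    return R

def zernike_moment(img, n, m):
    """Discrete Zernike moment Z_{n,m} of an N x N image, in the spirit of (17)."""
    N = img.shape[0]
    ys, xs = np.mgrid[0:N, 0:N]
    x = (2 * xs - N + 1) / (N - 1)   # map the pixel grid onto [-1, 1] x [-1, 1]
    y = (2 * ys - N + 1) / (N - 1)
    rho = np.sqrt(x ** 2 + y ** 2)
    theta = np.arctan2(y, x)
    inside = rho <= 1.0              # Zernike basis is defined on the unit disk
    V = radial_poly(n, m, rho) * np.exp(1j * m * theta)
    lam = inside.sum()               # lambda_N: pixel count in the disk (assumption)
    return (n + 1) / lam * np.sum(img[inside] * np.conj(V[inside]))
```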
D. Genetic Algorithm
The genetic algorithm (GA) belongs to a class of population-based stochastic algorithms, known as evolutionary algorithms, that are inspired by the principles of natural evolution. GA is based on the principle of "survival of the fittest," as in the natural phenomena of genetic inheritance and the Darwinian struggle for survival. It was first introduced by John Holland [20]. In the proposed method, GA is used in the first phase to find the optimal values of the ALS controlling parameters. GA works with a population of chromosomes (individuals, or solutions). Each chromosome has 8 genes, containing real numbers, which represent the different controlling parameters.
First, the initial population (called the first generation) of 30 chromosomes is randomly produced. In each generation, two different parents are selected from the current population by the roulette wheel selection method; information is swapped between them to generate two new offspring by the crossover operation:

c₁ = b × p₁ + (1 − b) × p₂
c₂ = (1 − b) × p₁ + b × p₂    (20)

where the vectors p₁ and p₂ are the two parent chromosomes and b is a random number between 0 and 1. The idea behind the roulette wheel selection method is that individuals with higher fitness have a higher probability of selection, so a fitness function is defined to compute the fitness value. The fitness function computes the ALS segmentation accuracy for each chromosome.

Then, mutation is applied: to perform the mutation operator, the new child is produced entirely at random from the search space. Rudolph [21] proved that a genetic algorithm converges if, in each production of a new generation, the best chromosome (the elite individual) of the previous generation is transferred to the new generation. Therefore, in this GA some chromosomes are transferred to the next generation at a rate of 0.5. The crossover rate is 0.85 and the mutation rate is 0.1.
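The selection and crossover steps above can be sketched as follows (the function names are ours; in the paper the fitness values would come from running the ALS segmentation with each chromosome's parameters):

```python
import random

def roulette_select(population, fitnesses):
    """Fitness-proportionate (roulette wheel) selection."""
    total = sum(fitnesses)
    r = random.uniform(0, total)
    acc = 0.0
    for chrom, fit in zip(population, fitnesses):
        acc += fit
        if acc >= r:        # higher fitness -> wider slice of the wheel
            return chrom
    return population[-1]

def arithmetic_crossover(p1, p2):
    """Eq. (20): blend two parent chromosomes with a random b in (0, 1)."""
    b = random.random()
    c1 = [b * g1 + (1 - b) * g2 for g1, g2 in zip(p1, p2)]
    c2 = [(1 - b) * g1 + b * g2 for g1, g2 in zip(p1, p2)]
    return c1, c2
```

Each gene of the two children is a convex combination of the parents' genes, so offspring always stay inside the parents' parameter range.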
E. Generalized Regression Neural Network

Nowadays, computational intelligence techniques such as neural networks, inspired by the human brain, play an important role in solving problems in different fields. The radial basis function (RBF) neural network is a special kind of neural network that creates a mapping from the input space to the output space. The generalized regression neural network (GRNN) is an enhancement of the RBF neural network that differs from it in the third layer, as shown in Figure 2. GRNN is a three-layer neural network, based on nonlinear regression theory, that is able to estimate nonlinear functions. GRNN is less sensitive to unstable inputs and can be trained more quickly. GRNN has an input layer, a radial basis layer, and a special linear layer, in which the number of neurons of the radial basis layer is equal to the number of input patterns.

Figure 2. GRNN structure.

In the GRNN, the radial basis function associated with the ith neuron in the radial basis layer is computed by

G_i = exp( −γ ‖X − X_i‖² / σ² )    (21)

where ‖·‖ is the Euclidean norm, σ is the spread of the radial basis function (chosen as 0.5), X_i is the ith learning pattern vector, and γ is a constant coefficient equal to 0.5.
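Given the stored training patterns, the GRNN output is a kernel-weighted average of the training targets, with the weights of (21). A minimal sketch (our naming; the standard GRNN regression formula, not the paper's MATLAB code):

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=0.5, gamma=0.5):
    """GRNN estimate: average of training targets weighted by eq. (21)."""
    d2 = np.sum((X_train - x) ** 2, axis=1)   # squared Euclidean distances
    G = np.exp(-gamma * d2 / sigma ** 2)      # radial basis activations
    return (G[:, None] * y_train).sum(axis=0) / G.sum()
```

Here `X_train` holds the feature vectors from the first phase, `y_train` the corresponding GA-optimized controlling parameters, and `x` the feature vector of a new image; the output is the estimated parameter vector.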
Figure 3. Segmentation results: a) original brain MRI, b) CSF tissue, Lee algorithm, c) CSF tissue, proposed method, d) GM tissue, Lee algorithm, e) GM tissue, proposed method, f) WM tissue, Lee algorithm, g) WM tissue, proposed method.
3. EXPERIMENTAL RESULTS
The BrainWeb MRI database is used to evaluate the performance of the proposed algorithm. The method has been implemented in MATLAB. The database contains 270 MR images, which are randomly divided so that 70% of the images are used in the first phase to establish the neural network and 30% are used in the second phase to evaluate the proposed method. The method is compared with the method of Lee et al. [4]. Three criteria, accuracy, specificity, and sensitivity, are used for evaluation purposes [22]:

Accuracy = (TP + TN) / (TP + TN + FP + FN)    (22)
Specificity = TN / (TN + FP)    (23)
Sensitivity = TP / (TP + FN)    (24)
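These criteria follow directly from the confusion counts (TP, TN, FP, FN); a small helper makes the definitions concrete:

```python
def evaluation_metrics(tp, tn, fp, fn):
    """Accuracy, specificity, and sensitivity from confusion counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)   # fraction of all pixels correct
    specificity = tn / (tn + fp)                 # true-negative rate
    sensitivity = tp / (tp + fn)                 # true-positive rate
    return accuracy, specificity, sensitivity
```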
The obtained results are shown in Table III. As is clear from this table, the proposed algorithm is more efficient than the automatic level set of Lee et al.
TABLE III. EVALUATION OF METHODS

Algorithm     Accuracy   Specificity   Sensitivity
Our method    97.03      98.24         87.94
Lee method    95.66      97.47         82.20
4. CONCLUSION
The automatic level set method is a powerful and fast method for brain tissue segmentation. Since the segmentation accuracy of this method depends strongly on the ALS controlling parameters, a new algorithm is presented to choose the best values. The proposed method contains two phases. In the first phase, the genetic optimization algorithm was used to find the optimal values of the controlling parameters, and a GRNN was trained to learn them. In the second phase, image features were extracted and applied to the neural network obtained from the first phase, and the level set was implemented with the values of the controlling parameters produced by the network. Experimental results showed that the automatic level set with accurately adjusted controlling parameters had better performance. Therefore, the proposed level set not only retained the speed but also extracted the brain tissues more accurately.

In future work, our goal is to use other optimization and classification algorithms to find optimal values of the controlling parameters and yield a higher accuracy for brain tissue segmentation.