The document discusses techniques for reconstructing missing data values in datasets using an artificial neural network approach. It presents a method based on successive approximations for estimating replacements for missing data. The technique iteratively recomputes the mean value of an attribute until the estimate converges, and the converged value replaces the missing entry. The method is compared to other techniques, such as omitting records or replacing missing values with the attribute mean, and is found to provide more accurate results.
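The summary above leaves the iteration implicit. One plausible reading of the successive-approximation scheme is sketched below; this is an illustration, not the paper's actual code, and the function name, initial guess and tolerance are assumptions:

```python
def impute_iterative(values, tol=1e-6, max_iter=100):
    """Estimate a single missing value (None) by successive approximation:
    start from a guess, then repeatedly recompute the attribute mean with
    the current estimate included, until the estimate stabilizes."""
    observed = [v for v in values if v is not None]
    n = len(values)
    estimate = 0.0  # initial guess for the missing entry (an assumption)
    for _ in range(max_iter):
        new_estimate = (sum(observed) + estimate) / n  # mean incl. estimate
        if abs(new_estimate - estimate) < tol:
            return new_estimate
        estimate = new_estimate
    return estimate

# Example: [2, 4, None, 6, 8] -> the estimate converges to 5.0,
# the fixed point of the iteration (here equal to the observed mean).
print(impute_iterative([2, 4, None, 6, 8]))
```

With a single attribute the iteration converges to the mean of the observed values; the scheme becomes more interesting when several attributes with interleaved gaps are updated in turn.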
AN EFFICIENT PSO BASED ENSEMBLE CLASSIFICATION MODEL ON HIGH DIMENSIONAL DATA... (ijsc)
As the size of biomedical databases grows day by day, finding the essential features for disease prediction has become more complex due to high dimensionality and sparsity problems. Also, given the availability of a large number of micro-array datasets in biomedical repositories, it is difficult to analyze, predict and interpret feature information using traditional feature-selection-based classification models. Most traditional feature-selection-based classification algorithms suffer from computational issues such as dimension reduction, uncertainty and class imbalance on microarray datasets. The ensemble classifier is one of the scalable models for extreme learning machines owing to its high efficiency and fast processing speed for real-time applications. The main objective of feature-selection-based ensemble learning models is to classify high dimensional data with high computational efficiency and a high true positive rate. In the proposed model, an optimized Particle Swarm Optimization (PSO) based ensemble classification model was developed on high dimensional microarray datasets. Experimental results show that the proposed model has higher computational efficiency than traditional feature-selection-based classification models as far as accuracy, true positive rate and error rate are concerned.
USING ARTIFICIAL NEURAL NETWORK IN DIAGNOSIS OF THYROID DISEASE: A CASE STUDY (ijcsa)
Nowadays, one of the main challenges that developing technology creates in the medical sciences is disease diagnosis with high accuracy. In recent decades, Artificial Neural Networks (ANNs) have been considered among the best solutions to achieve this goal and have been used in widespread research on disease diagnosis. In this paper, we consider a Multi-layer Perceptron (MLP) ANN trained with the back-propagation learning algorithm to classify thyroid disease. It consists of an input layer with 5 neurons, a hidden layer with 6 neurons and an output layer with just 1 neuron. The suitable choice of activation function, the number of neurons in the hidden layer and the number of layers is found by trial and error. Our simulation results indicate that the optimization performed on the MLP ANN can reach an accuracy level of 98.6%.
Classification is one of the most dynamic research and application areas of artificial neural networks (ANNs), a branch of Artificial Intelligence (AI). The neural network was trained with the back-propagation algorithm. Different combinations of functions and their effects when using an ANN as a classifier are studied, and the correctness of these functions is analyzed for various kinds of datasets. The back-propagation neural network (BPNN) can be a highly successful tool for dataset classification given a suitable combination of training, learning and transfer functions. When the maximum likelihood method was compared with the back-propagation neural network method, the BPNN was more accurate. A stable, well-functioning BPNN with high predictive ability is achievable. The multilayer feed-forward neural network algorithm is also used for classification; however, the BPNN proves more effective than other classification algorithms.
An Artificial Neural Network Model for Neonatal Disease Diagnosis (Waqas Tariq)
The significance of disease diagnosis by artificial intelligence is no longer obscure. The increasing adoption of Artificial Neural Networks for disease prediction has shown better performance in the field of medical decision making. This paper presents the use of artificial neural networks in neonatal disease diagnosis. The proposed technique involves training a Multi-Layer Perceptron with a back-propagation learning algorithm to recognize patterns for diagnosing and predicting neonatal diseases. A comparative study of different MLP training algorithms, Quick Propagation and Conjugate Gradient Descent, shows which achieves the higher prediction accuracy. The back-propagation algorithm was used to train the ANN architecture, and the same was tested on various categories of neonatal disease. About 94 cases with different sign and symptom parameters were tested in this model. This study demonstrates ANN-based prediction of neonatal disease and improves diagnosis accuracy to 75% with greater stability. Key words: Artificial Intelligence, Multi Layer Perceptron, Neural Network, Neonate
Sample size determination for classification of EEG signals using power analy... (iaemedu)
The document discusses determining the minimum sample size needed for classification of electroencephalogram (EEG) signals using machine learning. It proposes using power analysis to calculate the required sample size to separate classes with statistical stability. Power analysis was performed on a dataset of 500 EEG signals from 5 classes. The results found that a sample size of 81 signals is needed to achieve 95% power. Additional experiments varied the power level and error probability to relate their effects on minimum sample size. The sample sizes calculated from power analysis were validated using a decision tree classifier on the EEG dataset.
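The sample sizes above come from power analysis. The normal-approximation formula behind such calculations can be sketched as follows; this is a generic two-class sketch using only the standard library, and the function name and effect size are illustrative, not the paper's actual parameters:

```python
from math import ceil
from statistics import NormalDist

def sample_size_two_groups(effect_size, alpha=0.05, power=0.95):
    """Per-group sample size for detecting a standardized mean difference
    (Cohen's d) between two classes, via the normal approximation:
        n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = z.inv_cdf(power)            # quantile for the desired power
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return ceil(n)

# A large effect (d = 0.8) at alpha = 0.05 and 95% power:
print(sample_size_two_groups(0.8))  # -> 41 per group
```

Raising the power or shrinking the effect size inflates the required n, which matches the trends the experiments in the paper vary.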
Application of Hybrid Genetic Algorithm Using Artificial Neural Network in Da... (IOSRjournaljce)
The main purpose of data mining is to extract knowledge from large amounts of data. Artificial Neural Networks (ANNs) have already been applied in a variety of domains with remarkable success. This paper presents the application of a hybrid model for stroke disease that integrates a genetic algorithm with the back-propagation algorithm. Selecting a good subset of features without sacrificing accuracy is of great importance for neural networks to be applied successfully in this area. In addition, the hybrid model leads to further improved categorization accuracy compared to the results produced by the genetic algorithm alone. In this study, a new hybrid of Neural Networks and a Genetic Algorithm (GA) is used to initialize and optimize the connection weights of the ANN so as to improve its performance, and the same has been applied to the medical problem of predicting stroke disease for verification of the results.
IRJET- Analysis of Autism Spectrum Disorder using Deep Learning and the Abide... (IRJET Journal)
The document discusses analyzing autism spectrum disorder using deep learning and the ABIDE dataset. It summarizes previous literature on identifying ASD from brain imaging data using machine learning algorithms. Specifically, it examines using the ABIDE dataset, which contains brain imaging data from over 1,000 individuals with ASD and controls from multiple sites. Deep learning methods were able to reliably classify ASD versus controls from the multi-site dataset with 70% accuracy, identifying patterns of hypo-connectivity between anterior and posterior brain regions in ASD individuals. The areas of the brain that most contributed to differentiating ASD from controls according to the deep learning model are also identified.
Evaluation of Default Mode Network In Mild Cognitive Impairment and Alzheimer... (CSCJournals)
Although progressive functional brain network disorder has been one indication of Alzheimer's disease, current research on aging and dementia focuses on diagnosing the cognitive changes of normal aging and Alzheimer's Disease (AD); these changes are known as Mild Cognitive Impairment (MCI). The default mode network (DMN) is a network of interacting brain regions whose activity is highly correlated internally and distinct from other networks in the brain; it is active during passive rest and consists of a set of tightly functionally connected brain areas. Anatomically, the DMN includes the posterior cingulate cortex (PCC), the dorsal and ventral medial prefrontal cortex, the lateral parietal cortex, and the medial temporal lobes. The DMN involves multiple anatomical networks that converge on cortical hubs, such as the PCC, ventral medial prefrontal, and inferior parietal cortices. The aim of this study was to evaluate default mode network functional connectivity in MCI patients. While no treatments are currently recommended for MCI, Mild Cognitive Impairment is becoming a very important subject for researchers and deserves more recognition and further study, in order to improve the ability to recognize earlier symptoms of Alzheimer's disease.
A novel framework for efficient identification of brain cancer region from br... (IJECEIAES)
Diagnosing brain cancer with existing imaging techniques, e.g. Magnetic Resonance Imaging (MRI), is shrouded in challenges of various degrees. At present, there are very few significant research models focused on introducing novel and unique solutions to such detection problems. Moreover, existing techniques are found to be less accurate than other detection schemes. Therefore, this paper presents a framework introducing a series of simple and computationally cost-effective techniques that help raise the accuracy to a much higher level. The proposed framework takes the input image and subjects it to a non-conventional segmentation mechanism, followed by performance optimization using a directed acyclic graph, a Bayesian network, and a neural network. The study outcome of the proposed system shows a significantly higher degree of detection accuracy compared to frequently used existing approaches.
MOST READ ARTICLES IN ARTIFICIAL INTELLIGENCE - International Journal of Arti... (gerogepatton)
The International Journal of Artificial Intelligence & Applications (IJAIA) is a bi-monthly open-access peer-reviewed journal that publishes articles contributing new results in all areas of Artificial Intelligence and its applications. It is an international journal intended for professionals and researchers in all fields of AI, including programmers and software and hardware manufacturers. The journal also aims to publish special issues on emerging areas of Artificial Intelligence and applications.
During seizures, different types of communication between different parts of the brain are characterized by many state-of-the-art connectivity measures. We propose to employ a set of undirected features (spectral matrix, the inverse of the spectral matrix, coherence, partial coherence, and phase-locking value) and directed features (directed coherence, partial directed coherence) to detect seizures using a deep neural network. Treating our data as a sequence of ten sub-windows, an optimal deep sequence-learning architecture using attention, a CNN, a BiLSTM, and fully connected neural networks is designed to output the detection label and the relevance of the features. The relevance is computed from the weights of the model on the activation values of the receptive fields at a particular layer. The best model achieved 97.03% accuracy on a balanced MIT-BIH data subset. Finally, an analysis of the relevance of the features is reported.
Integrated Modelling Approach for Enhancing Brain MRI with Flexible Pre-Proce... (IJECEIAES)
Assuring the information quality of an input medical image is a critical step in offering a highly precise and reliable diagnosis of a patient's clinical condition. Such assurance becomes even more important when dealing with an organ as important as the brain. Magnetic Resonance Imaging (MRI) is one of the most trusted mediums for investigating the brain. Looking at existing trends in brain MRI research, it was observed that researchers are more inclined to investigate advanced problems, e.g. segmentation, localization, classification, etc., on image datasets. Less work has been carried out on image pre-processing, which potentially affects the later stages of diagnosis. Therefore, this paper introduces a novel integrated image-enhancement model capable of solving the different, discrete problems of image pre-processing in order to offer a highly improved and enhanced brain MRI. The comparative outcomes show the advantage of its simple implementation strategy.
IRJET- Effect of Principal Component Analysis in Lung Cancer Detection us... (IRJET Journal)
This document discusses using machine learning techniques to detect lung cancer from data more accurately and quickly. It summarizes that lung cancer is a leading cause of cancer death worldwide. Current diagnosis methods like CT scans can detect small lung lesions but take time. The document proposes using machine learning algorithms on lung cancer data to classify and detect cancer, aiming to diagnose it earlier. It discusses collecting lung cancer data from a repository and filtering/classifying it using methods like J48, principal component analysis, and comparing results to find the best detection method.
IJCER (www.ijceronline.com) International Journal of Computational Engineerin... (ijceronline)
The document describes a proposed method for designing a classifier to detect diabetes using neural networks and the fuzzy k-nearest neighbor algorithm. The method would train a neural network using the fuzzy k-NN algorithm on a server and use it to classify diabetes on a mobile device for convenience. Analysis in WEKA showed the method achieved around 72-74% accuracy on 10-fold cross validation of a diabetes dataset with attributes removed. The proposed method is expected to perform comparably to support vector machines with less complexity.
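The fuzzy k-NN step in the summary can be sketched as follows. This is a minimal pure-Python version of the standard fuzzy k-NN rule with crisp training memberships; the function name, the toy data and the fuzzifier m = 2 are assumptions, not the paper's configuration:

```python
import math

def fuzzy_knn(train, query, k=3, m=2):
    """Classify `query` with the fuzzy k-NN rule: each of the k nearest
    neighbours votes for its class with weight 1 / d^(2/(m-1)); the class
    with the largest total membership wins.
    `train` is a list of (feature_vector, label) pairs."""
    dists = sorted(
        (math.dist(x, query), label) for x, label in train
    )[:k]
    memberships = {}
    for d, label in dists:
        w = 1.0 / max(d, 1e-12) ** (2 / (m - 1))  # guard against d == 0
        memberships[label] = memberships.get(label, 0.0) + w
    total = sum(memberships.values())
    scores = {c: u / total for c, u in memberships.items()}
    return max(scores, key=scores.get), scores

# Hypothetical 2-D feature vectors standing in for the diabetes attributes
train = [([1.0, 1.0], "healthy"), ([1.2, 0.9], "healthy"),
         ([4.0, 4.2], "diabetic"), ([4.1, 3.9], "diabetic")]
label, scores = fuzzy_knn(train, [1.1, 1.0])
print(label)  # -> healthy
```

The normalized scores double as soft class memberships, which is what makes the rule attractive for borderline clinical cases.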
IRJET- Human Heart Disease Prediction using Ensemble Learning and Particle Sw... (IRJET Journal)
The document discusses using ensemble learning and particle swarm optimization to predict heart disease. It aims to select the machine learning algorithm from among AdaBoost, gradient descent, random forest, decision tree and Gaussian naive Bayes that achieves the highest accuracy. Particle swarm optimization is used to select important predictive features from the dataset. The proposed approach uses AdaBoost and particle swarm optimization to achieve an accuracy of 84.88% in predicting heart disease, with an error rate of 4%.
CLASSIFICATION OF ALZHEIMER USING fMRI DATA AND BRAIN NETWORK (cscpconf)
Since the mid-1990s, functional connectivity study using fMRI (fcMRI) has drawn increasing attention from neuroscientists and computer scientists, since it opens a new window to explore the functional network of the human brain at relatively high resolution. The BOLD technique provides an almost accurate state of the brain. Past research has shown that neurological diseases damage brain network interactions, protein-protein interactions and gene-gene interactions. A number of neurological research papers also analyse the relationships among the damaged parts. By computational methods, especially machine learning techniques, we can produce such classifications. In this paper we used the OASIS fMRI dataset of patients affected by Alzheimer's disease together with a normal patients' dataset. After properly processing the fMRI data, we use the processed data to build classifier models using SVM (Support Vector Machine), KNN (K-nearest neighbour) and Naïve Bayes. We also compare the accuracy of our proposed method with existing methods. In future, we will try other combinations of methods for better accuracy.
Life is the most precious gift to man, and safeguarding this gift is of the utmost importance. With an increasing number of diseases and fast-paced lives, people have less time to look after themselves and their family members, or even to visit the doctor for regular check-ups. Our E-Health patient monitoring system can remotely monitor the health of patients and notify the doctor of critical conditions without human intervention. Existing E-Health systems include the telemedicine network for Francophone African countries (RAFT) and LOBIN. RAFT is implemented in Java and uses asymmetric public-private key encryption; however, it is expensive, does not support mobility and is not a context-aware system. LOBIN is a hardware/software platform for locating and monitoring a set of physiological and context parameters of several patients within hospital facilities. Although it is a context-aware system, it cannot handle high and concurrent data traffic loads.
To overcome the above flaws, our proposed system puts forward the idea of patient monitoring using various knowledge-based techniques such as K-means clustering, the Gaussian kernel function, ANNs and a fuzzy inference engine. In our project we intend to perform remote patient health monitoring in which three or four machines send various sensed health parameters to a centralised server, which clusters the sensed health parameters based on the criticality of the health condition. Then, depending on the clusters formed and on comparison with threshold values, appropriate reports are generated and sent to the doctors and caretakers.
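The clustering step described above can be sketched with plain Lloyd's-algorithm k-means on a single sensed parameter. This is a toy sketch; the heart-rate readings and the criticality interpretation are invented for illustration, not taken from the project:

```python
def kmeans_1d(values, centroids, iters=20):
    """Lloyd's algorithm on scalar readings: assign each value to the
    nearest centroid, then move each centroid to the mean of its cluster."""
    for _ in range(iters):
        clusters = {i: [] for i in range(len(centroids))}
        for v in values:
            i = min(range(len(centroids)), key=lambda i: abs(v - centroids[i]))
            clusters[i].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in clusters.items()]
    return centroids, clusters

# Hypothetical heart-rate readings: a "normal" group and a "critical" group
readings = [72, 75, 70, 74, 165, 170, 158]
centroids, clusters = kmeans_1d(readings, centroids=[60.0, 180.0])
print(sorted(round(c) for c in centroids))  # -> [73, 164]
```

In a real deployment each cluster centroid would then be compared against clinical thresholds to decide which group of patients triggers an alert.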
TOP 1 CITED PAPER - International Journal of Artificial Intelligence & Appli... (gerogepatton)
The cuckoo search algorithm is a recently developed meta-heuristic optimization algorithm, which is suitable for solving optimization problems. To enhance the accuracy and convergence rate of this algorithm, an improved cuckoo search algorithm is proposed in this paper. Normally, the parameters of the cuckoo search are kept constant. This may lead to decreasing the efficiency of the algorithm. To cope with this issue, a proper strategy for tuning the cuckoo search parameters is presented. Then, it is employed for training feedforward neural networks for two benchmark classification problems. Finally, the performance of the proposed algorithm is compared with that of the standard cuckoo search. Simulation results demonstrate the effectiveness of the proposed algorithm.
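The cuckoo search loop the abstract refers to follows a well-known pattern: Lévy-flight perturbations plus abandonment of a fraction pa of nests. A generic sketch on a toy sphere function (not the paper's tuned variant; n_nests, alpha and iters are assumptions) might look like:

```python
import math
import random

def levy_step(beta=1.5):
    # Mantegna's algorithm: draw a heavy-tailed Levy-distributed step length
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return random.gauss(0, sigma) / abs(random.gauss(0, 1)) ** (1 / beta)

def cuckoo_search(f, dim, lo, hi, n_nests=15, pa=0.25, alpha=0.1, iters=300):
    clip = lambda v: min(hi, max(lo, v))
    nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    fit = [f(x) for x in nests]
    for _ in range(iters):
        # 1) A cuckoo lays an egg: perturb a random nest by a Levy flight
        i = random.randrange(n_nests)
        new = [clip(x + alpha * levy_step()) for x in nests[i]]
        fn = f(new)
        j = random.randrange(n_nests)
        if fn < fit[j]:                       # replace a random nest if better
            nests[j], fit[j] = new, fn
        # 2) Abandon a fraction pa of nests (never the current best)
        best = min(range(n_nests), key=fit.__getitem__)
        for k in range(n_nests):
            if k != best and random.random() < pa:
                nests[k] = [random.uniform(lo, hi) for _ in range(dim)]
                fit[k] = f(nests[k])
    best = min(range(n_nests), key=fit.__getitem__)
    return nests[best], fit[best]

random.seed(0)
sphere = lambda x: sum(v * v for v in x)  # minimum 0 at the origin
best_x, best_f = cuckoo_search(sphere, dim=2, lo=-5.0, hi=5.0)
```

The paper's contribution is precisely to stop keeping pa and alpha constant; in this sketch they are fixed, which is the baseline behaviour the authors improve upon.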
An efficient feature selection algorithm for health care data analysis (journalBEEI)
Diabetes is a silent killer that will slowly kill a person if it goes undetected. The existing systems for checking whether a person has diabetes, which use the F-score method and K-means clustering, are not 100% accurate, and anything that isn't 100% accurate is problematic in the medical field, as it could cost the lives of many people. Our proposed system aims to use some of the best features of the existing algorithms to predict diabetes; by combining these features, this research work turns them into a novel algorithm intended to be 100% accurate in its prediction. With the surge in technological advancements, we can use data mining to predict when a person would be diagnosed with diabetes. Specifically, we analyze the best features of the chi-square algorithm and the advanced clustering algorithm (ACA). This research work is done using the Pima Indian Diabetes dataset provided by the National Institute of Diabetes and Digestive and Kidney Diseases. Using classification theorems and methods, we consider different factors like age, BMI and blood pressure, together with the importance given to these attributes overall, single these attributes out, and use them for the prediction of diabetes.
Performance analysis of data mining algorithms with neural network (IAEME Publication)
The document summarizes research combining neural networks with three data mining algorithms (CHARM, Top K Rules, and CM-SPAM) to improve data mining results. It first provides background on data mining and classification problems. It then discusses artificial neural networks and how they are trained. Next, it outlines how the three algorithms (CHARM, Top K Rules, CM-SPAM) can be integrated with neural networks for association rule mining and sequential pattern mining. The overall goal is to leverage neural networks to generate more accurate and useful patterns from large datasets.
Delineation of techniques to implement on the enhanced proposed model using d... (ijdms)
In the post-genomic era, with the advent of new technologies, a huge amount of complex molecular data is generated at high throughput. Managing this biological data to discover new knowledge is definitely a challenging task due to the complexity and heterogeneity of the data. Issues like managing noisy and incomplete data need to be dealt with. The use of data mining in the biological domain has already been markedly successful. Discovering new knowledge from biological data is a major challenge for data mining techniques. The novelty of the proposed model is its combined use of intelligent techniques to classify protein sequences faster and more efficiently. Using FFT, a fuzzy classifier, a string weighting algorithm, a gram encoding method, a neural network model and a rough set classifier in a single model, each in an appropriate place, can enhance the quality of the classification system. Thus the primary challenge is to identify and classify large protein sequences in a fast, easy yet intelligent way so as to decrease the time and space complexity.
Early Identification of Diseases Based on Responsible Attribute using Data Mi... (IRJET Journal)
This document describes a proposed method for early identification of diseases using data mining and classification techniques. It begins with an introduction to classification and discusses how it is commonly used in healthcare for tasks like predicting patient risk levels. It then reviews related literature applying classification methods to diseases like heart disease and diabetes. The document outlines the problem of selecting the best classification technique for a given healthcare dataset. It proposes an architecture and method for disease prediction that assigns recommended values to attributes and classifies unknown data based on calculating totals. The method is experimentally analyzed using a heart disease dataset, and its accuracy is compared to Bayesian classification. In conclusion, the proposed method seeks to reduce attributes and complexity while accurately classifying patient data for early disease identification.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Correlation of artificial neural network classification and nfrs attribute fi... (eSAT Journals)
Abstract
About 5 to 15% of women of reproductive age face Polycystic Ovarian Syndrome (PCOS), a multifaceted, heterogeneous and complex disease. Long-term consequences such as endometrial hyperplasia, type 2 diabetes mellitus and coronary disease are caused by polycystic ovaries, chronic anovulation and hyperandrogenism; insulin resistance together with hypertension, abdominal obesity, dyslipidemia and hyperinsulinemia is known as Metabolic Syndrome (frequent metabolic traits). The above cause the common condition of anovulatory infertility. Computer-based information along with advanced data mining techniques is used to obtain appropriate results. Classification is a classic data mining task, with roots in machine learning. Naïve Bayes, Artificial Neural Networks, Decision Trees and Support Vector Machines are classification methods in data mining. Feature selection methods involve generation of subsets, evaluation of each subset, criteria for stopping the search, and validation procedures. The characteristics of the search method used are important with respect to the time efficiency of feature selection. PCA (Principal Component Analysis), information gain subset evaluation, fuzzy rough set evaluation and Correlation-based Feature Selection (CFS) are some of the feature selection techniques; greedy best-first search, ranker, etc. are search algorithms used in feature selection. In this paper, a new algorithm based on fuzzy neural subset evaluation and an artificial neural network is proposed, which avoids performing classification and feature selection as separate tasks. This algorithm combines neural fuzzy rough subset evaluation and an artificial neural network for better performance than doing the tasks separately.
Keywords: ANN, SVM, PCA, CFS
This document summarizes research on using knowledge-based systems and soft computing techniques in neuroscience. It provides an abstract and literature review on several expert systems and computational models that have been developed to diagnose neurological disorders like strokes and epilepsy. The literature review discusses systems that use fuzzy logic, neural networks, and case-based reasoning to classify symptoms and arrive at diagnoses. The goal of the research discussed is to develop innovative IT solutions to help doctors in rural areas diagnose and treat neurological patients.
This document discusses numerical methods for solving differential equations. It introduces direction fields, which provide a graphical approach to studying solutions, and Euler's method, which provides a numerical approach. Euler's method works by approximating the slope of the tangent line at each step using small step sizes to iteratively calculate successive approximations of the solution.
This document provides an overview of perturbation techniques for analyzing heat transfer problems. It discusses several objectives: to demonstrate the usefulness of perturbation techniques; to assist unfamiliar readers in understanding the techniques; and to show how the techniques are applied to specific problems. The document then reviews various perturbation methods - regular perturbation method, method of strained coordinates, method of matched asymptotic expansions, and method of extended perturbation series. It also discusses limitations and advantages of perturbation methods.
This document discusses Euler's method for numerically approximating solutions to first-order initial value problems. It begins by introducing Euler's method and its use of tangent lines to approximate the solution curve. Examples are provided to illustrate the application of the method and analyze errors compared to exact solutions. The discussion notes that Euler's method relies on a sequence of tangent lines to different solution curves, so accuracy depends on whether the family of solutions is converging or diverging. It emphasizes the importance of error bounds when exact solutions are unknown.
IRJET- Analysis of Autism Spectrum Disorder using Deep Learning and the Abide...IRJET Journal
The document discusses analyzing autism spectrum disorder using deep learning and the ABIDE dataset. It summarizes previous literature on identifying ASD from brain imaging data using machine learning algorithms. Specifically, it examines using the ABIDE dataset, which contains brain imaging data from over 1,000 individuals with ASD and controls from multiple sites. Deep learning methods were able to reliably classify ASD versus controls from the multi-site dataset with 70% accuracy, identifying patterns of hypo-connectivity between anterior and posterior brain regions in ASD individuals. The areas of the brain that most contributed to differentiating ASD from controls according to the deep learning model are also identified.
Evaluation of Default Mode Network In Mild Cognitive Impairment and Alzheimer...CSCJournals
Although progressive functional brain network disorder is one indication of Alzheimer's disease, current research on aging and dementia focuses on distinguishing the cognitive changes of normal aging from those of Alzheimer's disease (AD); these intermediate changes are known as Mild Cognitive Impairment (MCI). The default mode network (DMN) is a network of interacting brain regions whose activity is highly correlated internally and distinct from other networks in the brain; it is active during passive rest and consists of a set of brain areas that are tightly functionally connected. Anatomically, the DMN includes the posterior cingulate cortex (PCC), the dorsal and ventral medial prefrontal cortex, the lateral parietal cortex, and the medial temporal lobes; it involves multiple anatomical networks that converge on cortical hubs such as the PCC, ventral medial prefrontal, and inferior parietal cortices. The aim of this study was to evaluate default mode network functional connectivity in MCI patients. While no treatments are currently recommended for MCI, it is becoming a very important subject for researchers and deserves more recognition and further study, in order to increase the ability to recognize the earlier symptoms of Alzheimer's disease.
A novel framework for efficient identification of brain cancer region from br...IJECEIAES
Diagnosis of brain cancer using existing imaging techniques, e.g., Magnetic Resonance Imaging (MRI), is beset by various challenges. At present, there are very few significant research models focusing on introducing novel and unique solutions to such detection problems. Moreover, existing techniques are found to have lower accuracy compared to other detection schemes. Therefore, this paper presents a framework that introduces a series of simple and computationally cost-effective techniques that assist in raising the accuracy level to a much higher degree. The proposed framework takes the input image and subjects it to a non-conventional segmentation mechanism, followed by optimizing the performance using a directed acyclic graph, a Bayesian network, and a neural network. The study outcome of the proposed system shows a significantly higher degree of accuracy in detection performance compared to existing approaches.
MOST READ ARTICLES IN ARTIFICIAL INTELLIGENCE - International Journal of Arti...gerogepatton
The International Journal of Artificial Intelligence & Applications (IJAIA) is a bi monthly open access peer-reviewed journal that publishes articles which contribute new results in all areas of the Artificial Intelligence & Applications (IJAIA). It is an international journal intended for professionals and researchers in all fields of AI for researchers, programmers, and software and hardware manufacturers. The journal also aims to publish new attempts in the form of special issues on emerging areas in Artificial Intelligence and applications.
During seizures, different types of communication between different parts of the brain are characterized by many state-of-the-art connectivity measures. We propose to employ a set of undirected features (the spectral matrix, the inverse of the spectral matrix, coherence, partial coherence, and the phase-locking value) and directed features (directed coherence and partial directed coherence) to detect seizures using a deep neural network. Taking our data as a sequence of ten sub-windows, an optimal deep sequence learning architecture using attention, CNN, BiLSTM, and fully connected neural networks is designed to output the detection label and the relevance of the features. The relevance is computed using the weights of the model and the activation values of the receptive fields at a particular layer. The best model achieved 97.03% accuracy on a balanced MIT-BIH data subset. Finally, an analysis of the relevance of the features is reported.
Integrated Modelling Approach for Enhancing Brain MRI with Flexible Pre-Proce...IJECEIAES
The assurance of the information quality of the input medical image is a critical step in offering a highly precise and reliable diagnosis of a patient's clinical condition. The importance of such assurance becomes greater when dealing with an important organ like the brain. Magnetic Resonance Imaging (MRI) is one of the most trusted mediums to investigate the brain. Looking into the existing trends of investigating brain MRI, it was observed that researchers are more inclined to investigate advanced problems, e.g. segmentation, localization, classification, etc., on image datasets. Less work has been carried out on image pre-processing, which potentially affects the later stage of diagnosis. Therefore, this paper introduces a novel model of an integrated image enhancement algorithm that is capable of solving different and discrete problems of image pre-processing, offering a highly improved and enhanced brain MRI. The comparative outcomes exhibit the advantage of its simplistic implementation strategy.
IRJET- Effect of Principal Component Analysis in Lung Cancer Detection us...IRJET Journal
This document discusses using machine learning techniques to detect lung cancer from data more accurately and quickly. It summarizes that lung cancer is a leading cause of cancer death worldwide. Current diagnosis methods like CT scans can detect small lung lesions but take time. The document proposes using machine learning algorithms on lung cancer data to classify and detect cancer, aiming to diagnose it earlier. It discusses collecting lung cancer data from a repository and filtering/classifying it using methods like J48, principal component analysis, and comparing results to find the best detection method.
IJCER (www.ijceronline.com) International Journal of computational Engineerin...ijceronline
The document describes a proposed method for designing a classifier to detect diabetes using neural networks and the fuzzy k-nearest neighbor algorithm. The method would train a neural network using the fuzzy k-NN algorithm on a server and use it to classify diabetes on a mobile device for convenience. Analysis in WEKA showed the method achieved around 72-74% accuracy on 10-fold cross validation of a diabetes dataset with attributes removed. The proposed method is expected to perform comparably to support vector machines with less complexity.
IRJET- Human Heart Disease Prediction using Ensemble Learning and Particle Sw...IRJET Journal
The document discusses using ensemble learning and particle swarm optimization to predict heart disease. It aims to select the machine learning algorithm from among AdaBoost, gradient descent, random forest, decision tree and Gaussian naive Bayes that achieves the highest accuracy. Particle swarm optimization is used to select important predictive features from the dataset. The proposed approach uses AdaBoost and particle swarm optimization to achieve an accuracy of 84.88% in predicting heart disease, with an error rate of 4%.
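The role PSO plays in the approach above can be illustrated with a minimal, generic particle swarm sketch. This is not the paper's model: it minimizes a toy objective rather than selecting features, and the swarm size, inertia weight and acceleration coefficients are illustrative assumptions.

```python
import random

def pso(objective, dim=2, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """Minimize `objective` with a basic particle swarm."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Random initial positions, zero initial velocities.
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity update: inertia + cognitive + social terms.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = objective(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Toy objective: sphere function, minimum 0 at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x))
```

In a feature-selection setting, the positions would instead encode feature subsets and the objective would be a classifier's validation accuracy; the update rule itself is unchanged.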
CLASSIFICATION OF ALZHEIMER USING fMRI DATA AND BRAIN NETWORKcscpconf
Since the mid-1990s, functional connectivity study using fMRI (fcMRI) has drawn increasing attention from neuroscientists and computer scientists, since it opens a new window to explore the functional network of the human brain with relatively high resolution. The BOLD technique provides an almost accurate picture of brain state. Past research proves that neurological diseases damage brain network interaction, protein-protein interaction and gene-gene interaction. A number of neurological research papers also analyse the relationships among the damaged parts. Using computational methods, especially machine learning techniques, we can perform such classifications. In this paper we used the OASIS fMRI dataset, containing both patients affected with Alzheimer's disease and normal patients. After properly processing the fMRI data, we use the processed data to build classifier models using SVM (Support Vector Machine), KNN (K-nearest neighbour) and Naïve Bayes. We also compare the accuracy of our proposed method with existing methods. In future, we will try other combinations of methods for better accuracy.
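One of the classifiers named above, k-nearest neighbour, is simple enough to sketch in full. The 2-D points and labels below are invented stand-ins for illustration, not OASIS fMRI features.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points.
    `train` is a list of (features, label) pairs."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D data standing in for extracted features (illustrative only).
train = [
    ((1.0, 1.2), "AD"), ((0.8, 1.0), "AD"), ((1.1, 0.9), "AD"),
    ((3.0, 3.1), "normal"), ((2.8, 3.3), "normal"), ((3.2, 2.9), "normal"),
]
label = knn_predict(train, (0.9, 1.1), k=3)   # falls in the "AD" cluster
```

With real fMRI data the tuples would be high-dimensional feature vectors extracted after preprocessing; the voting logic is identical.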
Life is the most precious gift to man, and safeguarding this gift is of utmost importance. With the increasing number of diseases and fast-paced lives, people have less time to look after themselves and their family members, or even to visit the doctor for regular check-ups. Our E-Health patient monitoring system can remotely monitor the health of patients and notify the doctor of critical conditions without human intervention. Some existing E-Health systems include the telemedicine network for Francophone African countries (RAFT) and LOBIN. RAFT is implemented in Java and uses asymmetric public-private key encryption; however, it is expensive, does not support mobility and is not a context-aware system. LOBIN is a hardware/software platform to locate and monitor a set of physiological parameters and context parameters of several patients within hospital facilities. Although it is a context-aware system, it cannot handle high and concurrent data traffic loads.

To overcome the above flaws, our proposed system puts forward an idea of patient monitoring using various knowledge-based techniques such as K-means clustering, the Gaussian kernel function, ANN and a fuzzy inference engine. In our project we intend to do remote patient health monitoring using three to four machines that send various sensed health parameters to a centralised server, which makes clusters of the sensed health parameters based on the criticality of the health condition. Then, depending upon the clusters formed and on comparison with threshold values, appropriate reports will be generated and sent to the doctors and caretakers.
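The clustering step described above can be sketched with a tiny 1-D k-means over hypothetical heart-rate readings. The readings, cluster count and "normal vs critical" interpretation are assumptions for illustration, not the project's actual parameters.

```python
def kmeans_1d(values, k=2, iters=50):
    """Basic 1-D k-means: returns final centroids and cluster assignments."""
    # Initialize centroids spread across the data range.
    lo, hi = min(values), max(values)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    assign = [0] * len(values)
    for _ in range(iters):
        # Assignment step: each value goes to its nearest centroid.
        assign = [min(range(k), key=lambda c: abs(v - centroids[c]))
                  for v in values]
        # Update step: centroid moves to the mean of its members.
        for c in range(k):
            members = [v for v, a in zip(values, assign) if a == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return centroids, assign

# Hypothetical heart-rate readings: normal around 70-80, critical above 120.
rates = [72, 75, 78, 70, 122, 130, 125]
centroids, assign = kmeans_1d(rates, k=2)
```

A server could then compare each cluster's centroid against threshold values to decide which cluster represents a critical condition and trigger the report generation.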
TOP 1 CITED PAPER - International Journal of Artificial Intelligence & Appli...gerogepatton
The cuckoo search algorithm is a recently developed meta-heuristic optimization algorithm, which is suitable for solving optimization problems. To enhance the accuracy and convergence rate of this algorithm, an improved cuckoo search algorithm is proposed in this paper. Normally, the parameters of the cuckoo search are kept constant. This may lead to decreasing the efficiency of the algorithm. To cope with this issue, a proper strategy for tuning the cuckoo search parameters is presented. Then, it is employed for training feedforward neural networks for two benchmark classification problems. Finally, the performance of the proposed algorithm is compared with that of the standard cuckoo search. Simulation results demonstrate the effectiveness of the proposed algorithm.
An efficient feature selection algorithm for health care data analysisjournalBEEI
Diabetes is a silent killer, which will slowly kill a person if it goes undetected. The existing systems, which use the F-score method and K-means clustering to check whether a person has diabetes or not, are not 100% accurate, and anything which isn't 100% is not acceptable in the medical field, as it could cost the lives of many people. Our proposed system aims at using some of the best features of the existing algorithms to predict diabetes; this research work combines these features into a novel algorithm that aims to be 100% accurate in its prediction. With the surge in technological advancements, we can use data mining to predict when a person will be diagnosed with diabetes. Specifically, we analyze the best features of the chi-square algorithm and the advanced clustering algorithm (ACA). This research work is done using the Pima Indian Diabetes dataset provided by the National Institute of Diabetes and Digestive and Kidney Diseases. Using classification theorems and methods, we consider different factors like age, BMI and blood pressure and the overall importance given to these attributes, single these attributes out, and use them for the prediction of diabetes.
Performance analysis of data mining algorithms with neural networkIAEME Publication
The document summarizes research combining neural networks with three data mining algorithms (CHARM, Top K Rules, and CM-SPAM) to improve data mining results. It first provides background on data mining and classification problems. It then discusses artificial neural networks and how they are trained. Next, it outlines how the three algorithms (CHARM, Top K Rules, CM-SPAM) can be integrated with neural networks for association rule mining and sequential pattern mining. The overall goal is to leverage neural networks to generate more accurate and useful patterns from large datasets.
Delineation of techniques to implement on the enhanced proposed model using d...ijdms
In the post-genomic era, with the advent of new technologies, a huge amount of complex molecular data is generated at high throughput. The management of this biological data is definitely a challenging task for discovering new knowledge, due to the complexity and heterogeneity of the data. Issues like managing noisy and incomplete data need to be dealt with. The use of data mining in the biological domain has seen notable success. Discovering new knowledge from biological data is a major challenge in data mining. The novelty of the proposed model is its combined use of intelligent techniques to classify protein sequences faster and more efficiently. The use of FFT, a fuzzy classifier, a string weighted algorithm, the gram encoding method, a neural network model and a rough set classifier in a single model, each in an appropriate place, can enhance the quality of the classification system. Thus the primary challenge is to identify and classify large protein sequences in a very fast and easy but intelligent way, to decrease the time complexity and space complexity.
Early Identification of Diseases Based on Responsible Attribute using Data Mi...IRJET Journal
This document describes a proposed method for early identification of diseases using data mining and classification techniques. It begins with an introduction to classification and discusses how it is commonly used in healthcare for tasks like predicting patient risk levels. It then reviews related literature applying classification methods to diseases like heart disease and diabetes. The document outlines the problem of selecting the best classification technique for a given healthcare dataset. It proposes an architecture and method for disease prediction that assigns recommended values to attributes and classifies unknown data based on calculating totals. The method is experimentally analyzed using a heart disease dataset, and its accuracy is compared to Bayesian classification. In conclusion, the proposed method seeks to reduce attributes and complexity while accurately classifying patient data for early disease identification.
On finite differences, interpolation methods and power series expansions in i...PlusOrMinusZero
The document discusses concepts in numerical analysis developed by ancient Indian mathematicians including Aryabhata, Brahmagupta, Bhaskara I, and Madhava. It describes Aryabhata's difference table for sines, which was actually the first table of differences rather than values. It explains Brahmagupta's second-order interpolation formula, making him the first to develop such an interpolation method. It also outlines Bhaskara I's rational polynomial approximation to calculate sines and Madhava's work with power series expansions.
Euler's method is a numerical approach for approximating solutions to differential equations. It works by taking an initial condition and using the tangent line at that point to take a small step to a new point. This process is repeated, using the new point as the initial condition. The smaller the step size, the more accurate the approximation will be. An example walks through applying Euler's method to the differential equation y' = x + y with an initial condition of y(0) = 2 using 10 steps of size 0.1.
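The worked example in this summary (y′ = x + y, y(0) = 2, ten steps of size 0.1) is concrete enough to reproduce directly; a minimal sketch:

```python
def euler(f, x0, y0, h, n):
    """Advance y' = f(x, y) from (x0, y0) by n Euler steps of size h."""
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)   # follow the tangent line for one step
        x += h
    return x, y

# y' = x + y, y(0) = 2, 10 steps of size 0.1 -> approximation of y(1).
x, y = euler(lambda x, y: x + y, 0.0, 2.0, 0.1, 10)
# Euler gives y(1) ≈ 5.7812; the exact solution y = 3e^x - x - 1
# gives y(1) = 3e - 2 ≈ 6.1548, so Euler underestimates here because
# the solution curves upward away from each tangent line.
```

Halving the step size (20 steps of 0.05) moves the approximation noticeably closer to 6.1548, illustrating the claim that smaller steps give greater accuracy.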
Euler's Method is an algorithm for numerically solving differential equations by approximating their solutions using small discrete time steps. It involves choosing an initial point, calculating the slope at that point, using the slope to take a small step to a new point, recalculating the slope at the new point, and repeating. The document then transitions to providing instructions for implementing Euler's Method on a calculator.
This document describes GPU-Euler, a method for genome sequence assembly using general-purpose graphics processing units (GPGPUs). It motivates the work by discussing challenges in genome assembly due to large data sizes and previous techniques. The method section explains parallel Eulerian assembly on a de Bruijn graph using GPUs and analyzes time complexity. Results from testing on real datasets are also presented.
1. Process scheduling involves managing processes in three states: ready, running, and waiting. Processes transition between these states due to actions of the process or external events.
2. There are several scheduling policies for selecting the next process to run, including first-come-first-served (FCFS) and shortest-job-first (SJF).
3. FCFS selects processes in the order they arrive in the ready queue. SJF selects the process with the shortest estimated completion time to minimize average response time but requires knowing process runtimes.
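The trade-off between the two policies can be seen by computing the average waiting time for a hypothetical batch of jobs that all arrive at time 0 (the burst times are invented for illustration):

```python
def avg_waiting_time(bursts):
    """Average waiting time when jobs run back-to-back in the given order."""
    waiting, elapsed = 0, 0
    for b in bursts:
        waiting += elapsed   # this job waited for everything before it
        elapsed += b
    return waiting / len(bursts)

bursts = [6, 8, 7, 3]                   # arrival order, all arriving at time 0
fcfs = avg_waiting_time(bursts)         # FCFS: run in arrival order
sjf = avg_waiting_time(sorted(bursts))  # SJF: shortest burst first
# FCFS averages 10.25 time units of waiting; SJF averages 7.0.
```

Running the short job first removes its burst time from every longer job's wait, which is why SJF minimizes average waiting time when all jobs are available at once, at the cost of needing runtime estimates.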
A person sent an email at 8:49 AM and received a response at 9:41 AM agreeing to meet at 10:42 AM. They then sent another email at 9:42 AM with an attachment and received a response at 10:34 AM acknowledging receipt of the attachment. A third email was also sent to schedule another meeting for an unspecified time.
This document discusses Euler's method, a numerical technique for solving differential equations, and provides an example of using the method to solve an equation on the interval [0,1] over 4 steps of size 0.25. It also prompts the reader to repeat the process using 10 steps of size 0.1.
This document presents a discontinuous finite element method for solving the compressible Euler equations. It discusses why a discontinuous finite element approach is useful, provides background on the method, and describes the weak formulation, slope limitation technique, Riemann solver, and implicit time integration used. Numerical experiments applying the method to test cases like a shock reflection problem and hypersonic flow over a double ellipse are presented and show the method can accurately capture shocks and flows over complex geometries.
Introduction to Numerical Methods for Differential Equationsmatthew_henderson
The document introduces the Euler method for numerically approximating solutions to initial value problems (IVPs). It defines IVPs and shows an example. The Euler method uses the derivative approximation y(x+h) ≈ y(x) + hf(x,y) to march forward in small steps h to construct a table of approximate y-values. For the example IVP, the Euler method produces values that begin to resemble the exact solution. While not exact, the errors are small. The method is derived from the definition of the derivative and works because it approximates the tangent line at each step.
This document summarizes a numerical analysis project using the Euler method to simulate projectile motion in MATLAB. It contains sections on the Euler method, applying it to physical systems like projectile motion, implementing it theoretically and in MATLAB, and presenting the output and conclusions. The group members are listed and the document contains the equations of motion for a projectile and details on setting up and running the simulation in MATLAB.
1. The document discusses ordinary differential equations and provides definitions and examples of separable, homogeneous, exact, linear, and Bernoulli equations.
2. Methods for solving first order differential equations are presented, including finding acceptable solutions in terms of p, y, or x. Lagrange's and Clairaut's equations are also discussed.
3. Higher order and degree differential equations can be solved using methods like Lagrange's equation, Clairaut's equation, or solving the linear homogeneous and non-homogeneous forms with constant coefficients.
This document introduces differential equations, including definitions of ordinary and partial differential equations. Ordinary differential equations relate a function to one independent variable and its derivatives, while partial differential equations relate a function of two or more variables to its partial derivatives. The document discusses the order and degree of differential equations, and explains that the order is the highest derivative and the degree is the highest order of the derivative. It also defines the solution to a differential equation, initial value problems, and the difference between general and particular solutions.
The document discusses iterative methods for solving systems of linear equations, including the Jacobi, Gauss-Seidel, and Gauss-Seidel relaxation methods. The Jacobi method works by rewriting the system in a form where the diagonal entries are isolated and computing successive approximations. The Gauss-Seidel method similarly computes approximations but uses the most recent values available at each step. Relaxation improves the Gauss-Seidel method's convergence by taking a weighted average of the current and previous iterations' results. Examples demonstrate applying the different methods to compute solutions.
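The Jacobi and Gauss-Seidel iterations described above can be sketched compactly; the 2x2 diagonally dominant system used here is an illustrative choice, not one of the document's examples.

```python
def jacobi(A, b, x0, iters=50):
    """Jacobi iteration: every component uses only the previous iterate."""
    x = x0[:]
    n = len(b)
    for _ in range(iters):
        # The whole new vector is built from the old x before reassignment.
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

def gauss_seidel(A, b, x0, iters=50):
    """Gauss-Seidel: each component uses the newest values already computed."""
    x = x0[:]
    n = len(b)
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]   # updated in place, reused immediately
    return x

# Diagonally dominant system: 4x + y = 1, x + 3y = 2
A, b = [[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0]
xj = jacobi(A, b, [0.0, 0.0])
xg = gauss_seidel(A, b, [0.0, 0.0])
# Both converge to x = 1/11 ≈ 0.0909, y = 7/11 ≈ 0.6364.
```

The only structural difference is whether updated components are visible within the same sweep, which is exactly the distinction the summary draws between the two methods.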
The document discusses numerical methods and provides examples of how to implement them in Smalltalk. It covers frameworks for iterative processes, Newton's method for finding zeros, eigenvalue and eigenvector computation using the Jacobi method, and cluster analysis. Code examples and class diagrams are provided.
This document discusses iterative methods for solving systems of equations. It introduces the Jacobi iteration method and the Successive Over-Relaxation (SOR) method. SOR can accelerate the convergence compared to Jacobi by introducing an optimal relaxation parameter. Pseudocode is provided to implement SOR to iteratively solve a system of equations until the solution converges within a specified tolerance.
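The SOR refinement mentioned above modifies Gauss-Seidel by blending each new component with its old value via a relaxation parameter ω, stopping once the update falls within a tolerance; a sketch (the ω value and the 2x2 system are illustrative assumptions):

```python
def sor(A, b, x0, omega=1.1, tol=1e-10, max_iters=1000):
    """Successive Over-Relaxation with a convergence tolerance."""
    x = x0[:]
    n = len(b)
    for _ in range(max_iters):
        delta = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            gs = (b[i] - s) / A[i][i]              # plain Gauss-Seidel value
            new = (1 - omega) * x[i] + omega * gs  # relaxed update
            delta = max(delta, abs(new - x[i]))
            x[i] = new
        if delta < tol:   # largest component change is within tolerance
            break
    return x

# Diagonally dominant system: 4x + y = 1, x + 3y = 2
x = sor([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], [0.0, 0.0])
# Converges to x ≈ 1/11, y ≈ 7/11.
```

With ω = 1 this reduces exactly to Gauss-Seidel; values of ω between 1 and 2 over-relax and, for a well-chosen ω, accelerate convergence as the summary states.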
Perturbation theory allows approximations of quantum systems where exact solutions cannot be easily determined. It involves splitting the Hamiltonian into known and perturbative terms. For the helium atom, the zero-order approximation treats it as two independent hydrogen atoms, yielding the wrong energy. The first-order approximation includes repulsion between electrons, giving a better but still incorrect energy. Variational theory provides an energy always greater than or equal to the actual energy.
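The helium numbers behind this summary can be written out explicitly. These are standard textbook values in hartree atomic units, reconstructed here rather than taken from the document:

```latex
H = \underbrace{-\tfrac{1}{2}\nabla_1^2 - \tfrac{1}{2}\nabla_2^2
      - \frac{Z}{r_1} - \frac{Z}{r_2}}_{H_0 \text{ (two independent H-like atoms)}}
  \;+\; \underbrace{\frac{1}{r_{12}}}_{H' \text{ (electron repulsion)}}
% Zero order (Z = 2):  E^{(0)} = -Z^2 = -4\,E_h \approx -108.8\ \text{eV}
% First order:         E^{(1)} = \langle 1/r_{12} \rangle = \tfrac{5}{8}Z = 1.25\,E_h
% E^{(0)} + E^{(1)} = -2.75\,E_h \approx -74.8\ \text{eV}
% Experimental ground state: \approx -2.9037\,E_h \approx -79.0\ \text{eV}
```

The zero-order result overbinds badly because it ignores the electron repulsion entirely; the first-order correction recovers most of the error, and a variational trial energy would, as the summary notes, always lie at or above the true value.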
Hybrid deep learning model using recurrent neural network and gated recurrent...IJECEIAES
This paper proposes a new hybrid deep learning model for heart disease prediction using a recurrent neural network (RNN) combined with multiple gated recurrent units (GRUs), long short-term memory (LSTM) and the Adam optimizer. The proposed model achieved an outstanding accuracy of 98.6876%, the highest among existing RNN models. The model was developed in Python 3.7 by integrating the RNN with multiple GRUs, running on Keras with TensorFlow as the backend for deep learning, supported by various Python libraries. Recent existing models using RNNs have reached an accuracy of 98.23%, and a deep neural network (DNN) has reached 98.5%. The common drawbacks of the existing models are low accuracy due to the complex build-up of the neural network, a high number of neurons with redundancy in the neural network model, and the imbalanced Cleveland dataset. Experiments were conducted with various customized models, and results showed that the proposed model using an RNN and multiple GRUs with the synthetic minority oversampling technique (SMOTE) reached the best performance level. This is the highest accuracy result for an RNN using the Cleveland dataset and is very promising for making early heart disease predictions for patients.
An Efficient PSO Based Ensemble Classification Model on High Dimensional Data... (ijsc)
The document proposes a Particle Swarm Optimization (PSO) based ensemble classification model to improve classification of high-dimensional biomedical datasets. It develops an optimized PSO technique to select optimal features and initialize weights for base classifiers in the ensemble model. Experimental results on microarray datasets show the proposed model achieves higher accuracy, true positive rate, and lower error rate compared to traditional feature selection based classification models.
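As a point of reference, the core PSO loop that such a model builds on can be sketched generically. This is a textbook minimisation of the sphere function, not the paper's feature-selection or weight-initialisation scheme; all parameters (swarm size, inertia `w`, coefficients `c1`, `c2`) are illustrative defaults.

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization: each particle tracks its own
    best position, and the swarm shares a single global best."""
    random.seed(0)  # fixed seed so the sketch is reproducible
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity = inertia + pull toward personal best + global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Sphere function: global minimum 0 at the origin
best, best_val = pso_minimize(lambda x: sum(v * v for v in x), dim=3)
```

In a feature-selection setting, the position vector would typically be thresholded into a binary feature mask and `f` would score a classifier trained on the selected features.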
Deep Learning-Based Approach for Thyroid Dysfunction Prediction (IRJET Journal)
This document discusses using a deep learning artificial neural network (ANN) model to predict thyroid dysfunction based on patient data. It begins with an introduction to thyroid dysfunction and the need for accurate diagnosis. It then provides background on deep learning, ANNs, and relevant previous research applying machine learning to thyroid problems. The paper describes developing an ANN model using a dataset of 3772 patient records with 28 features. The ANN achieved 98.8% accuracy in identifying thyroid dysfunction. The findings demonstrate ANNs can reliably diagnose thyroid dysfunction early. However, more research is needed to validate the approach with more diverse patient populations. Overall, the results suggest machine learning and ANN models show promise for diagnosing thyroid dysfunction.
Comparative study of artificial neural network based classification for liver... (Alexander Decker)
This document presents a comparative study of different artificial neural network (ANN) classification models for predicting liver disease in patients. It evaluates ANN models like backpropagation, radial basis function, self-organizing map, and support vector machine on liver patient data. The support vector machine model achieved the highest accuracy at 99.76% for men data and 97.7% for women data, indicating it may be effective as a predictive tool for liver patients.
Model of Differential Equation for Genetic Algorithm with Neural Network (GAN... (Sarvesh Kumar)
This work applies differential equations (DE) and the computational techniques of genetic algorithms and neural networks (GANN), implemented in C#, which are widely used across the globalised world. Diagrams and flow charts are the main vehicle for explaining the two concepts, together with an indication of their present and future applications; the computational approach in C# is the new initiative taken in this paper. Observations made while developing and running these algorithms in C# under the given boundary-value conditions of the DE are also reported. Fitness-function evaluation and genetic operations were carried out to model the behavioural transmission of chromosomes.
SWARM OPTIMIZED MODULAR NEURAL NETWORK BASED DIAGNOSTIC SYSTEM FOR BREAST CAN... (ijscai)
The document describes a modular neural network approach optimized by particle swarm optimization for breast cancer diagnosis. The approach uses a modular neural network with several independent neural network experts that analyze input data individually and provide outputs that are combined by an integrator. Particle swarm optimization is used to determine optimal connections for each expert neural network during training. The optimized modular neural network is then used to classify breast cancer samples as cancerous or non-cancerous, demonstrating better diagnostic ability than traditional methods.
This document summarizes a study that used artificial neural networks (ANN) to segment MRI brain images into gray matter, white matter, and cerebrospinal fluid in order to analyze and classify three neurodegenerative diseases: Alzheimer's disease, Parkinson's disease, and epilepsy. Real MRI data from patients with these diseases was preprocessed, features were extracted using Gabor filters, and ANN was used to classify tissues. The ANN approach achieved 96.13% accuracy for Alzheimer's classification, 93.26% for Parkinson's, and 91.33% for epilepsy. The study demonstrated that ANN is effective for automated brain tissue segmentation and shows potential for assisting in diagnosis of neurological diseases.
This document discusses neural networks and their applications in signal processing. It begins with an introduction to neural networks and their biological inspiration. Applications of neural networks in signal processing are then outlined, including EEG, EMG, medical diagnosis, and more. The document reviews two papers on deep learning applications in biomedical fields and identifies challenges around model building and interpretability. It sets research objectives to optimize mathematical models and improve result interpretation. In conclusion, deep learning has advanced pattern recognition but challenges remain around data availability and model methodology.
IRJET- Prediction of Autism Spectrum Disorder using Deep Learning: A Survey (IRJET Journal)
This document summarizes research on using deep learning techniques to predict Autism Spectrum Disorder (ASD). It first provides background on ASD, describing it as a developmental disorder that impairs social communication and interaction. It then reviews related work applying machine learning to ASD prediction and diagnosis. The proposed system would use a deep learning model trained on an AQ10 dataset of behavioral questions to predict ASD severity. It would employ a multi-layer feedforward neural network optimized with the Adam gradient descent algorithm. The goal is to develop an accurate, fast and low-cost mobile application to help diagnose ASD at an early stage.
An efficient convolutional neural network-based classifier for an imbalanced ... (IAESIJAI)
Imbalanced datasets pose a major challenge for researchers addressing machine learning tasks. In such datasets the classes are not represented in equal proportion; rather, the gap between the numbers of samples in the individual classes is significantly large. Classification models perform better on datasets having an equal proportion of data tuples in each class, but in reality medical image datasets are skewed and hence not always suitable for a model to achieve improved classification performance. Various techniques have therefore been suggested in the literature to overcome this challenge. This paper applies an oversampling technique to an imbalanced dataset and focuses on a customized convolutional neural network model that classifies images into two categories: diseased and non-diseased. The outcome of the proposed model can assist health experts in the detection of oral cancer. The proposed model exhibits 99% accuracy after data augmentation, and performance metrics such as precision, recall and F1-score are very close to 1. In addition, a statistical test is performed to validate the statistical significance of the model. The proposed model is found to be an optimised classifier in terms of the number of network layers and the number of neurons.
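The oversampling idea can be illustrated with a SMOTE-style interpolation sketch in plain Python. The minority samples and parameters below are invented for the example, and the paper's actual augmentation pipeline may well differ (image augmentation rather than feature interpolation, for instance).

```python
import random

def smote_like_oversample(minority, n_new, k=3, seed=0):
    """SMOTE-style oversampling: synthesize new minority-class samples by
    interpolating between a sample and one of its k nearest neighbours."""
    rng = random.Random(seed)

    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours of x within the minority class (excluding x)
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: sq_dist(p, x))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append([xi + gap * (ni - xi) for xi, ni in zip(x, nb)])
    return synthetic

# Hypothetical 2-D minority-class feature vectors
minority = [[1.0, 1.0], [1.2, 0.9], [0.8, 1.1], [1.1, 1.2]]
new_samples = smote_like_oversample(minority, n_new=4)
```

Because each synthetic point is a convex combination of two real minority samples, it always lies inside the minority class's bounding region, which is what makes interpolation preferable to plain duplication.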
The human heart can be described as a complex organ made up of muscle together with biological nerves. It pumps nearly 5 litres of blood through the body, supplying it with renewed material [6]. If the heart does not operate properly, it affects other parts of the body such as the brain and kidneys. Various studies have revealed that heart disease has emerged as the number one killer in the world: about 25 per cent of deaths in the 25-69 age group occur because of heart disease. A number of factors increase the risk of heart disease, such as smoking, high cholesterol, high blood pressure, obesity and low physical exercise. The World Health Organisation (WHO) has estimated that 12 million deaths occur worldwide every year due to heart diseases, and that by 2030 almost 23.6 million people will die of heart disease. Cardiovascular disease includes coronary heart disease (CHD), cerebrovascular disease (stroke), hypertensive heart disease, congenital heart disease, peripheral artery disease, rheumatic heart disease and inflammatory heart disease [5].
Health Care Application using Machine Learning and Deep Learning (IRJET Journal)
This document presents a study on using machine learning and deep learning techniques for healthcare applications like disease prediction. It discusses algorithms like logistic regression, decision trees, random forests, SVMs and deep learning models like VGG16 applied to various disease datasets. For diabetes, heart and liver diseases, ML algorithms were used, while CNN models were used for the malaria and pneumonia image datasets. Random forest achieved the highest accuracy of 84.81% for diabetes prediction, SVM had 81.57% accuracy for heart disease, and random forest was best at 83.33% for liver disease. The VGG16 model attained accuracies of 94.29% and 95.48% for malaria and pneumonia respectively. The study aims to develop an intelligent healthcare application for predicting different diseases.
Predictive Data Mining with Normalized Adaptive Training Method for Neural Ne... (IJERDJOURNAL)
Abstract: Predictive data mining is an upcoming and fast-growing field that offers a competitive edge for the benefit of organizations. In recent decades, researchers have developed new techniques and intelligent algorithms for predictive data mining. In this research paper, we propose a novel training algorithm for optimizing neural networks for prediction purposes and utilize it for the development of prediction models. Models developed in the MATLAB Neural Network Toolbox have been tested on insurance datasets taken from a live data warehouse. A comparative study of the proposed algorithm with other popular first- and second-order algorithms is presented to judge the predictive accuracy of the suggested technique. Various graphs are presented to analyse the convergence behaviour of the different algorithms towards the point of minimum error.
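For context, the first-order baseline such training algorithms are compared against is plain gradient descent. A minimal single-neuron sketch follows; the toy data and learning rate are illustrative assumptions, not the paper's algorithm.

```python
import random

def train_neuron(data, lr=0.1, epochs=200, seed=0):
    """First-order (stochastic gradient descent) training of one linear
    neuron on squared error -- the classic baseline for comparisons."""
    rng = random.Random(seed)
    w, b = rng.uniform(-1, 1), 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = w * x + b
            err = pred - y
            # squared-error gradients: dE/dw = err * x, dE/db = err
            w -= lr * err * x
            b -= lr * err
    return w, b

# Noiseless toy dataset following y = 2x + 1
data = [(x / 10, 2 * (x / 10) + 1) for x in range(10)]
w, b = train_neuron(data)
```

Second-order methods (e.g. Levenberg-Marquardt, as found in the MATLAB toolbox) use curvature information to take better-scaled steps, which is why such comparative studies usually plot convergence curves rather than only final error.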
Prediction of Heart Disease Using Machine Learning and Deep Learning Techniques (IRJET Journal)
This document discusses using machine learning and deep learning techniques to predict heart disease. It analyzes four algorithms - Adaboost Classifier, ExtraTrees Classifier, Convolutional Neural Network (CNN), and Multilayer Perceptron (MLP) using a dataset of 1190 heart disease cases. CNN achieved the highest prediction accuracy of 98.28%, outperforming the other algorithms. The study concludes CNN is effective for heart disease prediction and identifying risks early could help improve outcomes. Future work may explore using fewer clinical parameters and focusing on Asian heart disease datasets.
Iganfis Data Mining Approach for Forecasting Cancer Threats (ijsrd.com)
This document proposes a new approach called IGANFIS that combines information gain (IG) and adaptive neuro fuzzy inference system (ANFIS) to classify cancer threats from medical data. IG is used to select the most important cancer features from the data to reduce dimensionality before being input to ANFIS. ANFIS then trains on the selected features to build a fuzzy inference system for cancer diagnosis and prediction. The approach is tested on breast cancer datasets and achieves higher classification accuracy compared to other methods.
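The information gain step that IGANFIS uses for feature ranking follows the standard entropy formula, IG(F) = H(Y) − Σ_v p(v) H(Y | F = v). A minimal sketch with hypothetical labels and features:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(Y) of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """IG(F) = H(Y) - sum over feature values v of p(v) * H(Y | F = v)."""
    n = len(labels)
    gain = entropy(labels)
    for v in set(feature_values):
        subset = [y for f, y in zip(feature_values, labels) if f == v]
        gain -= (len(subset) / n) * entropy(subset)
    return gain

# Hypothetical example: one feature perfectly separates the classes,
# another carries no class information at all
labels = ['malignant', 'benign', 'malignant', 'benign']
perfect = ['a', 'b', 'a', 'b']
useless = ['a', 'a', 'b', 'b']
ig_perfect = information_gain(perfect, labels)  # 1.0: removes all uncertainty
ig_useless = information_gain(useless, labels)  # 0.0: removes none
```

Ranking features by this score and keeping only the top-scoring ones is what reduces the input dimensionality before the ANFIS stage.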
The document discusses a hybrid CNN and LSTM network for heart disease prediction. It begins by noting heart disease is a leading cause of death worldwide and that traditional machine learning methods have achieved only 65-85% accuracy in prediction. The proposed method uses a convolutional neural network to extract features from heart disease data, and then an LSTM network to classify the data as normal or abnormal based on those features. When tested on heart disease data, the hybrid model achieved 89% accuracy, outperforming other machine learning algorithms like SVM, Naive Bayes and decision trees.
PREDICTION OF MALIGNANCY IN SUSPECTED THYROID TUMOUR PATIENTS BY THREE DIFFER... (cscpconf)
This document compares three classification methods - artificial neural networks, decision trees, and logistic regression - for predicting malignancy in thyroid tumor patients using a clinical dataset. It describes each method and applies them to a dataset of 259 thyroid tumor patients. The artificial neural network achieved 98% accuracy on the training set and 92% on the validation set. The decision tree method used 150 cases to build a model and achieved 86% accuracy. Logistic regression analysis resulted in 88% accuracy. The artificial neural network was able to accurately predict malignancy and identified important attributes like multiple nodules and family cancer history.
This document compares the performance of various machine learning and classification algorithms, including neural networks, support vector machines, Naive Bayes, decision trees, and decision stumps. It analyzes these algorithms using a dataset of annual and monthly temperature data from India over 1901-2012. The analysis is conducted in RapidMiner and finds that neural networks and support vector machines can effectively model complex nonlinear relationships to predict temperature. Neural networks achieved reasonably accurate predictions of annual temperature compared to the original data values. The document concludes by comparing the performance of the different algorithms.
IRJET- Overview of Artificial Neural Networks Applications in Groundwater... (IRJET Journal)
This document provides an overview of applications of artificial neural networks (ANNs) in groundwater studies. It discusses how ANNs mimic the human brain and can be used to model complex groundwater systems. It then summarizes several ways that ANNs have been successfully applied in groundwater hydrodynamics, water resources management, time series forecasting, and other areas. These include using ANNs to model coastal aquifers, predict groundwater levels, forecast water quality, and combine ANNs with other models for improved results. In summary, ANNs are a powerful tool for solving hydrogeological problems and have been widely used in groundwater research.
Similar to Successive iteration method for reconstruction of missing data (20)
MODELING AND ANALYSIS OF SURFACE ROUGHNESS AND WHITE LAYER THICKNESS IN WIRE-... (IAEME Publication)
The white layer thickness (WLT) and surface roughness formed in wire electric discharge turning (WEDT) of tungsten carbide composite have been modelled through response surface methodology (RSM). A Taguchi standard design of experiments involving five input variables at three levels was employed to establish a mathematical model between input parameters and responses. The percentage of cobalt content, spindle speed, pulse on-time, wire feed and pulse off-time were varied during the experimental tests based on the Taguchi orthogonal array L27 (3^13). Analysis of variance (ANOVA) revealed that the mathematical models obtained can adequately describe performance within the ranges of the factors considered. There was good agreement between the experimental and predicted values in this study.
A STUDY ON THE REASONS FOR TRANSGENDER TO BECOME ENTREPRENEURS (IAEME Publication)
The study explores the reasons for transgender persons to become entrepreneurs. Being a transgender entrepreneur was taken as the independent variable and the reasons to become one as the dependent variable. Data were collected through a structured questionnaire containing a five-point Likert scale from 30 transgender entrepreneurs in the Salem Municipal Corporation of Tamil Nadu State, India, selected through simple random sampling. The Garrett ranking technique (percentile position, mean scores) was used for the analysis, identifying the top 13 stimulus factors for the establishment of trans entrepreneurial ventures. The economic advancement of a nation is governed by the outcome of resolute entrepreneurial activity, and the conception of entrepreneurship has stretched and materialized to the socially deflated, uncharted sections of the transgender community. Presently transgender persons have smashed their stereotypes and are making recent headlines of achievement in various fields of Indian society; the trans community is gradually being observed in a new light and has been trying to achieve prospective growth in entrepreneurship. The findings of the research revealed that optimistic changes are taking place toward an affirmative societal outlook on transgender entrepreneurial ventureship, and the study laid emphasis on other transgender persons renovating their traditional living. The paper also highlights that legislators and supervisory bodies should endorse impartial canons and reforms in the Tamil Nadu Transgender Welfare Board Association.
BROAD UNEXPOSED SKILLS OF TRANSGENDER ENTREPRENEURS (IAEME Publication)
Since ages gender difference is always a debatable theme whether caused by nature, evolution or environment. The birth of a transgender is dreadful not only for the child but also for their parents. The pain of living in the wrong physique and treated as second class victimized citizen is outrageous and fully harboured with vicious baseless negative scruples. For so long, social exclusion had perpetuated inequality and deprivation experiencing ingrained malign stigma and besieged victims of crime or violence across their life spans. They are pushed into the murky way of life with a source of eternal disgust, bereft sexual potency and perennial fear. Although they are highly visible but very little is known about them. The common public needs to comprehend the ravaged arrogance on these insensitive souls and assist in integrating them into the mainstream by offering equal opportunity, treat with humanity and respect their dignity. Entrepreneurship in the current age is endorsing the gender fairness movement. Unstable careers and economic inadequacy had inclined one of the gender variant people called Transgender to become entrepreneurs. These tiny budding entrepreneurs resulted in economic transition by means of employment, free from the clutches of stereotype jobs, raised standard of living and handful of financial empowerment. Besides all these inhibitions, they were able to witness a platform for skill set development that ignited them to enter into entrepreneurial domain. This paper epitomizes skill sets involved in trans-entrepreneurs of Thoothukudi Municipal Corporation of Tamil Nadu State and is a groundbreaking determination to sightsee various skills incorporated and the impact on entrepreneurship.
DETERMINANTS AFFECTING THE USER'S INTENTION TO USE MOBILE BANKING APPLICATIONS (IAEME Publication)
The banking and financial services industries are experiencing increased technology penetration. Among them, the banking industry has made technological advancements to better serve the general populace. The economy focused on transforming the banking sector's system into a cashless, paperless, and faceless one. The researcher wants to evaluate the user's intention for utilising a mobile banking application. The study also examines the variables affecting the user's behaviour intention when selecting specific applications for financial transactions. The researcher employed a well-structured questionnaire and a descriptive study methodology to gather the respondents' primary data utilising the snowball sampling technique. The study includes variables like performance expectations, effort expectations, social impact, enabling circumstances, and perceived risk. Each of the aforementioned variables has a major impact on how users utilise mobile banking applications. The outcome will assist the service provider in comprehending the user's history with mobile banking applications.
ANALYSE THE USER PREDILECTION ON GPAY AND PHONEPE FOR DIGITAL TRANSACTIONS (IAEME Publication)
Technology upgrades in the banking sector have moved the economy toward online payment using mobile applications. This system enables convenient connectivity between banks, merchants and users. Various applications are used for online transactions, such as Google Pay, Paytm, Freecharge, MobiKwik, Oxigen and PhonePe, alongside the banks' own mobile banking applications. The study aimed to evaluate users' predilections in adopting digital transactions. The study is descriptive in nature, and the researcher used random sampling techniques to collect the data. The findings reveal that the mobile applications differ in the quality of service rendered by GPay and PhonePe. The researcher suggests that the PhonePe application should focus on a more user-friendly interface, and GPay on motivating users to appreciate the request-money feature and the modes of payment in the application.
VOICE BASED ATM FOR VISUALLY IMPAIRED USING ARDUINO (IAEME Publication)
The prototype of a voice-based ATM for the visually impaired, built on Arduino, is intended to help people who are blind. It uses RFID cards that contain the user's fingerprint encrypted on them and interacts with the user through voice commands. The ATM operates when a sensor detects the presence of one person in the cabin. After scanning the RFID card, it asks the user to select a mode, normal or blind, via voice input. If blind mode is selected, balance checks and cash withdrawals can be done through voice input; the normal-mode procedure is the same as at an existing ATM.
IMPACT OF EMOTIONAL INTELLIGENCE ON HUMAN RESOURCE MANAGEMENT PRACTICES AMONG... (IAEME Publication)
There is increasing acceptability of emotional intelligence as a major factor in personality assessment and effective human resource management. Emotional intelligence as the ability to build capacity, empathize, co-operate, motivate and develop others cannot be divorced from both effective performance and human resource management systems. The human person is crucial in defining organizational leadership and fortunes in terms of challenges and opportunities and walking across both multinational and bilateral relationships. The growing complexity of the business world requires a great deal of self-confidence, integrity, communication, conflict and diversity management to keep the global enterprise within the paths of productivity and sustainability. Using the exploratory research design and 255 participants the result of this original study indicates strong positive correlation between emotional intelligence and effective human resource management. The paper offers suggestions on further studies between emotional intelligence and human capital development and recommends for conflict management as an integral part of effective human resource management.
VISUALISING AGING PARENTS & THEIR CLOSE CARERS LIFE JOURNEY IN AGING ECONOMY (IAEME Publication)
Our life journey, in general, is closely defined by the way we understand the meaning of why we coexist and deal with its challenges. As we develop the "inspiration economy", we could say that nearly all of the challenges we have faced are opportunities that help us to discover the rest of our journey. In this note paper, we explore how being faced with the opportunity of being a close carer for an aging parent with dementia brought intangible discoveries that changed our insight of the meaning of the rest of our life journey.
A STUDY ON THE IMPACT OF ORGANIZATIONAL CULTURE ON THE EFFECTIVENESS OF PERFO... (IAEME Publication)
The main objective of this study is to analyze the impact of aspects of Organizational Culture on the Effectiveness of the Performance Management System (PMS) in the Health Care Organization at Thanjavur. Organizational Culture and PMS play a crucial role in present-day organizations in achieving their objectives. PMS needs employees’ cooperation to achieve its intended objectives. Employees' cooperation depends upon the organization’s culture. The present study uses exploratory research to examine the relationship between the Organization's culture and the Effectiveness of the Performance Management System. The study uses a Structured Questionnaire to collect the primary data. For this study, Thirty-six non-clinical employees were selected from twelve randomly selected Health Care organizations at Thanjavur. Thirty-two fully completed questionnaires were received.
Living in the 21st century reminds all of us of the necessity of the police and its administration. The more we advance into modern society and culture, the more we require the services of the so-called 'khaki-clad' personnel, i.e., the police. Whether we speak of the Indian police or any other nation's, they enjoy the same recognition as they do in India. But, as already mentioned, expectations of their service have changed after incidents like that of 26th November 2008, where they sacrificed themselves without hesitation and without regard for their own families. In other words, they are like heroes and mentors who can guide us out of the darkness of fear, militancy, corruption and the other dark sides of life. The question then arises: if Gandhi were alive today, what would his opinion be of the police and its functioning? Would he have something different in mind now from what he had before Partition, or would he start a Satyagraha aimed at improving the functioning of police administration? These questions can come to anyone's mind when so much confusion prevails, when there is so much corruption in society, and when the working of the police is itself in question because of one case or another across India. It is a matter of great concern that we must reconsider our administration and our practical approach, because police personnel are also like us: they are part and parcel of our society and one of us, so why do we all point fingers at them?
A STUDY ON TALENT MANAGEMENT AND ITS IMPACT ON EMPLOYEE RETENTION IN SELECTED... (IAEME Publication)
The goal of this study was to see how talent management affected employee retention in the selected IT organizations in Chennai. The fundamental issue was the difficulty to attract, hire, and retain talented personnel who perform well and the gap between supply and demand of talent acquisition and retaining them within the firms. The study's main goals were to determine the impact of talent management on employee retention in IT companies in Chennai, investigate talent management strategies that IT companies could use to improve talent acquisition, performance management, career planning and formulate retention strategies that the IT firms could use. The respondents were given a structured close-ended questionnaire with the 5 Point Likert Scale as part of the study's quantitative research design. The target population consisted of 289 IT professionals. The questionnaires were distributed and collected by the researcher directly. The Statistical Package for Social Sciences (SPSS) was used to collect and analyse the questionnaire responses. Hypotheses that were formulated for the various areas of the study were tested using a variety of statistical tests. The key findings of the study suggested that talent management had an impact on employee retention. The studies also found that there is a clear link between the implementation of talent management and retention measures. Management should provide enough training and development for employees, clarify job responsibilities, provide adequate remuneration packages, and recognise employees for exceptional performance.
ATTRITION IN THE IT INDUSTRY DURING COVID-19 PANDEMIC: LINKING EMOTIONAL INTE... (IAEME Publication)
Globally, Millions of dollars were spent by the organizations for employing skilled Information Technology (IT) professionals. It is costly to replace unskilled employees with IT professionals possessing technical skills and competencies that aid in interconnecting the business processes. The organization’s employment tactics were forced to alter by globalization along with technological innovations as they consistently diminish to remain lean, outsource to concentrate on core competencies along with restructuring/reallocate personnel to gather efficiency. As other jobs, organizations or professions have become reasonably more appropriate in a shifting employment landscape, the above alterations trigger both involuntary as well as voluntary turnover. The employee view on jobs is also afflicted by the COVID-19 pandemic along with the employee-driven labour market. So, having effective strategies is necessary to tackle the withdrawal rate of employees. By associating Emotional Intelligence (EI) along with Talent Management (TM) in the IT industry, the rise in attrition rate was analyzed in this study. Only 303 respondents were collected out of 350 participants to whom questionnaires were distributed. From the employees of IT organizations located in Bangalore (India), the data were congregated. A simple random sampling methodology was employed to congregate data as of the respondents. Generating the hypothesis along with testing is eventuated. The effect of EI and TM along with regression analysis between TM and EI was analyzed. The outcomes indicated that employee and Organizational Performance (OP) were elevated by effective EI along with TM.
INFLUENCE OF TALENT MANAGEMENT PRACTICES ON ORGANIZATIONAL PERFORMANCE A STUD... (IAEME Publication)
By implementing a talent management strategy, organizations can retain their skilled professionals while also improving their overall performance. Talent management is the process of appropriately utilizing the right individuals, preparing them for future top positions, exploring and managing their performance, and holding them back from leaving the organization. It is employee performance that determines the success of every organization: a firm quickly obtains an upper hand over its rivals if its employees have particular skills that cannot be duplicated by competitors. Thus, firms are centred on creating successful talent management practices and processes to deal with their unique human resources. Firms also endeavour to keep their top and key staff, since if they leave, the whole store of knowledge leaves the firm's hands. The study's objective was to determine the impact of talent management on organizational performance among the selected IT organizations in Chennai. The study finds that talent management has a limited effect on performance; if this talent is properly managed and the strategy implemented well, organizations can make the most of their retained assets to support development and productivity, both monetarily and non-monetarily.
A STUDY OF VARIOUS TYPES OF LOANS OF SELECTED PUBLIC AND PRIVATE SECTOR BANKS... (IAEME Publication)
The Banking Regulation Act of India, 1949 defines banking as "acceptance of deposits for the purpose of lending or investment from the public, repayable on demand or otherwise and withdrawable through cheques, drafts, orders or otherwise". The major participants of the Indian financial system are commercial banks; the financial institutions encompassing term-lending institutions, investment institutions, specialized financial institutions and the state-level development banks; non-banking financial companies (NBFCs); and other market intermediaries such as stock brokers and money lenders, certain variants of NBFCs being among the oldest market participants. The asset quality of banks is one of the most important indicators of their financial health. The Indian banking sector has been facing severe problems of increasing non-performing assets (NPAs). NPA growth directly and indirectly affects the quality of assets and profitability of banks, and it also reflects the effectiveness of banks' credit risk management and recovery. NPAs do not generate any income, while the bank is required to make provisions for such assets, which is why they are a double-edged weapon. This paper examines the quality of different types of bank loans, such as housing, agriculture and MSME loans, in the state of Haryana for selected public and private sector banks. The study highlights problems associated with the role of commercial banks in financing small and medium-scale enterprises (SMEs). The overall objective of the research was to assess the effect of the financing provisions existing for the setting up and operation of MSMEs in the country and to generate recommendations for more robust financing mechanisms for successful operation of MSMEs, in turn understanding the impact of MSME loans on financial institutions due to NPAs.
Much research has been conducted on the topic of Non-Performing Asset (NPA) management, concerning particular banks, comparative studies of public and private banks, etc. In this paper the researcher considers aggregate data of selected public sector and private sector banks and attempts to compare the NPAs of Housing, Agriculture and MSME loans of public and private sector banks in the state of Haryana. The tools used in the study are averages, variance and the ANOVA test. The findings reveal that NPAs are a common problem for both public and private sector banks and are associated with all types of loans, whether housing loans, agriculture loans or loans to SMEs. NPAs of both public and private sector banks show an increasing trend. In 2010-11 the GNPA of public and private sector banks was at the same level, around 2%, but after 2010-11 it increased manifold, and at present the GNPA of some banks is more than 15%. This shows a dark area of the Indian banking sector.
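The one-way ANOVA used in the study reduces to comparing between-group and within-group variance. A minimal sketch of the F-statistic computation, using purely hypothetical GNPA figures for illustration (these are not the study's data):

```python
def one_way_anova(*groups):
    """Return the F-statistic for a one-way ANOVA across the given groups."""
    all_vals = [x for g in groups for x in g]
    grand = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    # between-group sum of squares
    ssb = sum(len(g) * ((sum(g) / len(g)) - grand) ** 2 for g in groups)
    # within-group sum of squares
    ssw = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ssb / (k - 1)) / (ssw / (n - k))

# Hypothetical GNPA ratios (%) over several years, for illustration only
public_gnpa  = [2.1, 3.4, 5.0, 9.6, 11.2, 14.8]
private_gnpa = [2.0, 2.2, 2.7, 3.1, 3.9, 4.6]
f_stat = one_way_anova(public_gnpa, private_gnpa)
```

A large F relative to the critical value at the chosen significance level would indicate that the mean NPA levels of the two bank groups differ significantly.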
EXPERIMENTAL STUDY OF MECHANICAL AND TRIBOLOGICAL RELATION OF NYLON/BaSO4 POL...IAEME Publication
The experiments conducted in this study found that BaSO4 changed the mechanical properties of Nylon 6. Nylon-6/BaSO4 composites were prepared by varying the weight ratio of BaSO4. The researchers investigated the hardness and wear behaviour of the Nylon-6/BaSO4 composites. Experiments were designed using a Taguchi L9 design. The hardness number of the Nylon-6/BaSO4 composites was measured using a Rockwell hardness testing apparatus. The wear behaviour of Nylon/BaSO4 was measured on a pin-on-disc wear monitor by varying reinforcement, sliding speed and sliding distance, and the microstructure of the crack surfaces was observed by SEM. This study finds a significant contribution to ultimate strength from increasing the BaSO4 content up to 16% in the composites, with sliding speed contributing 72.45% to the wear rate.
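A Taguchi L9 design runs three factors (here reinforcement, sliding speed, sliding distance) at three levels each in only nine trials, using a standard orthogonal array. A sketch of the array and a main-effects calculation, with purely illustrative wear-rate responses (not the study's measurements):

```python
# Standard Taguchi L9(3^4) orthogonal array, levels coded 0..2.
# Each of the four columns can host one three-level factor.
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]

# Hypothetical wear-rate responses for the nine runs (illustrative only).
wear = [4.2, 3.8, 3.5, 3.9, 3.1, 2.8, 3.3, 2.6, 2.2]

def main_effects(array, response, col):
    """Mean response at each level of the factor assigned to `col`."""
    means = []
    for level in range(3):
        vals = [r for row, r in zip(array, response) if row[col] == level]
        means.append(sum(vals) / len(vals))
    return means

# e.g. effect of the factor in column 1 (say, sliding speed)
speed_effect = main_effects(L9, wear, 1)
```

Comparing the spread of the level means across factors is how a Taguchi analysis attributes percentage contributions, such as the 72.45% reported for sliding speed.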
ROLE OF SOCIAL ENTREPRENEURSHIP IN RURAL DEVELOPMENT OF INDIA - PROBLEMS AND ...IAEME Publication
The majority of the population in India lives in villages. The village is the backbone of the country. Village or rural industries play an important role in the national economy, particularly in rural development. Developing the rural economy is one of the key indicators of a country's success. Whether it is the need to look after the welfare of farmers or to invest in rural infrastructure, governments have to ensure that rural development is not compromised. The economic development of our country largely depends on the progress of rural areas and the standard of living of the rural masses. Rural entrepreneurship is based on stimulating local entrepreneurial talent and the subsequent growth of indigenous enterprises. It recognizes opportunity in rural areas and accelerates a unique blend of resources either inside or outside of agriculture. Rural entrepreneurship brings economic value to the rural sector by creating new methods of production, new markets and new products, and by generating employment opportunities, thereby ensuring continuous rural development. Social entrepreneurship has the direct and primary objective of serving society along with earning profits. Social entrepreneurship thus differs from economic entrepreneurship in that its basic objective is not to earn profits but to provide innovative solutions to societal needs that are not addressed by the majority of entrepreneurs, whose sole objective is profit making. Social entrepreneurs therefore have huge growth potential, particularly in developing countries like India, where there are huge societal disparities in the financial positions of the population.
Still, 22 percent of the Indian population is below the poverty line, and there is also disparity between the rural and urban population in terms of families living below the poverty line (BPL): 25.7 percent of the rural population and 13.7 percent of the urban population is under BPL, which clearly shows the concentration of poor people in rural areas. The need to develop social entrepreneurship in agriculture is dictated by a large number of social problems, including low living standards, unemployment and social tension; these factors led to the emergence of the practice of social entrepreneurship. The research problem lies in disclosing the importance of the role of social entrepreneurship in the rural development of India. The paper examines the tendencies of social entrepreneurship in India and presents successful examples of such businesses in order to provide recommendations on how to improve the situation in rural areas in terms of social entrepreneurship development. The Indian government has made some steps towards the development of social enterprises, social entrepreneurship and social innovation, but a lot remains to be improved.
OPTIMAL RECONFIGURATION OF POWER DISTRIBUTION RADIAL NETWORK USING HYBRID MET...IAEME Publication
The distribution system is a critical link between the electric power distributor and the consumers. The network most commonly used by electric utilities is the radial distribution network. However, this type of network suffers from technical issues such as large power losses, which affect the quality of supply. Nowadays, the introduction of Distributed Generation (DG) units into the system helps improve and support the voltage profile of the network as well as the performance of the system components through power loss mitigation. In this study, network reconfiguration was performed using two meta-heuristic algorithms, Particle Swarm Optimization and the Gravitational Search Algorithm (PSO-GSA), to enhance power quality and the voltage profile of the system when applied simultaneously with the DG units. The Backward/Forward Sweep method was used for the load flow analysis, simulated in MATLAB. Five cases were considered in the reconfiguration based on the contribution of DG units. The proposed method was tested on the IEEE 33-bus system. Based on the results, the voltage profile of the system improved from 0.9038 p.u. to 0.9594 p.u. The integration of DG into the network also reduced power losses from 210.98 kW to 69.3963 kW. Simulated results are presented to show the performance of each case.
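The PSO half of the hybrid iteratively moves candidate solutions toward personal and global bests. A minimal sketch of that component on a toy objective (the study's actual fitness would evaluate load-flow power losses for a candidate network configuration, and the GSA component and MATLAB implementation are not reproduced here):

```python
import random

random.seed(1)  # deterministic run for illustration

def pso(objective, dim, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal particle swarm optimizer that minimizes `objective`."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective: the sphere function, standing in for a power-loss evaluation.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```

In the reconfiguration setting, each particle would encode candidate switch states or DG placements, and the objective would be the total loss returned by a Backward/Forward Sweep load flow.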
APPLICATION OF FRUGAL APPROACH FOR PRODUCTIVITY IMPROVEMENT - A CASE STUDY OF...IAEME Publication
Manufacturing industries have witnessed an outburst in productivity. For productivity improvement, manufacturing industries are taking various initiatives using lean tools and techniques. In some manufacturing industries, however, the frugal approach is applied in product design and services as a tool for improvement. The frugal approach has helped prove that less is more and seems to contribute indirectly to improving productivity. Hence, there is a need to understand the status of frugal-approach application in manufacturing industries. All manufacturing industries are trying hard and putting in continuous efforts for competitive existence. For productivity improvement, manufacturing industries are coming up with different effective and efficient solutions in manufacturing processes and operations. To overcome current challenges, manufacturing industries have started using the frugal approach in product design and services. The methodology for this study draws on both primary and secondary sources of data: interviews and observation were used as primary sources, while a review of literature available on websites, in printed magazines, manuals, etc. served as the secondary source. An attempt has been made to understand the application of the frugal approach through the study of a manufacturing industry project; the industry selected for this project study is Mahindra and Mahindra Ltd. This paper will help researchers find the connections between the two concepts of productivity improvement and the frugal approach, understand the significance of the frugal approach for productivity improvement, and understand the current scenario of the frugal approach in manufacturing industry. In manufacturing industries, various processes are involved in delivering the final product, and in converting input into output through manufacturing processes, productivity plays a very critical role.
Hence this study will help evaluate the status of the frugal approach in productivity improvement programmes. The notion of frugality can be viewed as an approach towards productivity improvement in manufacturing industries.
A MULTIPLE-CHANNEL QUEUING MODEL IN A FUZZY ENVIRONMENT...IAEME Publication
In this paper, we investigate a fuzzy-environment-based multiple-channel queuing model (M/M/C) ( /FCFS) and study its performance under realistic conditions. A nonagonal fuzzy number is applied to analyse the relevant performance measures of the multiple-channel queuing model. Based on the sub-interval average ranking method for nonagonal fuzzy numbers, we convert the fuzzy numbers to crisp ones. Numerical results reveal the efficiency of this method, and intuitively, the fuzzy environment adapts very well to the multiple-channel queuing model (M/M/C) ( /FCFS).
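Once the fuzzy parameters are ranked into crisp values, the standard M/M/C performance measures apply. A sketch with the classical Erlang C formula, using a plain average of the nine fuzzy points as a simple stand-in for the paper's sub-interval average ranking method (all numbers below are illustrative assumptions):

```python
from math import factorial

def mmc_metrics(lam, mu, c):
    """Crisp M/M/C queue: P(wait), Lq, Wq via the Erlang C formula."""
    rho = lam / (c * mu)          # server utilization
    assert rho < 1, "queue must be stable"
    a = lam / mu                  # offered load in Erlangs
    p0_inv = (sum(a**k / factorial(k) for k in range(c))
              + a**c / (factorial(c) * (1 - rho)))
    p_wait = (a**c / (factorial(c) * (1 - rho))) / p0_inv   # Erlang C
    lq = p_wait * rho / (1 - rho)   # mean queue length
    wq = lq / lam                   # mean wait in queue (Little's law)
    return p_wait, lq, wq

# Hypothetical nonagonal fuzzy arrival rate, defuzzified by a simple average
# of its nine points -- a stand-in for the sub-interval average ranking method.
fuzzy_lam = [2, 3, 4, 5, 6, 7, 8, 9, 10]
lam = sum(fuzzy_lam) / len(fuzzy_lam)
p_wait, lq, wq = mmc_metrics(lam, mu=4.0, c=2)
```

With these illustrative rates (λ = 6, μ = 4, c = 2), utilization is 0.75 and roughly 64% of arrivals have to wait.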
"What does it really mean for your system to be available, or how to define w...Fwdays
We will talk about system monitoring from a few different angles. We will start by covering the basics, then discuss SLOs, how to define them, and why understanding the business well is crucial for success in this exercise.
Introducing BoxLang : A new JVM language for productivity and modularity!Ortus Solutions, Corp
Just like life, our code must adapt to the ever-changing world we live in: one day coding for the web, the next for tablets, APIs, or serverless applications. Multi-runtime development is the future of coding; the future is dynamic. Let us introduce you to BoxLang.
Dynamic. Modular. Productive.
BoxLang redefines development with its dynamic nature, empowering developers to craft expressive and functional code effortlessly. Its modular architecture prioritizes flexibility, allowing for seamless integration into existing ecosystems.
Interoperability at its Core
With 100% interoperability with Java, BoxLang seamlessly bridges the gap between traditional and modern development paradigms, unlocking new possibilities for innovation and collaboration.
Multi-Runtime
From the tiny 2MB operating-system binary to running on our pure Java web server, CommandBox, Jakarta EE, AWS Lambda, Microsoft Functions, WebAssembly, Android and more, BoxLang has been designed to enhance and adapt according to its runtime.
The Fusion of Modernity and Tradition
Experience the fusion of modern features inspired by CFML, Node, Ruby, Kotlin, Java, and Clojure, combined with the familiarity of Java bytecode compilation, making BoxLang a language of choice for forward-thinking developers.
Empowering Transition with Transpiler Support
Transitioning from CFML to BoxLang is seamless with our JIT transpiler, facilitating smooth migration and preserving existing code investments.
Unlocking Creativity with IDE Tools
Unleash your creativity with powerful IDE tools tailored for BoxLang, providing an intuitive development experience and streamlining your workflow. Join us as we embark on a journey to redefine JVM development. Welcome to the era of BoxLang.
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
The Microsoft 365 Migration Tutorial For Beginner.pptxoperationspcvita
This presentation will help you understand the power of Microsoft 365. We have covered every productivity app included in Office 365, described migration scenarios related to Office 365, and explained how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
AI in the Workplace Reskilling, Upskilling, and Future Work.pptxSunil Jagani
Discover how AI is transforming the workplace and learn strategies for reskilling and upskilling employees to stay ahead. This comprehensive guide covers the impact of AI on jobs, essential skills for the future, and successful case studies from industry leaders. Embrace AI-driven changes, foster continuous learning, and build a future-ready workforce.
Read More - https://bit.ly/3VKly70
"Scaling RAG Applications to serve millions of users", Kevin GoedeckeFwdays
How we managed to grow and scale a RAG application from zero to thousands of users in 7 months. Lessons from technical challenges around managing high load for LLMs, RAGs and Vector databases.
This talk will cover ScyllaDB Architecture from the cluster-level view and zoom in on data distribution and internal node architecture. In the process, we will learn the secret sauce used to get ScyllaDB's high availability and superior performance. We will also touch on the upcoming changes to ScyllaDB architecture, moving to strongly consistent metadata and tablets.
AppSec PNW: Android and iOS Application Security with MobSFAjin Abraham
Mobile Security Framework (MobSF) is a free and open source automated mobile application security testing environment designed to help security engineers, researchers, developers, and penetration testers identify security vulnerabilities, malicious behaviours and privacy concerns in mobile applications using static and dynamic analysis. It supports all the popular mobile application binaries and source code formats built for Android and iOS devices. In addition to automated security assessment, it also offers an interactive testing environment to build and execute scenario-based test/fuzz cases against the application.
This talk covers:
Using MobSF for static analysis of mobile applications.
Interactive dynamic security assessment of Android and iOS applications.
Solving Mobile app CTF challenges.
Reverse engineering and runtime analysis of Mobile malware.
How to shift left and integrate MobSF/mobsfscan SAST and DAST in your build pipeline.
ScyllaDB is making a major architecture shift. We’re moving from vNode replication to tablets – fragments of tables that are distributed independently, enabling dynamic data distribution and extreme elasticity. In this keynote, ScyllaDB co-founder and CTO Avi Kivity explains the reason for this shift, provides a look at the implementation and roadmap, and shares how this shift benefits ScyllaDB users.
Dandelion Hashtable: beyond billion requests per second on a commodity serverAntonios Katsarakis
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables that go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state of the art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line chaining. This design (1) offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resize. On a commodity server with a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
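The core structural idea of closed addressing with bounded chains of slot groups can be sketched sequentially. The toy below chains fixed-size slot arrays per bucket as a stand-in for cache lines; DLHT's lock-free operations, software prefetching, and non-blocking parallel resize are deliberately not reproduced:

```python
class BoundedChainTable:
    """Toy closed-addressing hashtable with bounded per-bucket chains.

    Each bucket is a chain of fixed-size slot groups, mimicking the idea of
    chaining whole "cache lines" rather than individual nodes. Deletes free
    their slot immediately, unlike tombstone-based open addressing.
    """
    BUCKET_SLOTS = 8          # stands in for one cache line of slots

    def __init__(self, n_buckets=16):
        self.n_buckets = n_buckets
        self.buckets = [[[]] for _ in range(n_buckets)]
        self.size = 0

    def _bucket(self, key):
        return self.buckets[hash(key) % self.n_buckets]

    def get(self, key):
        for line in self._bucket(key):
            for k, v in line:
                if k == key:
                    return v
        return None

    def put(self, key, value):
        bucket = self._bucket(key)
        for line in bucket:                    # update in place if present
            for i, (k, _) in enumerate(line):
                if k == key:
                    line[i] = (key, value)
                    return
        if len(bucket[-1]) >= self.BUCKET_SLOTS:
            bucket.append([])                  # chain a new slot group
        bucket[-1].append((key, value))
        self.size += 1

    def delete(self, key):
        for line in self._bucket(key):
            for i, (k, _) in enumerate(line):
                if k == key:
                    line.pop(i)                # slot freed instantly
                    self.size -= 1
                    return True
        return False
```

The contrast with open addressing is the delete path: no tombstones are needed, so slots become reusable the moment an entry is removed.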
Session 1 - Intro to Robotic Process Automation.pdfUiPathCommunity
👉 Check out our full 'Africa Series - Automation Student Developers (EN)' page to register for the full program:
https://bit.ly/Automation_Student_Kickstart
In this session, we shall introduce you to the world of automation, the UiPath Platform, and guide you on how to install and setup UiPath Studio on your Windows PC.
📕 Detailed agenda:
What is RPA? Benefits of RPA?
RPA Applications
The UiPath End-to-End Automation Platform
UiPath Studio CE Installation and Setup
💻 Extra training through UiPath Academy:
Introduction to Automation
UiPath Business Automation Platform
Explore automation development with UiPath Studio
👉 Register here for our upcoming Session 2 on June 20: Introduction to UiPath Studio Fundamentals: https://community.uipath.com/events/details/uipath-lagos-presents-session-2-introduction-to-uipath-studio-fundamentals/