The document describes implementing an artificial neural network using backpropagation to predict the cellular localization sites of proteins in a yeast dataset. Specifically:
- A three-layer feedforward neural network with a backpropagation algorithm is simulated in C++ to classify proteins into 10 different localization sites based on 8 attributes in the yeast dataset.
- The yeast dataset and classification scheme are described, including the 8 input attributes and 10 possible output classes representing different cellular locations.
- The backpropagation algorithm is explained and implemented on the simulated neural network to train it using the yeast dataset, with weights updated based on calculated error gradients.
- Results are evaluated by varying the number of hidden-layer nodes and comparing accuracy against other algorithms to optimize performance.
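The training loop summarized above (forward pass, error-gradient computation, weight update) can be sketched concretely. The following is a minimal pure-Python illustration of a three-layer feedforward network trained with backpropagation; the XOR toy data, layer sizes, learning rate, and epoch count are illustrative assumptions, not the yeast setup or the C++ implementation the document describes.

```python
# Minimal backpropagation sketch for a three-layer feedforward network.
# Toy XOR data and all hyperparameters are illustrative assumptions.
import math
import random

random.seed(0)

N_IN, N_HID, N_OUT = 2, 4, 1   # illustrative layer sizes
LR = 0.5                       # illustrative learning rate

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Randomly initialised weights; bias folded in as an extra +1 input.
w_hid = [[random.uniform(-1, 1) for _ in range(N_IN + 1)] for _ in range(N_HID)]
w_out = [[random.uniform(-1, 1) for _ in range(N_HID + 1)] for _ in range(N_OUT)]

def forward(x):
    xb = x + [1.0]
    h = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in w_hid]
    hb = h + [1.0]
    o = [sigmoid(sum(w * v for w, v in zip(row, hb))) for row in w_out]
    return h, o

def train_step(x, target):
    h, o = forward(x)
    xb, hb = x + [1.0], h + [1.0]
    # Output-layer error gradient: delta = (o - t) * o * (1 - o).
    d_out = [(o[k] - target[k]) * o[k] * (1 - o[k]) for k in range(N_OUT)]
    # Hidden-layer gradients, backpropagated through the output weights.
    d_hid = [h[j] * (1 - h[j]) * sum(d_out[k] * w_out[k][j] for k in range(N_OUT))
             for j in range(N_HID)]
    # Gradient-descent weight updates.
    for k in range(N_OUT):
        for j in range(N_HID + 1):
            w_out[k][j] -= LR * d_out[k] * hb[j]
    for j in range(N_HID):
        for i in range(N_IN + 1):
            w_hid[j][i] -= LR * d_hid[j] * xb[i]
    return sum((o[k] - target[k]) ** 2 for k in range(N_OUT))

data = [([0.0, 0.0], [0.0]), ([0.0, 1.0], [1.0]),
        ([1.0, 0.0], [1.0]), ([1.0, 1.0], [0.0])]

losses = []
for epoch in range(2000):
    losses.append(sum(train_step(x, t) for x, t in data))

print(losses[0], losses[-1])  # squared error before vs. after training
```

Varying `N_HID`, as the evaluation above does, trades capacity against overfitting; the rest of the loop is unchanged.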
Implementation and Evaluation of Signal Processing Techniques for EEG based B... (Damian Quinn)
This document compares two approaches for classifying EEG signals from a brain-computer interface: a multi-layer perceptron neural network with Levenberg-Marquardt learning, and an Adaptive Neuro-Fuzzy Inference System. It analyzes EEG data from a dataset involving motor imagery tasks of left and right hand movement. Features are extracted from the EEG signals, and both the neural network and ANFIS are used to classify the signals based on those features. The performance of the two classification approaches is then compared to determine whether the hybrid ANFIS method can outperform the established neural network approach.
This document summarizes a research paper that proposes a new neural network algorithm called C-Mantec. C-Mantec adds competition between neurons using a thermal perceptron learning rule. This allows existing neurons to continue learning even as new neurons are added. The algorithm was tested on a diabetes dataset and shown to generate compact neural network architectures with good generalization capabilities. It was implemented in an FPGA using Xilinx ISE for synthesis and ModelSim for simulation. The output was obtained over three clock cycles, first loading inputs/weights, then multiplying/accumulating weights, and finally checking the output against a threshold. The study showed C-Mantec can effectively model glucose level fluctuations to determine if a patient has diabetes.
Introduction Of Artificial neural network (Nagarajan)
The document summarizes different types of artificial neural networks including their structure, learning paradigms, and learning rules. It discusses artificial neural networks (ANN), their advantages, and major learning paradigms - supervised, unsupervised, and reinforcement learning. It also explains different mathematical synaptic modification rules like backpropagation of error, correlative Hebbian, and temporally-asymmetric Hebbian learning rules. Specific learning rules discussed include the delta rule, the pattern associator, and the Hebb rule.
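The delta rule mentioned above can be stated in a few lines: for a linear unit with output y = w·x, each weight is updated by Δwᵢ = η(t − y)xᵢ, which is gradient descent on the squared error. A minimal sketch with made-up numbers:

```python
# Delta rule (Widrow-Hoff) sketch for a single linear unit:
# w[i] += eta * (target - y) * x[i], i.e. gradient descent on squared error.
# The learning rate, weights, input, and target below are illustrative.
eta = 0.1
w = [0.0, 0.0]
x, target = [1.0, 2.0], 1.0

def output(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

errors = []
for _ in range(50):
    y = output(w, x)
    err = target - y
    errors.append(err * err)
    w = [wi + eta * err * xi for wi, xi in zip(w, x)]

print(w)  # weights after repeated delta-rule updates
```

On this single example the error shrinks geometrically each step, since the update moves y directly toward the target.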
Artificial Neural Network and its Applications (shritosh kumar)
Abstract
This report is an introduction to Artificial Neural Networks. The various types of neural networks are explained and demonstrated, applications of neural networks like ANNs in medicine are described, and a detailed historical background is provided. The connection between the artificial and the real thing is also investigated and explained. Finally, the mathematical models involved are presented and demonstrated.
Artificial neural networks seminar presentation using MSWord. (Mohd Faiz)
This document provides an overview of artificial neural networks. It discusses neural network architectures including feedforward and recurrent networks. It covers neural network learning methods such as supervised learning, unsupervised learning, and reinforcement learning. Backpropagation is described as a method for training neural networks by calculating partial derivatives of the error function. Higher order learning algorithms and considerations for designing neural networks like choosing the number of hidden layers and activation functions are also summarized.
The document discusses artificial neural networks (ANNs). It provides an overview of ANNs, including their biological inspiration from neurons in the brain, their composition of interconnected processing elements called neurons, and how they are configured for applications like pattern recognition. The document also covers different types of ANNs, their computational power, capacity for learning, convergence abilities, and use for generalization. Examples are given of ANN applications in various business domains like marketing, sales forecasting, finance, insurance, and telecommunications. Risks of ANNs discussed include needing a large and diverse training set, overfitting data, and high hardware resource requirements. A hybrid symbolic-neural network approach is also mentioned.
Artificial Neural Network seminar presentation using ppt. (Mohd Faiz)
- Artificial neural networks are inspired by biological neural networks and learning processes. They attempt to mimic the workings of the brain using simple units called artificial neurons that are connected in networks.
- Learning in neural networks involves modifying the synaptic strengths between neurons through mathematical optimization techniques. The goal is to minimize an error function that measures how well the network can approximate or complete a task.
- Neural networks can learn complex nonlinear functions through training algorithms like backpropagation that determine how to adjust the synaptic weights to improve performance on the learning task.
1. Neural networks are inspired by the human brain and are able to perform complex tasks like pattern recognition much faster than conventional computers. They learn by adjusting the strengths of connections between neurons.
2. The document discusses different types of neural network architectures including single-layer feedforward networks, multilayer feedforward networks, and recurrent networks. Multilayer feedforward networks are commonly used and can be trained with backpropagation.
3. Neural networks operate by receiving inputs, performing computations through interconnected nodes that emulate neurons, and producing outputs. Learning involves modifying the weights between nodes to optimize performance on tasks.
1) Artificial neural networks (ANNs) are processing systems inspired by biological neural networks, consisting of interconnected nodes that process information via algorithms or hardware components. ANNs can accurately model functions like visual processing in the retina.
2) ANNs are useful for problems like facial recognition that are difficult to solve with algorithms due to their ability to learn from examples in a way similar to the human brain.
3) ANNs have many applications, including pattern recognition, modeling complex relationships in large datasets, and real-time systems due to their parallel architecture.
Crude Oil Price Prediction Based on Soft Computing Model: Case Study of Iraq (Kiogyf)
This paper proposes using a multi-layer perceptron neural network (MLP-NN) soft computing model to accurately predict future crude oil prices in Iraq. The performance of the MLP-NN model is compared to other neural network approaches and found to perform better, especially with limited training data and high parameter variability. The paper describes the MLP-NN model and its training process using a dataset of Iraqi crude oil prices from 1990 to 2018. Features like mutual information analysis and data normalization are used as part of the model building process.
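The data normalization mentioned above is commonly min-max scaling of each series into [0, 1] before training. A sketch with made-up price values (the paper's actual preprocessing may differ):

```python
# Min-max normalization sketch: rescale each value into [0, 1].
# The prices below are made-up placeholders, not the Iraqi crude oil dataset.
prices = [18.4, 25.1, 61.0, 99.7, 52.3]

lo, hi = min(prices), max(prices)
normalized = [(p - lo) / (hi - lo) for p in prices]

print(normalized)
```

Scaling this way keeps all inputs within the sensitive range of sigmoid-like activations; the min and max must be saved so new inputs and predictions can be scaled the same way.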
Neural network and artificial intelligent (HapPy SumOn)
This document discusses neural networks and artificial intelligence. It defines artificial intelligence as machines programmed to think like humans, and neural networks as computational models inspired by the human brain. The document explains that neural networks are used in artificial intelligence to help machines solve complex problems. It then provides details on the basic structure and learning mechanisms of neural networks, describing how networks are composed of interconnected neurons that can learn from examples to perform tasks like pattern recognition.
This document contains 40 questions about soft computing concepts including neural networks, fuzzy systems, evolutionary computation, and hybrid intelligent systems. The questions cover topics such as the differences between hard and soft computing, components of expert systems, applications of artificial neural networks, types of learning in neural networks, perceptrons, adaptive linear neurons, backpropagation networks, and training algorithms for various neural network architectures.
This document provides an introduction to artificial neural networks. It discusses biological neurons and how artificial neurons are modeled. The key components of a neural network including the network architecture, learning approaches, and the backpropagation algorithm for supervised learning are described. Applications and advantages of neural networks are also mentioned. Neural networks are modeled after the human brain and learn by modifying connection weights between nodes based on examples.
This document summarizes research on neural networks. It discusses the basic structure and components of neural networks, including network topology (feedforward and recurrent), transfer functions, and learning algorithms (supervised, unsupervised, reinforcement). It also overviews popular neural network models such as multilayer perceptrons, radial basis function networks, Kohonen's self-organizing maps, and Hopfield networks. Finally, it outlines some applications of neural networks such as process control, pattern recognition, and more.
An artificial neural network (ANN) is a computational model inspired by the human brain that can learn from large amounts of data to detect patterns and relationships. ANNs are formed from hundreds of artificial neurons connected by coefficients that are organized in layers. The power of ANNs comes from connecting neurons, with each neuron consisting of a weighted input, transfer function, and single output. ANNs learn by adjusting the weights between neurons to minimize error and reach a specified level of accuracy when trained on data. Once trained, ANNs can be used to make predictions on new input data.
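The neuron anatomy described above, a weighted input, a transfer function, and a single output, can be sketched directly; the weights, inputs, and bias below are illustrative:

```python
# Single artificial neuron sketch: weighted inputs -> transfer function -> output.
# All numeric values are illustrative assumptions.
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus bias...
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    # ...passed through a sigmoid transfer function, giving a single output.
    return 1.0 / (1.0 + math.exp(-z))

out = neuron([0.5, -1.0, 2.0], [0.8, 0.2, 0.4], bias=0.1)
print(out)
```

The "coefficients organized in layers" in the summary are exactly these per-connection weights; training adjusts them while the transfer function stays fixed.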
Neural networks are algorithms that mimic the human brain in recognizing patterns in vast amounts of data. They can adapt to new inputs without redesign. Neural networks can be biological, composed of real neurons, or artificial, for solving AI problems. Artificial neural networks consist of processing units like neurons that learn from inputs to produce outputs. They are used for applications like classification, pattern recognition, optimization, and more.
This document discusses different types of artificial neural network topologies. It describes feedforward neural networks, including single layer and multilayer feedforward networks. It also describes recurrent neural networks, which differ from feedforward networks in having at least one feedback loop. Single layer networks have an input and output layer, while multilayer networks have one or more hidden layers between the input and output layers. Recurrent networks can learn temporal patterns due to their internal memory capabilities.
This document provides an overview of artificial neural networks. It describes the biological neuron model that inspired artificial networks, with dendrites receiving inputs, the soma processing them, the axon transmitting outputs, and synapses connecting neurons. An artificial neuron model is presented that uses weighted inputs, a summation function, and an activation function to generate outputs. The document discusses unsupervised and supervised learning methods, and lists applications such as character recognition, stock prediction, and medicine. Advantages include human-like thinking and handling noisy data, while disadvantages include the need for training and high processing times.
This document provides an overview of artificial neural networks (ANNs). It discusses ANN basics such as their structure being inspired by biological neural networks in the brain. The document covers different types of ANNs including feedforward and feedback networks. It also discusses ANN properties like learning strategies, applications, advantages like handling noisy data, and disadvantages like requiring training. The conclusion states that ANNs are flexible and suited for real-time systems due to their parallel architecture.
Neural networks can be biological models of the brain or artificial models created through software and hardware. The human brain consists of interconnected neurons that transmit signals through connections called synapses. Artificial neural networks aim to mimic this structure using simple processing units called nodes that are connected by weighted links. A feed-forward neural network passes information in one direction from input to output nodes through hidden layers. Backpropagation is a common supervised learning method that uses gradient descent to minimize error by calculating error terms and adjusting weights between layers in the network backwards from output to input. Neural networks have been applied successfully to problems like speech recognition, character recognition, and autonomous vehicle navigation.
An artificial neural network is a mathematical model that maps inputs to outputs. It consists of an input layer, hidden layers, and an output layer connected by weights and biases. Activation functions determine the output of each node. Training a neural network involves adjusting the weights and biases through backpropagation to minimize a loss function and improve predictions based on the input data. Feedforward involves calculating predictions, while backpropagation calculates gradients to update weights and biases through gradient descent.
Employing Neocognitron Neural Network Base Ensemble Classifiers To Enhance Ef... (cscpconf)
This paper presents an ensemble of neocognitron neural network base classifiers to enhance the accuracy of the system, along with experimental results. The method requires less computational preprocessing than other ensemble techniques because it skips the feature extraction step before feeding data into the base classifiers; this is possible because the neocognitron is, by its nature, a multilayer feed-forward neural network. Each base classifier in the ensemble assigns a class label to each pattern, and these labels are then combined to give the final class label for that pattern. The purpose of this paper is not only to exemplify the learning behaviour of the neocognitron as a base classifier, but also to propose a better way to combine neural-network-based ensemble classifiers.
Pattern recognition system based on support vector machines (Alexander Decker)
This document describes a study that uses support vector machines (SVM) to develop quantitative structure-activity relationship (QSAR) models for predicting the anti-HIV activity of 1,3,4-oxadiazole substituted naphthyridine derivatives based on their molecular descriptors. The SVM model achieved a cross-validation R2 value of 0.90 and RMSE of 0.145, outperforming artificial neural network and multiple linear regression models. An external validation on an independent test set found the SVM model had an R value of 0.96 and RMSE of 0.166, demonstrating good predictive ability.
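The RMSE and R² figures quoted above are standard regression metrics and can be computed for any prediction set. A sketch with made-up values (not the paper's data; note the paper's external "R value" is a correlation coefficient, while R² below is the coefficient of determination):

```python
# RMSE and coefficient of determination (R^2) sketch.
# The actual/predicted values are made-up placeholders, not the QSAR data.
import math

def rmse(actual, predicted):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

def r_squared(actual, predicted):
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

actual    = [1.0, 2.0, 3.0, 4.0]
predicted = [1.1, 1.9, 3.2, 3.8]

print(rmse(actual, predicted), r_squared(actual, predicted))
```

Comparing models on the same held-out test set with these two numbers, as the study does for SVM versus ANN and multiple linear regression, is what makes the "outperforming" claim checkable.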
This document introduces artificial neural networks and their relationship to biological neural networks. It discusses the basic components and functioning of artificial neural networks, including nodes, links, weights, and learning. Different network architectures are described, including single layer feedforward networks and multilayer feedforward networks. Supervised, unsupervised, and reinforced learning methods are also summarized. Applications of artificial neural networks include areas like airline security, investment management, and sales forecasting.
Open CV Implementation of Object Recognition Using Artificial Neural Networks (ijceronline)
The document describes an approach to using back propagation neural networks to detect sub-lines of various orientations from preprocessed images. The approach breaks images into 8x8 pixel sub-images and trains 8 neural networks to classify each sub-image as belonging to one of 8 predefined line categories. The networks are trained on sample sub-images that are labeled with the correct category. The goal is to detect sub-lines that could help with robot navigation by further processing the network outputs.
Modeling of neural image compression using gradient decent technology (theijes)
Artificial Neural Network seminar presentation using ppt.Mohd Faiz
- Artificial neural networks are inspired by biological neural networks and learning processes. They attempt to mimic the workings of the brain using simple units called artificial neurons that are connected in networks.
- Learning in neural networks involves modifying the synaptic strengths between neurons through mathematical optimization techniques. The goal is to minimize an error function that measures how well the network can approximate or complete a task.
- Neural networks can learn complex nonlinear functions through training algorithms like backpropagation that determine how to adjust the synaptic weights to improve performance on the learning task.
1. Neural networks are inspired by the human brain and are able to perform complex tasks like pattern recognition much faster than conventional computers. They learn by adjusting the strengths of connections between neurons.
2. The document discusses different types of neural network architectures including single-layer feedforward networks, multilayer feedforward networks, and recurrent networks. Multilayer feedforward networks are commonly used and can be trained with backpropagation.
3. Neural networks operate by receiving inputs, performing computations through interconnected nodes that emulate neurons, and producing outputs. Learning involves modifying the weights between nodes to optimize performance on tasks.
1) Artificial neural networks (ANNs) are processing systems inspired by biological neural networks, consisting of interconnected nodes that process information via algorithms or hardware components. ANNs can accurately model functions like visual processing in the retina.
2) ANNs are useful for problems like facial recognition that are difficult to solve with algorithms due to their ability to learn from examples in a way similar to the human brain.
3) ANNs have many applications, including pattern recognition, modeling complex relationships in large datasets, and real-time systems due to their parallel architecture.
Crude Oil Price Prediction Based on Soft Computing Model: Case Study of IraqKiogyf
This paper proposes using a multi-layer perceptron neural network (MLP-NN) soft computing model to accurately predict future crude oil prices in Iraq. The performance of the MLP-NN model is compared to other neural network approaches and found to perform better, especially with limited training data and high parameter variability. The paper describes the MLP-NN model and its training process using a dataset of Iraqi crude oil prices from 1990 to 2018. Features like mutual information analysis and data normalization are used as part of the model building process.
Neural network and artificial intelligentHapPy SumOn
This document discusses neural networks and artificial intelligence. It defines artificial intelligence as machines programmed to think like humans, and neural networks as computational models inspired by the human brain. The document explains that neural networks are used in artificial intelligence to help machines solve complex problems. It then provides details on the basic structure and learning mechanisms of neural networks, describing how networks are composed of interconnected neurons that can learn from examples to perform tasks like pattern recognition.
This document contains 40 questions about soft computing concepts including neural networks, fuzzy systems, evolutionary computation, and hybrid intelligent systems. The questions cover topics such as the differences between hard and soft computing, components of expert systems, applications of artificial neural networks, types of learning in neural networks, perceptrons, adaptive linear neurons, backpropagation networks, and training algorithms for various neural network architectures.
This document provides an introduction to artificial neural networks. It discusses biological neurons and how artificial neurons are modeled. The key components of a neural network including the network architecture, learning approaches, and the backpropagation algorithm for supervised learning are described. Applications and advantages of neural networks are also mentioned. Neural networks are modeled after the human brain and learn by modifying connection weights between nodes based on examples.
This document summarizes research on neural networks. It discusses the basic structure and components of neural networks, including network topology (feed forward and recurrent), transfer functions, and learning algorithms (supervised, unsupervised, reinforcement). It also overview popular neural network models like multilayer perceptrons, radial basis function networks, Kohonen's self-organizing maps, and Hopfield networks. Finally, it outlines some applications of neural networks such as process control, pattern recognition, and more.
An artificial neural network (ANN) is a computational model inspired by the human brain that can learn from large amounts of data to detect patterns and relationships. ANNs are formed from hundreds of artificial neurons connected by coefficients that are organized in layers. The power of ANNs comes from connecting neurons, with each neuron consisting of a weighted input, transfer function, and single output. ANNs learn by adjusting the weights between neurons to minimize error and reach a specified level of accuracy when trained on data. Once trained, ANNs can be used to make predictions on new input data.
Neural networks are algorithms that mimic the human brain in recognizing patterns in vast amounts of data. They can adapt to new inputs without redesign. Neural networks can be biological, composed of real neurons, or artificial, for solving AI problems. Artificial neural networks consist of processing units like neurons that learn from inputs to produce outputs. They are used for applications like classification, pattern recognition, optimization, and more.
This document discusses different types of artificial neural network topologies. It describes feedforward neural networks, including single layer and multilayer feedforward networks. It also describes recurrent neural networks, which differ from feedforward networks in having at least one feedback loop. Single layer networks have an input and output layer, while multilayer networks have one or more hidden layers between the input and output layers. Recurrent networks can learn temporal patterns due to their internal memory capabilities.
This document provides an overview of artificial neural networks. It describes the biological neuron model that inspired artificial networks, with dendrites receiving inputs, the soma processing them, the axon transmitting outputs, and synapses connecting neurons. An artificial neuron model is presented that uses weighted inputs, a summation function, and an activation function to generate outputs. The document discusses unsupervised and supervised learning methods, and lists applications such as character recognition, stock prediction, and medicine. Advantages include human-like thinking and handling noisy data, while disadvantages include the need for training and high processing times.
This document provides an overview of artificial neural networks (ANNs). It discusses ANN basics such as their structure being inspired by biological neural networks in the brain. The document covers different types of ANNs including feedforward and feedback networks. It also discusses ANN properties like learning strategies, applications, advantages like handling noisy data, and disadvantages like requiring training. The conclusion states that ANNs are flexible and suited for real-time systems due to their parallel architecture.
Neural networks can be biological models of the brain or artificial models created through software and hardware. The human brain consists of interconnected neurons that transmit signals through connections called synapses. Artificial neural networks aim to mimic this structure using simple processing units called nodes that are connected by weighted links. A feed-forward neural network passes information in one direction from input to output nodes through hidden layers. Backpropagation is a common supervised learning method that uses gradient descent to minimize error by calculating error terms and adjusting weights between layers in the network backwards from output to input. Neural networks have been applied successfully to problems like speech recognition, character recognition, and autonomous vehicle navigation.
An artificial neural network is a mathematical model that maps inputs to outputs. It consists of an input layer, hidden layers, and an output layer connected by weights and biases. Activation functions determine the output of each node. Training a neural network involves adjusting the weights and biases through backpropagation to minimize a loss function and improve predictions based on the input data. Feedforward involves calculating predictions, while backpropagation calculates gradients to update weights and biases through gradient descent.
Employing Neocognitron Neural Network Base Ensemble Classifiers To Enhance Ef...cscpconf
This paper presents an ensemble of neocognitron neural network base classifiers to enhance the accuracy of the system, along with experimental results. The method requires less computational preprocessing than other ensemble techniques because it pre-empts the feature extraction process before feeding the data into the base classifiers. This is possible because of the basic nature of the neocognitron, which is a multilayer feed-forward neural network. The ensemble of base classifiers yields a class label for each pattern, and these labels are combined to give the final class label for that pattern. The purpose of the paper is not only to exemplify the learning behaviour of neocognitrons as base classifiers, but also to propose a better way to combine neural-network-based ensemble classifiers.
Pattern recognition system based on support vector machinesAlexander Decker
This document describes a study that uses support vector machines (SVM) to develop quantitative structure-activity relationship (QSAR) models for predicting the anti-HIV activity of 1,3,4-oxadiazole substituted naphthyridine derivatives based on their molecular descriptors. The SVM model achieved a cross-validation R2 value of 0.90 and RMSE of 0.145, outperforming artificial neural network and multiple linear regression models. An external validation on an independent test set found the SVM model had an R value of 0.96 and RMSE of 0.166, demonstrating good predictive ability.
This document introduces artificial neural networks and their relationship to biological neural networks. It discusses the basic components and functioning of artificial neural networks, including nodes, links, weights, and learning. Different network architectures are described, including single layer feedforward networks and multilayer feedforward networks. Supervised, unsupervised, and reinforced learning methods are also summarized. Applications of artificial neural networks include areas like airline security, investment management, and sales forecasting.
Open CV Implementation of Object Recognition Using Artificial Neural Networksijceronline
The document describes an approach to using back propagation neural networks to detect sub-lines of various orientations from preprocessed images. The approach breaks images into 8x8 pixel sub-images and trains 8 neural networks to classify each sub-image as belonging to one of 8 predefined line categories. The networks are trained on sample sub-images that are labeled with the correct category. The goal is to detect sub-lines that could help with robot navigation by further processing the network outputs.
Modeling of neural image compression using gradient decent technologytheijes
This document provides an overview of applications of fuzzy logic in neural networks. It discusses fuzzy neurons as a combination of fuzzy logic and neural networks where the neuron's activation function is replaced with a fuzzy logic operation. Different types of fuzzy neurons are described, including OR, AND, and OR/AND fuzzy neurons. Supervised learning in fuzzy neural networks is also covered. The document concludes with advantages of fuzzy logic systems over traditional neural networks, such as the ability of fuzzy systems to systematically include linguistic knowledge.
Diagnosis Chest Diseases Using Neural Network and Genetic Hybrid AlgorithmIJERA Editor
The back propagation algorithm is the most popular algorithm for multi-layer feed-forward neural networks. It measures the output error, calculates the gradient of that error, and adjusts the ANN weights by moving along the descending gradient direction. Back propagation is used to learn and store the mapping relations of an input-output model. A genetic algorithm involves a random probability distribution or pattern that may be analysed statistically but cannot be predicted precisely; it is an iterative procedure that generates a new population of individuals from the old one. This paper proposes implementing the back propagation algorithm and a genetic algorithm and comparing their output accuracy for medical diagnosis of various chest diseases (asthma, tuberculosis, lung cancer, pneumonia).
Artificial Neural Networks (ANNS) For Prediction of California Bearing Ratio ...IJMER
The behaviour of soil at the project location and the interactions of earth materials during and after construction have a major influence on the success, economy, and safety of the work. A further complexity with some geotechnical materials, such as sand and gravel, is the difficulty of obtaining undisturbed samples, which is time-consuming and requires skilled technicians. Knowledge of the California Bearing Ratio (CBR) is essential for determining road thickness. To cope with these difficulties, an attempt has been made to model CBR in terms of fine fraction, liquid limit, plasticity index, maximum dry density, and optimum moisture content. A multi-layer perceptron network with feed-forward back propagation is used, varying the number of hidden layers. For this purpose, 50 soil test records were collected from laboratory results; 30 were used for training and the remaining 20 for testing, a 60-40 split. The architectures developed are 5-4-1, 5-5-1, and 5-6-1. The model with the 5-6-1 architecture is found to be quite satisfactory in predicting the CBR of soils. A graph of predicted against observed values for the training and testing processes shows that all points lie close to the equality line, indicating that predicted values are close to observed values.
This document summarizes the basics of neural networks and provides an example of fitting a neural network model in R. It explains that a neural network uses an activation function to transform input into output using interconnected processing units. A multilayer neural network can solve non-linear problems by passing information through an input, hidden and output layer connected by weighted connections. The document then demonstrates how to fit a neural network in R to predict cereal ratings using variables like calories and fiber, by first creating training and test datasets and then scaling the data before fitting the model.
Neural network based numerical digits recognization using nnt in matlabijcses
Artificial neural networks are models inspired by the human nervous system that are capable of learning. One important application of artificial neural networks is character recognition, which finds use in areas such as banking, security products, hospitals, and robotics. This paper describes a system that recognizes an English numeral given by the user, having already been trained on the features of the numbers to be recognized using the Neural Network Toolbox (NNT). The system has a neural network at its core, which is first trained on a database; the training extracts the features of the English numbers and stores them in the database. The next phase is to recognize the number given by the user: its features are extracted, compared with the feature database, and the recognized number is displayed.
This document discusses characterizing polymeric membranes under large deformations using an artificial neural network model. It presents an experimental study of blowing circular thermoplastic ABS membranes using free blowing technique. A multilayer neural network is used to model the non-linear behavior of the membrane under biaxial deformation. The neural network results are compared to experimental data and a finite difference model using a hyperelastic Mooney-Rivlin model. The neural network accurately reproduces the membrane behavior with minimal error margins compared to experimental measurements.
A NEW TECHNIQUE INVOLVING DATA MINING IN PROTEIN SEQUENCE CLASSIFICATIONcscpconf
Feature selection is a more accurate technique in protein sequence classification. Researchers apply well-known classification techniques such as neural networks, genetic algorithms, Fuzzy ARTMAP, and rough set classifiers for extracting features. This paper presents a review of three classification models — the Fuzzy ARTMAP model, the neural network model, and the rough set classifier model — followed by a new technique for classifying protein sequences. The proposed model is implemented with a purpose-built tool in Java and aims to show that it reduces the computational overheads encountered by earlier approaches while also increasing classification accuracy.
This document describes a research project aimed at creating an application that can predict whether chemical compounds will be able to pass through the blood-brain barrier. It discusses using a hybrid approach of artificial neural networks and genetic algorithms. The neural networks are trained on a dataset of chemical properties paired with blood-brain barrier permeability labels. The genetic algorithm is used to optimize parameters of the neural networks like the number of hidden nodes, learning rate, and momentum. The results section indicates the performance of the basic neural network and hybrid genetic algorithm neural network approaches were evaluated using statistical measures of sensitivity and specificity based on their predictions on the test dataset.
2. NEURAL NETWORKS USING GENETIC ALGORITHMS.pptxssuser67281d
This document discusses using genetic algorithms to train neural networks. It begins by defining evolutionary artificial neural networks as combining neural networks with genetic algorithms. Genetic algorithms can be used to choose neural network structures and properties like neuron functions. The document then provides background on neural networks and genetic algorithms. It describes how genetic algorithms use selection, crossover and mutation to optimize solutions over generations. The document proposes using a genetic algorithm to train neural network weights and applies this approach to the traveling salesman problem. It concludes that while these techniques are powerful, they also have limitations as "black boxes" that require pre-processing of inputs.
This document proposes a new method for extracting rules from trained multilayer artificial neural networks that can represent rules in both "if-then" and "M of N" formats. The method extracts an intermediate structure called a "generator list" from which both types of rules can be derived. This provides a more generic representation than existing methods that can only output one rule format. The generator list approach avoids preprocessing steps used in other methods that can modify the original network. It uses heuristics to prune the search space when extracting the generator list to address the computational complexity involved.
This document provides instructions for three exercises using artificial neural networks (ANNs) in Matlab: function fitting, pattern recognition, and clustering. It begins with background on ANNs including their structure, learning rules, training process, and common architectures. The exercises then guide using ANNs in Matlab for regression to predict house prices from data, classification of tumors as benign or malignant, and clustering of data. Instructions include loading data, creating and training networks, and evaluating results using both the GUI and command line. Improving results through retraining or adding neurons is also discussed.
Implementation Of Back-Propagation Neural Network For Isolated Bangla Speech ...ijistjournal
This paper is concerned with the development of Back-propagation Neural Network for Bangla Speech Recognition. In this paper, ten bangla digits were recorded from ten speakers and have been recognized. The features of these speech digits were extracted by the method of Mel Frequency Cepstral Coefficient (MFCC) analysis. The mfcc features of five speakers were used to train the network with Back propagation algorithm. The mfcc features of ten bangla digit speeches, from 0 to 9, of another five speakers were used to test the system. All the methods and algorithms used in this research were implemented using the features of Turbo C and C++ languages. From our investigation it is seen that the developed system can successfully encode and analyze the mfcc features of the speech signal to recognition. The developed system achieved recognition rate about 96.332% for known speakers (i.e., speaker dependent) and 92% for unknown speakers (i.e., speaker independent).
Implementation Of Back-Propagation Neural Network For Isolated Bangla Speech ...ijistjournal
This document describes the implementation of a back-propagation neural network for isolated Bangla speech recognition. The network was trained on Mel Frequency Cepstral Coefficient (MFCC) features extracted from recordings of 10 Bangla digits spoken by 10 speakers. The network architecture included an input layer of 250 neurons, a hidden layer of 16 neurons, and an output layer of 10 neurons. The network was trained using backpropagation and achieved a recognition rate of 96.3% for known speakers and 92% for unknown speakers. The system demonstrates the potential for developing speaker-independent isolated digit speech recognition in Bangla.
A Parallel Framework For Multilayer Perceptron For Human Face RecognitionCSCJournals
Artificial neural networks have already shown their success in face recognition and similar complex pattern recognition tasks. However, a major disadvantage of the technique is that it is extremely slow during training for larger classes and hence not suitable for real-time complex problems such as pattern recognition. This is an attempt to develop a parallel framework for the training algorithm of a perceptron. In this paper, two general architectures for a Multilayer Perceptron (MLP) have been demonstrated. The first architecture is All-Class-in-One-Network (ACON) where all the classes are placed in a single network and the second one is One-Class-in-One-Network (OCON) where an individual single network is responsible for each and every class. Capabilities of these two architectures were compared and verified in solving human face recognition, which is a complex pattern recognition task where several factors affect the recognition performance like pose variations, facial expression changes, occlusions, and most importantly illumination changes. Experimental results show that the proposed OCON structure performs better than the conventional ACON in terms of network training convergence speed and which can be easily exercised in a parallel environment.
PADDY CROP DISEASE DETECTION USING SVM AND CNN ALGORITHMIRJET Journal
- The document discusses a study on detecting diseases in paddy/rice crops using deep learning algorithms like convolutional neural networks (CNN) and support vector machines (SVM).
- A dataset of rice leaf images was created and a CNN model using transfer learning with MobileNet was developed and trained on the dataset to classify rice diseases.
- The proposed method aims to automatically classify rice disease images to help farmers more accurately identify diseases, as manual identification can be difficult and inaccurate. This could help improve treatment and support farmers.
IRJET-Breast Cancer Detection using Convolution Neural NetworkIRJET Journal
This document discusses using a convolutional neural network (CNN) to detect breast cancer from medical images. CNNs are a type of deep learning model that can learn image features without manual feature engineering. The proposed system would take a sample medical image as input, preprocess it, and compare it to images in a database labeled as cancerous or non-cancerous. If cancer is detected, the system would determine the cancer stage and recommend appropriate treatment. The CNN model would be built and trained using libraries like Keras, TensorFlow, and Numpy to classify images and detect breast cancer at early stages for better treatment outcomes.
Artificial Neural Network: A brief studyIRJET Journal
This document provides an overview of artificial neural networks (ANN), including:
- ANNs are computational models inspired by the human brain that are designed to analyze and draw conclusions from experiences. They contain interconnected nodes that work together to solve problems.
- The key components of an ANN include an input layer, one or more hidden layers, and an output layer. Data is fed into the input layer and passes through the hidden layers before emerging as output.
- ANNs can be trained to learn from large datasets using supervised, unsupervised, or reinforcement learning techniques. The weights between nodes are adjusted during training to minimize error between the network's predictions and correct outputs.
- Once trained, ANNs can
This document describes a study that uses artificial neural networks to predict the cellular localization sites of proteins in yeast. It introduces the problem, provides background on neural networks and the yeast protein data set. It then outlines the proposed stages of work, including simulating the network, implementing the backpropagation algorithm, training the network, and obtaining results. The results section analyzes the yeast data set attributes and class statistics, compares the accuracy of the proposed method to other algorithms, and examines how accuracy varies with network parameters like number of iterations and hidden nodes. The conclusion is that the method achieves a higher accuracy than others and performance stabilizes after a certain number of iterations and hidden nodes.
This document provides an overview and summary of a student project report on simulating a feed forward artificial neural network in C++. The report includes an abstract, table of contents, list of figures, and 5 chapters that discuss the objectives of the project, provide background on artificial neural networks, describe the design and implementation of a 3-layer feed forward neural network using backpropagation, present the results, and provide references. The design section explains the backpropagation algorithm and provides pseudocode for calculating outputs at each layer. The implementation section provides pseudocode for training patterns and minimizing error.
This document discusses various data mining techniques, including artificial neural networks. It provides an overview of the knowledge discovery in databases process and the cross-industry standard process for data mining. It also describes techniques such as classification, clustering, regression, association rules, and neural networks. Specifically, it discusses how neural networks are inspired by biological neural networks and can be used to model complex relationships in data.
This document provides an overview of artificial neural networks and their application in data mining techniques. It discusses neural networks as a tool that can be used for data mining, though some practitioners are wary of them due to their opaque nature. The document also outlines the data mining process and some common data mining techniques like classification, clustering, regression, and association rule mining. It notes that neural networks, as a predictive modeling technique, can be useful for problems like classification and prediction.
The document summarizes a student's project report on developing a tool to calculate indicators that characterize spatial networks. It includes:
1) An overview of the project which involved designing a program to calculate indicators for spatial networks based on a research paper and feedback from supervisors.
2) Details on the motivation, proposed structure, selected indicators to implement (degree, displacement, route factor, binary tree, Strahler index, asymmetry factor) and development of the program code.
3) How the program takes spatial network graph data and text files as input, calculates the selected indicators, and outputs the results to text files after processing and debugging.
This document contains the academic and professional qualifications, experience, positions of responsibility, and achievements of Vaibhav Dhattarwal. He holds a PGDM from IIM Rohtak with specializations in IT. He has work experience as a Performance Engineer at Cognizant Technologies and has completed internships in France and India. He held leadership roles as Member of the IT Committee at IIM Rohtak and as Coordinator for a gaming competition. He has received several academic honors and awards.
PREDICTING THE CELLULAR
LOCALIZATION SITES OF PROTEINS
USING ARTIFICIAL NEURAL
NETWORKS
Vaibhav Dhattarwal
Department of Computer Science and Engineering
Indian Institute of Technology Roorkee
vaibhav.csi.iitr@gmail.com
Abstract - In this paper, I present a brief description of how a feed-forward artificial neural network was implemented in C++. In the introduction, I explain that the reason for implementing this artificial neural network was to predict the cellular localization sites of proteins, specifically in a yeast data set. This is followed by a concise explanation of the design and implementation of a three-layer feed-forward neural network trained with the back propagation algorithm, along with the attributes of the data set and the possible output locations in the protein. A step-by-step breakdown of how I approached the project follows. The implementation of the network is explained, together with how the algorithm is executed within the code. Finally, we see the results as the parameters associated with the implemented artificial neural network are varied.
Keywords - Prediction, Localization Sites, Proteins, Simulation, Neural Networks
I. INTRODUCTION
Let me start with the reason for choosing this topic. The topic chosen, prediction of cellular localization of proteins, describes the information represented by the data set on which this paper is based. I implement an artificial neural network based on the back propagation algorithm. To evaluate the performance of the simulated artificial neural network, I needed a data set to train and test the ANN. Consider the significance of the data set chosen: if one is able to deduce the subcellular location of a protein, one can interpret its function, its part in healthy processes as well as in the onset of disease, and its probable usage as a drug target. Experimental methods for ascertaining the subcellular location of a protein have advantages such as reliability and accuracy, along with disadvantages such as being slow and labour-intensive. Compared with such methods, high-throughput computational prediction tools enable us to deduce information that is otherwise difficult to attain. For example, for proteins whose composition is determined from a genomic sequence, computational methods are preferable because such proteins may be hard to isolate, produce, or locate in an experiment.
The subcellular location of a protein can provide valuable information about its role in cellular dynamics. The unprecedented surge in the amount of sequenced genomic data available makes a computerized, high-accuracy tool for predicting subcellular location increasingly important. There have been many efforts to predict protein subcellular location correctly. This paper aims to bring together artificial neural networks and the field of bioinformatics to predict the location of proteins in the yeast genome. I introduce a new subcellular prediction method based on a back propagation neural network.
The problem statement is: "Prediction of cellular localization sites of proteins using artificial neural networks."
The task of this paper lies first in simulating a three-layer artificial neural network. The back propagation algorithm is used to train the network: first the algorithm is explained, and then the implementation section shows how it is realized in the code used to simulate the artificial neural network. After this, we examine the observations recorded by executing the yeast data set on the simulated network to train it, and use these observations to identify trends and evaluate performance.
II. PROPOSED METHODOLOGY
A. Simulate an artificial neural network
corresponding to the attributes of the yeast data
set.
To enlarge the function space that the neural network can represent, we implement a three-layer feed-forward network, which involves one layer of hidden nodes. With a sufficiently large number of nodes in the middle layer, we can represent almost any continuous function with acceptable levels of accuracy.
Figure 1: Structure of a three-layer feed-forward neural network.
The definitions of the input and output nodes are similar to those of the perceptron network discussed earlier. The major difference is that we add a single layer of hidden nodes between the input and output nodes.
Similarly, we also use the ratio of correctly classified examples in the training set as the threshold for the termination condition. The major difference from the algorithm for training a two-layer feed-forward neural network is that when we update the weights of the hidden layer, we must back-propagate the error from the output layer to the hidden layer.
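The termination condition described above can be sketched as follows (a minimal illustrative sketch, not the paper's actual code; the predicted class is assumed to be the output node with the highest activation):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Fraction of training examples whose predicted class (the index of the
// largest output activation) matches the true class index. Training can
// stop once this ratio crosses a chosen threshold.
double successRate(const std::vector<std::vector<double>>& outputs,
                   const std::vector<int>& trueClass) {
    if (outputs.empty()) return 0.0;
    int correct = 0;
    for (std::size_t e = 0; e < outputs.size(); ++e) {
        int pred = (int)(std::max_element(outputs[e].begin(), outputs[e].end())
                         - outputs[e].begin());
        if (pred == trueClass[e]) ++correct;
    }
    return (double)correct / (double)outputs.size();
}
```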
B. Implement the back propagation algorithm on
the simulated artificial neural network.
The algorithm for our three-layer network:
a. Initialize the weights of the network.
b. Perform the following operation:
1. For every example in the training set:
- Compute the output of the neural network for this example, denoted O (forward pass).
- Let T denote the teaching output for this example.
- The error is given by (T - O).
- Calculate ΔWHO for all weights between the hidden and output layer.
- Move backwards through the network (backward pass).
- Calculate ΔWIH for all weights between the input and hidden layer.
- Update the weights of the network using the calculated delta values.
c. Stop when the error criterion is met.
d. Return the trained network.
The learning algorithm that we have chosen for our network is the Backpropagation Algorithm. It can be divided into two stages:
Stage One: Propagation Phase
This phase consists of the following operations:
1. First, we propagate the training pattern's input data forward through the network.
2. Second, we propagate the result of the first step backward through the network: the output activations are compared with the training pattern's desired target data to produce the error deltas.
Stage Two: Weight Updating Phase
In this stage, the following operations are carried out for every connection possessing a weight:
1. First, we multiply the output delta by the input activation to calculate the gradient of the weight.
2. Second, we subtract a fraction of the gradient from the weight. This moves the weight in the direction opposite to the gradient.
We keep repeating stages one and two until the network performs with an acceptable success rate.
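The two phases above can be sketched in C++ for a single training example (a minimal illustrative sketch, not the paper's actual code: the sigmoid activation, the bias weights at index 0, and the learning rate β follow the pseudocode given later in the implementation section, while the momentum term α is omitted for brevity):

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>
#include <vector>

static double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

// One training pass over a single example: forward pass, backward pass,
// and weight update. wIH is [nIn+1][nHid], wHO is [nHid+1][nOut]; row 0
// of each matrix holds the bias weights. Returns the squared error.
double trainExample(const std::vector<double>& in,
                    const std::vector<double>& target,
                    std::vector<std::vector<double>>& wIH,
                    std::vector<std::vector<double>>& wHO,
                    double beta) {
    const int nIn = (int)in.size();
    const int nHid = (int)wIH[0].size();
    const int nOut = (int)wHO[0].size();

    // Forward pass: input -> hidden -> output.
    std::vector<double> hid(nHid), out(nOut);
    for (int j = 0; j < nHid; ++j) {
        double s = wIH[0][j];  // bias
        for (int i = 0; i < nIn; ++i) s += in[i] * wIH[i + 1][j];
        hid[j] = sigmoid(s);
    }
    for (int k = 0; k < nOut; ++k) {
        double s = wHO[0][k];  // bias
        for (int j = 0; j < nHid; ++j) s += hid[j] * wHO[j + 1][k];
        out[k] = sigmoid(s);
    }

    // Backward pass: output deltas, then back-propagated hidden deltas.
    double err = 0.0;
    std::vector<double> dOut(nOut), dHid(nHid);
    for (int k = 0; k < nOut; ++k) {
        double e = target[k] - out[k];
        err += 0.5 * e * e;
        dOut[k] = e * out[k] * (1.0 - out[k]);  // sigmoid derivative
    }
    for (int j = 0; j < nHid; ++j) {
        double s = 0.0;
        for (int k = 0; k < nOut; ++k) s += wHO[j + 1][k] * dOut[k];
        dHid[j] = s * hid[j] * (1.0 - hid[j]);
    }

    // Weight update: move each weight against its error gradient,
    // i.e. add beta * delta * activation.
    for (int k = 0; k < nOut; ++k) {
        wHO[0][k] += beta * dOut[k];
        for (int j = 0; j < nHid; ++j) wHO[j + 1][k] += beta * hid[j] * dOut[k];
    }
    for (int j = 0; j < nHid; ++j) {
        wIH[0][j] += beta * dHid[j];
        for (int i = 0; i < nIn; ++i) wIH[i + 1][j] += beta * in[i] * dHid[j];
    }
    return err;
}
```

Repeatedly calling `trainExample` over all patterns in the training set implements one epoch; the total error over an epoch should fall as training proceeds.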
C. Train the network using the data set.
The yeast data set has eight attributes. These
attributes were calculated from amino acid
sequences.
1. erl: It represents the lumen of the endoplasmic reticulum in the cell. This attribute tells whether an HDEL pattern, as a signal for retention, is present or not.
2. vac: This attribute gives an indication of
the content of amino acids in vacuolar and
extracellular proteins after performing a
discriminant analysis.
3. mit: This attribute gives the composition of the twenty-residue N-terminal region of mitochondrial and non-mitochondrial proteins after performing a discriminant analysis.
4. nuc: This feature tells us about nuclear
localization patterns as to whether they are
present or not. It also holds some
information about the frequency of basic
residues.
5. pox: This attribute gives the composition of the protein sequence after a discriminant analysis, and also indicates the presence of a short sequence motif.
6. mcg: This is a parameter from McGeoch's signal sequence detection method; in this case a modified version of it is used.
7. gvh: This attribute represents a weight
matrix based procedure and is used to
detect signal sequences which are
cleavable.
8. alm: This final feature identifies membrane-spanning regions over the entire sequence.
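For illustration, one row of the data set could be held in a simple record (a hypothetical sketch, not the paper's code; the field names mirror the attribute abbreviations above, and the sample values in the test are illustrative):

```cpp
#include <cassert>
#include <string>

// One row of the yeast data set: the sequence name, the eight
// real-valued attributes described above, and the class label.
struct YeastExample {
    std::string name;  // protein sequence name
    double mcg, gvh, alm, mit, erl, pox, vac, nuc;  // the 8 attributes
    std::string site;  // localization class label, e.g. "CYT"
};
```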
The output classes for the data set are summarized below. Recall that each output class represents a localization site. The classes are:
1. CYT (cytosolic or cytoskeletal)
2. NUC (nuclear)
3. MIT (mitochondrial)
4. ME3 (membrane protein, no N-terminal signal)
5. ME2 (membrane protein, uncleaved signal)
6. ME1 (membrane protein, cleaved signal)
7. EXC (extracellular)
8. VAC (vacuolar)
9. POX (peroxisomal)
10. ERL (endoplasmic reticulum lumen)
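Since the class output represents the localization site, one common choice (an assumption here, not stated explicitly in the paper) is to encode the ten classes as a ten-node one-hot target vector for the output layer:

```cpp
#include <array>
#include <cassert>
#include <string>
#include <vector>

// The ten localization classes, in the order listed above.
static const std::array<std::string, 10> kClasses = {
    "CYT", "NUC", "MIT", "ME3", "ME2",
    "ME1", "EXC", "VAC", "POX", "ERL"};

// One-hot target vector: 1.0 at the class index, 0.0 elsewhere.
std::vector<double> oneHot(const std::string& site) {
    std::vector<double> t(kClasses.size(), 0.0);
    for (std::size_t i = 0; i < kClasses.size(); ++i)
        if (kClasses[i] == site) t[i] = 1.0;
    return t;
}
```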
Figure 2: A yeast cell.
D. Obtain results and compare performance with
other networks and techniques used for predicting
the cellular localization of proteins
Results are evaluated by running the data set on the simulated artificial neural network. Performance is evaluated by varying the number of nodes in the hidden layer and examining:
1. Comparison of the accuracies of various algorithms
2. Variation of success rate with the number of iterations
3. Variation of success rate with the number of nodes in the hidden layer
III. IMPLEMENTATION
Figure 3: Design for calculating output activation.
Er = 0.0
for all patterns E in the training set do // compute for all training patterns E
    for all elements j in the hidden layer [NumUnitHidden] do
        InputHidden[E][j] = WtInputHidden[0][j] // bias
        for all elements i in the input layer [NumUnitInput] do
            add OutputInput[E][i] * WtInputHidden[i][j] to InputHidden[E][j]
        end for
        compute sigmoid for OutputHidden[E][j]
    end for
    for all elements k in the output layer [NumUnitOutput] do
        InputOutput[E][k] = WtHiddenOutput[0][k] // bias
        for all elements j in the hidden layer [NumUnitHidden] do
            add OutputHidden[E][j] * WtHiddenOutput[j][k] to InputOutput[E][k]
        end for
        compute sigmoid for Output[E][k]
        add (1/2) * (Final[E][k] - Output[E][k]) * (Final[E][k] - Output[E][k]) to Er
        ΔOutput[k] = (Final[E][k] - Output[E][k]) * Output[E][k] * (1 - Output[E][k])
            // derivative of the sigmoid function
    end for
    for all elements j in the hidden layer [NumUnitHidden] do
        // back-propagation of error towards the hidden layer
        SumΔOutput[j] = 0.0
        for all elements k in the output layer [NumUnitOutput] do
            add WtHiddenOutput[j][k] * ΔOutput[k] to SumΔOutput[j]
        end for
        ΔH[j] = SumΔOutput[j] * OutputHidden[E][j] * (1.0 - OutputHidden[E][j])
            // derivative of the sigmoid function
    end for
    for all elements j in the hidden layer [NumUnitHidden] do
        // this loop updates the input-to-hidden weights
        ΔWih[0][j] = β * ΔH[j] + α * ΔWih[0][j]
        add ΔWih[0][j] to WtInputHidden[0][j]
        for all elements i in the input layer [NumUnitInput] do
            ΔWih[i][j] = β * OutputInput[E][i] * ΔH[j] + α * ΔWih[i][j]
            add ΔWih[i][j] to WtInputHidden[i][j]
        end for
    end for
    for all elements k in the output layer [NumUnitOutput] do
        // this loop updates the hidden-to-output weights
        ΔWho[0][k] = β * ΔOutput[k] + α * ΔWho[0][k]
        add ΔWho[0][k] to WtHiddenOutput[0][k]
        for all elements j in the hidden layer [NumUnitHidden] do
            ΔWho[j][k] = β * OutputHidden[E][j] * ΔOutput[k] + α * ΔWho[j][k]
            add ΔWho[j][k] to WtHiddenOutput[j][k]
        end for
    end for
end for
IV. RESULTS AND DISCUSSION
A. Comparison of the Accuracies of Different
Algorithms
In this section, we examine the accuracies
offered by different algorithms. We consider
four: the Majority Algorithm, the Decision Tree
Algorithm, the Perceptron Learning Algorithm,
and the three-layer neural network based on the
backpropagation algorithm. Two data sets,
studied in detail in earlier sections, are used: the
E.coli data set and the one chosen by us, the
Yeast data set. As the chart below shows, our
algorithm achieves slightly higher accuracy than
the rest of the algorithms. It is also worth noting
that considerable success is achieved on the yeast
data set we chose to implement, with accuracy
reaching up to 61%.
Figure 4: Plot of Accuracy of various algorithms
for two data sets.
B. Variation of Success Rate with number of
iterations
Let us consider the variation of success rate in our
implementation. Success rate is simply defined as
the number of successful predictions divided by the
total number of cases handled. The overall success
rate varies with the number of iterations used to
train the neural network. As the number of
iterations increases, the error is reduced, since the
network learns with every training session. The
chart below shows the expected variation: the
success rate rises with the number of iterations.
Note, however, that after about 100 iterations the
success rate remains more or less constant.
Figure 5: Plot of Success Rate with number of
iterations
C. Variation of Success Rate with number of
processing elements in Hidden Layer
Let us now vary another important parameter in
our neural network, again using the success rate
defined in the previous section. The number of
processing elements in the hidden layer is under
our control. Since the data set we have chosen
fixes the number of input attributes and possible
outcome classes, the input and output layers have
a fixed number of processing elements; the
hidden layer does not. The chart below shows the
variation of success rate with the number of
elements in the hidden layer. Note that the success
rate reaches a constant value after about 75
elements in the layer.
Figure 6: Plot of Success Rate with No. of PE in
Hidden Layer
V. CONCLUSIONS AND FUTURE WORK
A. Conclusion
In this paper, I implemented the machine learning
algorithm of a three-layer feed-forward network
and applied it to the problem of classifying
proteins to their cellular localization sites based
on the amino acid sequences of proteins. The
Yeast dataset’s accuracy was compared with the
E.coli dataset’s accuracy. It was tested whether
the three-layer neural network with hidden nodes
is able to separate the datasets, and we explored
using a larger number of hidden nodes in the
network. We also implemented a three-layer
feed-forward neural network representing a
discontinuous function. After obtaining results,
we compared the performance with other
networks and techniques used for predicting the
cellular localization of proteins. The most
important results can be summarized as:
● The classes CYT, NUC and MIT have the
largest number of instances.
● The backpropagation algorithm achieves
slightly higher accuracy than the rest of
the algorithms.
● Considerable success is achieved on the
yeast data set we chose to implement,
with accuracy reaching up to 61%.
● After about 100 iterations, the success rate
remains more or less constant.
● The success rate reaches a constant value
after about 75 elements in the hidden
layer.
● The accuracy rises until it reaches the
upper limit of the achievable success rate.
B. Future Work
Since the prediction of proteins’ cellular
localization sites is a typical classification
problem, many other techniques, such as
probabilistic models, Bayesian networks, and
K-nearest neighbours, can be compared with our
technique. Thus, an aspect of future work is to
examine the performance of these techniques on
this particular problem.
ACKNOWLEDGEMENT
I would like to acknowledge the contribution of Dr.
Durga Toshniwal, Associate Professor, Department
of Computer Science and Engineering, IIT
Roorkee, whose guidance was indispensable
throughout the course of this work.
REFERENCES
[1]. "A ProbablisticClassificationSystemfor Predictingthe
Cellular Localization Sites of Proteins", Paul Horton & Kenta
Nakai, Intelligent Systems in MolecularBiology, 109-115.
[2]. "Expert Sytem for PredictingProtein Localization Sites in
Gram-NegativeBacteria", Kenta Nakai & MinoruKanehisa,
PROTEINS: Structure,Function, andGenetics 11:95-110, 1991.
[3]. "A Knowledge Base for PredictingProteinLocalization
Sites in Eukaryotic Cells", Kenta Nakai & MinoruKanehisa,
Genomics 14:897-911, 1992.
[4]. Cairns, P. Huyck,et.al, A Comparisonof Categorization
Algorithms for Predictingthe Cellular LocalizationSites of
Proteins, IEEEEngineeringin Medicine andBiology,
pp.296-300, 2001.
[5]. Donnes, P., andHoglund, A.,Predictingproteinsubcellular
localization: Past, present, andfutureGenomics Proteomics
Bioinformatics, 2:209-215, 2004.