This explains the general algorithmic flow for developing a neural network ensemble hybridized with evolutionary optimization schemes that target more than one cost function.
The document proposes using an artificial neural network with a modified backpropagation algorithm for load forecasting. It describes developing a model to forecast electrical load for the next 24 hours on a daily basis. The neural network is trained using historical load data from a load dispatch center. Once trained, the network can generate daily load forecasts. The document provides background on artificial neural networks, including their structure of interconnected processing units inspired by biological neurons, and how they are trained through a process of backward propagation of errors.
This paper presents a study using an artificial neural network (ANN) for load forecasting in the smart grid. Specifically, it uses a backpropagation network to forecast electricity load in Ontario, Canada based on weather and other input data. The paper describes collecting hourly load and weather data over two years, normalizing the data, creating a three-layer backpropagation network with different numbers of neurons, training the network using two algorithms, and testing the network on a separate data set to analyze forecast accuracy. The results show the ANN approach is able to accurately forecast electricity load based on the input factors.
Abstract: Face recognition is a form of computer vision that uses faces to identify a person or verify a person's claimed identity. In this paper, a neural-based algorithm is presented to detect frontal views of faces. The dimensionality of the input face image is reduced by principal component analysis, and classification is performed by a backpropagation neural network. The method is robust on a dataset of 300 face images, achieving a recognition rate of 80–90%.
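The PCA-then-classify pipeline summarized above can be sketched as follows. This is a minimal illustration using NumPy, with random data standing in for the 300-image dataset; the image size (32×32) and the number of retained components are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for flattened grayscale face images: 300 images of 32x32 = 1024 pixels.
faces = rng.random((300, 1024))

# PCA: mean-centre the data, then take principal axes via SVD
# (avoids forming the full 1024x1024 covariance matrix).
mean_face = faces.mean(axis=0)
centred = faces - mean_face
_, _, vt = np.linalg.svd(centred, full_matrices=False)

k = 40  # number of principal components ("eigenfaces") to keep -- an assumption
projected = centred @ vt[:k].T  # 300 x 40: reduced inputs for the neural classifier

print(projected.shape)
```

The `projected` matrix would then be fed to the backpropagation network in place of the raw 1024-pixel images.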
The document outlines an introduction to machine learning course consisting of two parts: neural networks and fuzzy systems. It discusses key machine learning concepts like supervised learning, unsupervised learning, reinforcement learning, classification, regression, and clustering. Supervised learning involves comparing model outputs to correct outputs and adjusting parameters accordingly. Unsupervised learning adapts to input patterns when correct outputs are unknown. Reinforcement learning provides feedback on incorrect outputs. The document also lists examples of machine learning problems, techniques, models and technologies.
Forecasting of Sales using Neural network techniques – Hitesh Dua
This document discusses using neural network techniques for sales forecasting. It begins by defining sales forecasting and explaining its need in areas like human resources, R&D, marketing, finance, production and purchasing. The document then outlines the sales forecasting process including setting goals, data gathering, analysis, mining, and applying neural network models. It describes the basic concepts of artificial neural networks and different neural network models like feed forward, recurrent, and backpropagation. It provides details on how these models work, especially explaining the backpropagation training algorithm and how it minimizes network error through forward and backward passes. Finally, it lists several references for further information.
Neural networks for the prediction and forecasting of water resources variables – Jonathan D'Cruz
This document reviews the use of artificial neural networks (ANNs) for predicting and forecasting water resource variables. It outlines the key steps in developing ANN models, including choosing performance criteria, preprocessing and dividing data, determining appropriate model inputs and network architecture, optimizing connection weights through training, and validating models. Specifically, it focuses on feedforward networks with sigmoid transfer functions, which have been most widely used for predicting water resources variables.
1) Artificial neural networks are made up of nodes that pass signals through connection links. Each node applies an activation function to determine its output signal.
2) Neural networks can be classified based on number of layers (single, bi-layer, multi-layer) or direction of information flow (feed forward, recurrent).
3) Backpropagation is commonly used for training, which involves passing inputs forward and propagating errors backward to adjust weights. Other algorithms like conjugate gradient and radial basis function training also exist.
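The forward pass and backward error propagation described in point 3 can be sketched for a single sigmoid unit trained by gradient descent. The toy data, learning rate, and iteration count are illustrative assumptions, not values from the reviewed paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.random((20, 3))                  # 20 samples, 3 input signals
t = (x.sum(axis=1) > 1.5).astype(float)  # toy binary targets

w, b, lr = rng.normal(size=3), 0.0, 0.5
for _ in range(2000):
    y = sigmoid(x @ w + b)               # pass inputs forward
    grad_z = (y - t) * y * (1 - y)       # propagate error back through the sigmoid
    w -= lr * x.T @ grad_z / len(x)      # adjust weights...
    b -= lr * grad_z.mean()              # ...and bias by gradient descent

accuracy = ((sigmoid(x @ w + b) > 0.5) == t).mean()
print(accuracy)
```

The same forward/backward pattern generalizes to multi-layer networks, where the error is propagated back through each layer in turn.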
Artificial neural network for load forecasting in smart grid – Ehsan Zeraatparvar
1) The document discusses using an artificial neural network for load forecasting in smart grids. It outlines the objectives, existing forecasting methods, and advantages of using artificial neural networks (ANNs).
2) It proposes using different types of ANNs including feed-forward and feedback networks. It describes how to structure, train, and optimize ANNs using backpropagation.
3) The results section shows the ANN was able to accurately forecast load based on historical load and weather data from Ontario, Canada. It concludes that more neurons and additional training data can improve forecasting accuracy for smart grid load forecasting.
Neural Network and Artificial Intelligence.
WHAT IS NEURAL NETWORK?
The method of calculation is based on the interaction of a plurality of processing elements, called neurons, inspired by the biological nervous system.
It is a powerful technique for solving real-world problems.
A neural network is composed of a number of nodes, or units [1], connected by links. Each link has a numeric weight [2] associated with it.
Weights are the primary means of long-term storage in neural networks, and learning usually takes place by updating the weights.
Artificial neurons are the constitutive units in an artificial neural network.
WHY USE NEURAL NETWORKS?
It has the ability to learn from experience.
It can deal with incomplete information.
It can produce results on the basis of inputs it has not been taught to deal with.
It is used to extract useful patterns from given data, e.g. pattern recognition.
Biological Neurons
Four parts of a typical nerve cell:
• DENDRITES: accept the inputs
• SOMA: processes the inputs
• AXON: turns the processed inputs into outputs
• SYNAPSES: the electrochemical contacts between the neurons
ARTIFICIAL NEURONS MODEL
Inputs to the network are represented by the mathematical symbols x1, …, xn.
Each of these inputs is multiplied by a connection weight w1, …, wn.
sum = w1 x1 + …… + wn xn
These products are summed and fed through the transfer function f( ) to generate a result, which is then output.
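The weighted sum and transfer function on this slide can be written directly; here the sigmoid is used as an example transfer function f( ), and the input and weight values are arbitrary illustrations:

```python
import math

def neuron_output(inputs, weights, f):
    # sum = w1*x1 + ... + wn*xn, then passed through the transfer function f
    total = sum(w * x for w, x in zip(weights, inputs))
    return f(total)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Two inputs with their connection weights: sum = 0.4*1.0 + (-0.2)*0.5 = 0.3
out = neuron_output([1.0, 0.5], [0.4, -0.2], sigmoid)
print(out)  # sigmoid(0.3) ≈ 0.574
```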
NEURON MODEL
Neuron Consist of:
• Inputs (synapses): the input signals.
• Weights (dendrites): determine the importance of each incoming value.
• Output (axon): the output to other neurons or of the NN.
Neural Network Classification and its Applications in Insurance Industry – Inderjeet Singh
This document summarizes the use of neural networks for classification tasks. It discusses the advantages and disadvantages of neural networks for classification. It also presents a case study on using a neural network to classify insurance customers as likely to renew or terminate their policies based on attributes like age and zip code. The neural network achieved higher accuracy than decision trees and regression analysis on the insurance data set.
The document discusses network design and training issues for artificial neural networks. It covers architecture of the network including number of layers and nodes, learning rules, and ensuring optimal training. It also discusses data preparation including consolidation, selection, preprocessing, transformation and encoding of data before training the network.
A Time Series ANN Approach for Weather Forecasting – ijctcm
Weather forecasting is among the most challenging problems worldwide, both because of its practical value in meteorology and because it is a typical unbiased time-series forecasting problem in scientific research. Many methods have been proposed by various scientists, all with the motive of predicting more accurately. This paper contributes to the same goal using an artificial neural network (ANN), simulated in MATLAB, to predict two important weather parameters: maximum and minimum temperature. The model was trained on 60 years of real data (1901–1960) and tested over the following 40 years. The results, based on the mean squared error (MSE), confirm that this multilayer-perceptron-based model has the potential for successful application to weather forecasting.
How to create a neural network that detects people wearing masks: a complete, A-to-Z workflow for building a neural network that recognizes images.
A short intro to the paper: https://blog.fulcrum.rocks/neural-network-image-recognition
Hyper-parameter optimization of convolutional neural network based on particl... – journalBEEI
The document proposes using a particle swarm optimization (PSO) algorithm to optimize the hyperparameters of a convolutional neural network (CNN) for image classification. The PSO algorithm is used to find optimal values for CNN hyperparameters like the number and size of convolutional filters. In experiments on the MNIST handwritten digit dataset, the optimized CNN achieved a testing error rate of 0.87%, which is competitive with state-of-the-art models. The proposed approach finds optimized CNN architectures automatically without requiring manual design or encoding strategies during training.
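The PSO loop itself can be sketched on a stand-in objective. In the paper, each particle's position would encode CNN hyperparameters (e.g. filter count and size) and the fitness would be the validation error; here a simple quadratic bowl stands in for that expensive evaluation, and all constants are typical illustrative choices rather than the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(2)

def fitness(pos):
    # Stand-in objective with its minimum at (3, 3); in the paper this would be
    # the CNN's validation error for the hyperparameters encoded by `pos`.
    return ((pos - 3.0) ** 2).sum(axis=-1)

n_particles, dim = 20, 2
pos = rng.uniform(0.0, 10.0, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()                          # each particle's best-seen position
gbest = pos[fitness(pos).argmin()].copy()   # the swarm's best-seen position

w, c1, c2 = 0.7, 1.5, 1.5                   # inertia and acceleration coefficients
for _ in range(100):
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    # Velocity blends momentum, attraction to pbest, and attraction to gbest.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    improved = fitness(pos) < fitness(pbest)
    pbest[improved] = pos[improved]
    gbest = pbest[fitness(pbest).argmin()].copy()

print(gbest)
```

After the loop, `gbest` approximates the minimizer; with a real CNN in the fitness function, it would hold the selected hyperparameter values.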
High dimensional data (FAST clustering ALG) PPT – deepan v
The document presents a feature selection algorithm called FAST (Fast clustering-based feature selection algorithm). FAST uses minimum spanning trees and clustering to identify relevant feature subsets while removing irrelevant and redundant features. This achieves dimensionality reduction and improves the accuracy of learning algorithms. The algorithm was experimentally evaluated on datasets with over 10,000 features and was shown to outperform other feature selection methods in terms of time complexity and selected feature proportions.
This document introduces neural networks and their applications. It discusses how neural networks simulate the human brain using processing nodes and weights to learn from patterns in data. Applications include prediction, pattern detection, and classification. The document also provides an overview of neural network theory, architecture, learning process, and development tools. It notes benefits like handling nonlinear problems and noisy data, as well as limitations such as the "black box" nature and lack of explainability.
The document discusses using a convolutional neural network to recognize handwritten digits from the MNIST database. It describes training a CNN on the MNIST training dataset, consisting of 60,000 examples, to classify images of handwritten digits from 0-9. The CNN architecture uses two convolutional layers followed by a flatten layer and fully connected layer with softmax activation. The model achieves high accuracy on the MNIST test set. However, the document notes that the model may struggle with color images or images with more complex backgrounds compared to the simple black and white MNIST digits. Improving preprocessing and adapting the model for more complex real-world images is suggested for future work.
International Refereed Journal of Engineering and Science (IRJES) – irjes
The core of the vision of IRJES is to disseminate new knowledge and technology for the benefit of all, ranging from academic research and professional communities to industry professionals, across a range of topics in computer science and engineering. It also provides a venue for high-caliber researchers, practitioners, and PhD students to present ongoing research and development in these areas.
This document discusses using an artificial neural network to forecast power loads by taking the University of Lagos as a sample space. It involves gathering and arranging historical load data, determining an appropriate network type and topology, training the network using an algorithm, and analyzing the results to test the network's accuracy in predicting loads. The methodology includes randomizing and tagging the training data, experimenting to determine the network topology, training with cross-validation, and performing sensitivity and mean squared error analysis on the network.
An artificial neural network is a mathematical model that maps inputs to outputs. It consists of an input layer, hidden layers, and an output layer connected by weights and biases. Activation functions determine the output of each node. Training a neural network involves adjusting the weights and biases through backpropagation to minimize a loss function and improve predictions based on the input data. Feedforward involves calculating predictions, while backpropagation calculates gradients to update weights and biases through gradient descent.
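The feedforward and backpropagation steps described above can be sketched as a minimal one-hidden-layer network trained with gradient descent. The XOR task, architecture, learning rate, and iteration count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR: a classic task that requires a hidden layer.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden weights, biases
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output weights, biases

def forward(inputs):
    H = sigmoid(inputs @ W1 + b1)        # hidden activations
    return H, sigmoid(H @ W2 + b2)       # network prediction

loss_before = ((forward(X)[1] - T) ** 2).mean()

lr = 1.0
for _ in range(5000):
    H, Y = forward(X)                    # feedforward: calculate predictions
    dY = (Y - T) * Y * (1 - Y)           # backpropagate through the output layer
    dH = (dY @ W2.T) * H * (1 - H)       # ...and through the hidden layer
    W2 -= lr * H.T @ dY                  # gradient descent on weights and biases
    b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH
    b1 -= lr * dH.sum(axis=0)

loss_after = ((forward(X)[1] - T) ** 2).mean()
print(loss_before, "->", loss_after)
```

The squared-error loss after training is lower than before, showing the weight and bias updates moving predictions toward the targets.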
This presentation provides an introduction to the artificial neural networks topic, its learning, network architecture, back propagation training algorithm, and its applications.
1) Artificial neural networks (ANNs) are processing systems inspired by biological neural networks, consisting of interconnected nodes that process information via algorithms or hardware components. ANNs can accurately model functions like visual processing in the retina.
2) ANNs are useful for problems like facial recognition that are difficult to solve with algorithms due to their ability to learn from examples in a way similar to the human brain.
3) ANNs have many applications, including pattern recognition, modeling complex relationships in large datasets, and real-time systems due to their parallel architecture.
AN EFFICIENT PSO BASED ENSEMBLE CLASSIFICATION MODEL ON HIGH DIMENSIONAL DATA... – ijsc
As biomedical databases grow day by day, finding essential features for disease prediction has become more complex due to high-dimensionality and sparsity problems. Also, given the availability of a large number of micro-array datasets in biomedical repositories, it is difficult to analyze, predict, and interpret feature information using traditional feature-selection-based classification models. Most traditional feature-selection-based classification algorithms have computational issues such as dimension reduction, uncertainty, and class imbalance on microarray datasets. The ensemble classifier is one of the scalable models for the extreme learning machine due to its high efficiency and fast processing speed for real-time applications. The main objective of feature-selection-based ensemble learning models is to classify high-dimensional data with high computational efficiency and a high true-positive rate. In this proposed model, an optimized particle swarm optimization (PSO) based ensemble classification model was developed on high-dimensional microarray datasets. Experimental results proved that the proposed model has high computational efficiency compared to traditional feature-selection-based classification models as far as accuracy, true-positive rate, and error rate are concerned.
A fast clustering based feature subset selection algorithm for high-dimension... – IEEEFINALYEARPROJECTS
This document summarizes a research paper that proposes a method for detecting and recognizing faces using the Viola Jones algorithm and Back Propagation Neural Network (BPNN).
The paper first discusses face detection and recognition challenges. It then provides background on the Viola-Jones algorithm and BPNN. The proposed methodology uses Viola-Jones for face detection, converts the image to grayscale and binary, then trains either segments or the whole image with the BPNN. Results are analyzed using training, testing, and validation curves in the MATLAB neural network tool to minimize error.
The recognition of facial expressions of emotion has a long history that can be traced back to Darwin in the late 1800s. Darwin considered facial expressions of emotion to be an innate, adaptive, physiological response that could provide evidence of an individual's internal mental state. Early approaches to measuring internal mental states included attempts to accurately measure facial muscle movement.
This document summarizes a research paper that proposes a neural AdaBoost-based facial expression recognition system. The system uses Viola-Jones detection, Bessel transform downsampling, Gabor feature extraction, AdaBoost feature selection, and a multi-layer neural network classifier. The system was tested on the JAFFE and Yale facial expression databases, achieving average recognition rates of 96.83% and 92.2% respectively. Execution time for 100x100 pixel images was 14.5ms.
Neural Network and Artificial Intelligence.
Neural Network and Artificial Intelligence.
WHAT IS NEURAL NETWORK?
The method calculation is based on the interaction of plurality of processing elements inspired by biological nervous system called neurons.
It is a powerful technique to solve real world problem.
A neural network is composed of a number of nodes, or units[1], connected by links. Each linkhas a numeric weight[2]associated with it. .
Weights are the primary means of long-term storage in neural networks, and learning usually takes place by updating the weights.
Artificial neurons are the constitutive units in an artificial neural network.
WHY USE NEURAL NETWORKS?
It has ability to Learn from experience.
It can deal with incomplete information.
It can produce result on the basis of input, has not been taught to deal with.
It is used to extract useful pattern from given data i.e. pattern Recognition etc.
Biological Neurons
Four parts of a typical nerve cell :• DENDRITES: Accepts the inputs• SOMA : Process the inputs• AXON : Turns the processed inputs into outputs.• SYNAPSES : The electrochemical contactbetween the neurons.
ARTIFICIAL NEURONS MODEL
Inputs to the network arerepresented by the x1mathematical symbol, xn
Each of these inputs are multiplied by a connection weight , wn
sum = w1 x1 + ……+ wnxn
These products are simplysummed, fed through the transfer function, f( ) to generate a result and then output.
NEURON MODEL
Neuron Consist of:
Inputs (Synapses): inputsignal.Weights (Dendrites):determines the importance ofincoming value.Output (Axon): output toother neuron or of NN .
Neural Network Classification and its Applications in Insurance IndustryInderjeet Singh
This document summarizes the use of neural networks for classification tasks. It discusses the advantages and disadvantages of neural networks for classification. It also presents a case study on using a neural network to classify insurance customers as likely to renew or terminate their policies based on attributes like age and zip code. The neural network achieved higher accuracy than decision trees and regression analysis on the insurance data set.
The document discusses network design and training issues for artificial neural networks. It covers architecture of the network including number of layers and nodes, learning rules, and ensuring optimal training. It also discusses data preparation including consolidation, selection, preprocessing, transformation and encoding of data before training the network.
A Time Series ANN Approach for Weather Forecastingijctcm
Weather forecasting is most challenging problem around the world. There are various reason because of its experimented values in meteorology, but it is also a typical unbiased time series forecasting problem in scientific research. A lots of methods proposed by various scientists. The motive behind research is to predict more accurate. This paper contribute the same using artificial neural network (ANN) and simulated in MATLAB to predict two important weather parameters i.e. maximum and minimum temperature. The model has been trained using past 60 years of real data collected from(1901-1960) and tested over 40 years to forecast maximum and minimum temperature. The results based on mean square error function (MSE) confirm, this model which is based on multilayer perceptron has the potential to successful application to weather forecasting
How to create a neural network that detects people wearing masks. Ultimate description, the A-to-Z workflow for creating a neural network that recognizes images.
A short intro to the paper: https://blog.fulcrum.rocks/neural-network-image-recognition
Hyper-parameter optimization of convolutional neural network based on particl...journalBEEI
The document proposes using a particle swarm optimization (PSO) algorithm to optimize the hyperparameters of a convolutional neural network (CNN) for image classification. The PSO algorithm is used to find optimal values for CNN hyperparameters like the number and size of convolutional filters. In experiments on the MNIST handwritten digit dataset, the optimized CNN achieved a testing error rate of 0.87%, which is competitive with state-of-the-art models. The proposed approach finds optimized CNN architectures automatically without requiring manual design or encoding strategies during training.
High dimesional data (FAST clustering ALG) PPTdeepan v
The document presents a feature selection algorithm called FAST (Fast clustering-based feature selection algorithm). FAST uses minimum spanning trees and clustering to identify relevant feature subsets while removing irrelevant and redundant features. This achieves dimensionality reduction and improves the accuracy of learning algorithms. The algorithm was experimentally evaluated on datasets with over 10,000 features and was shown to outperform other feature selection methods in terms of time complexity and selected feature proportions.
This document introduces neural networks and their applications. It discusses how neural networks simulate the human brain using processing nodes and weights to learn from patterns in data. Applications include prediction, pattern detection, and classification. The document also provides an overview of neural network theory, architecture, learning process, and development tools. It notes benefits like handling nonlinear problems and noisy data, as well as limitations such as the "black box" nature and lack of explainability.
The document discusses using a convolutional neural network to recognize handwritten digits from the MNIST database. It describes training a CNN on the MNIST training dataset, consisting of 60,000 examples, to classify images of handwritten digits from 0-9. The CNN architecture uses two convolutional layers followed by a flatten layer and fully connected layer with softmax activation. The model achieves high accuracy on the MNIST test set. However, the document notes that the model may struggle with color images or images with more complex backgrounds compared to the simple black and white MNIST digits. Improving preprocessing and adapting the model for more complex real-world images is suggested for future work.
International Refereed Journal of Engineering and Science (IRJES)irjes
The core of the vision IRJES is to disseminate new knowledge and technology for the benefit of all, ranging from academic research and professional communities to industry professionals in a range of topics in computer science and engineering. It also provides a place for high-caliber researchers, practitioners and PhD students to present ongoing research and development in these areas.
This document discusses using an artificial neural network to forecast power loads by taking the University of Lagos as a sample space. It involves gathering and arranging historical load data, determining an appropriate network type and topology, training the network using an algorithm, and analyzing the results to test the network's accuracy in predicting loads. The methodology includes randomizing and tagging the training data, experimenting to determine the network topology, training with cross-validation, and performing sensitivity and mean squared error analysis on the network.
An artificial neural network is a mathematical model that maps inputs to outputs. It consists of an input layer, hidden layers, and an output layer connected by weights and biases. Activation functions determine the output of each node. Training a neural network involves adjusting the weights and biases through backpropagation to minimize a loss function and improve predictions based on the input data. Feedforward involves calculating predictions, while backpropagation calculates gradients to update weights and biases through gradient descent.
This presentation provides an introduction to the artificial neural networks topic, its learning, network architecture, back propagation training algorithm, and its applications.
1) Artificial neural networks (ANNs) are processing systems inspired by biological neural networks, consisting of interconnected nodes that process information via algorithms or hardware components. ANNs can accurately model functions like visual processing in the retina.
2) ANNs are useful for problems like facial recognition that are difficult to solve with algorithms due to their ability to learn from examples in a way similar to the human brain.
3) ANNs have many applications, including pattern recognition, modeling complex relationships in large datasets, and real-time systems due to their parallel architecture.
AN EFFICIENT PSO BASED ENSEMBLE CLASSIFICATION MODEL ON HIGH DIMENSIONAL DATA...ijsc
As biomedical databases grow day by day, identifying the essential features for disease prediction becomes more complex due to high dimensionality and sparsity. Moreover, given the large number of micro-array datasets in biomedical repositories, it is difficult to analyze, predict, and interpret feature information using traditional feature-selection-based classification models, most of which suffer from computational issues such as dimension reduction, uncertainty, and class imbalance on microarray datasets. The ensemble classifier is a scalable model for the extreme learning machine owing to its high efficiency and fast processing speed in real-time applications. The main objective of feature-selection-based ensemble learning models is to classify high-dimensional data with high computational efficiency and a high true positive rate. In the proposed model, an optimized Particle Swarm Optimization (PSO) based ensemble classification model was developed for high-dimensional microarray datasets. Experimental results showed that the proposed model has higher computational efficiency than traditional feature-selection-based classification models where accuracy, true positive rate, and error rate are concerned.
A fast clustering based feature subset selection algorithm for high-dimension... (IEEEFINALYEARPROJECTS)
This document summarizes a research paper that proposes a method for detecting and recognizing faces using the Viola-Jones algorithm and a Back Propagation Neural Network (BPNN).
The paper first discusses face detection and recognition challenges, then provides background on the Viola-Jones algorithm and the BPNN. The proposed methodology uses Viola-Jones for face detection, converts the image to grayscale and binary, then trains segments or the whole image with the BPNN. Results are analyzed using training, testing, and validation curves in the MATLAB neural network tool to minimize error.
The recognition of facial expressions of emotion has a long history that can be traced back to Darwin in the late 1800s. Darwin considered facial expressions of emotion to be innate, adaptive, physiological responses that could provide evidence of an individual's internal mental state. Early approaches to measuring internal mental states included attempts to measure facial muscle movement accurately.
This document summarizes a research paper that proposes a neural AdaBoost-based facial expression recognition system. The system uses Viola-Jones detection, Bessel transform downsampling, Gabor feature extraction, AdaBoost feature selection, and a multi-layer neural network classifier. The system was tested on the JAFFE and Yale facial expression databases, achieving average recognition rates of 96.83% and 92.2% respectively. Execution time for 100x100 pixel images was 14.5ms.
A facial recognition system automatically identifies or verifies a person from images or video by comparing their facial features to a database. It started being researched in the 1960s and is now used for security systems. Early 2D systems had low accuracy due to lighting and expressions, while newer 3D systems can recognize faces from different angles unaffected by these factors. Facial recognition involves image acquisition, pre-processing, feature extraction to describe the face, classification of expressions, and post-processing. Challenges include pose, environment clutter, illumination, and facial variability between individuals. More research is still needed to develop robust systems unaffected by data variability.
This document outlines a research project on designing an automatic system to distinguish facial expressions. It presents an introduction discussing the importance and challenges of facial expression recognition. It provides an outline of the proposed system including aims to use programming for design and implementation. It discusses the basic structure of facial expression analysis and concludes the objective is to analyze facial expressions through steps like feature extraction and expression classification.
This document provides an overview of facial recognition systems. It discusses what facial recognition is, which is a computer system that identifies people by analyzing facial features from images and video. It explains that facial recognition started in the 1960s and is now commonly used for security applications. The document outlines different approaches to facial recognition, including two-dimensional and three-dimensional systems, and describes some of the techniques involved like feature extraction and classification. It also provides details about a specific facial recognition product called FA007 and its uses for access control.
1. The document discusses face recognition using an eigenface approach, which uses principal component analysis to extract features from a database of faces to generate eigenfaces that can be used to identify unknown faces.
2. The eigenface approach takes into account the entire face for recognition and is relatively insensitive to small changes in faces. It is faster, simpler, and has better learning capabilities compared to other approaches.
3. Some limitations are that accuracy is affected if lighting and face position vary greatly, it only works with grayscale images, and noisy or partially occluded faces decrease recognition performance.
NEURAL NETWORKS USING GENETIC ALGORITHMS.pptx (ssuser67281d)
This document discusses using genetic algorithms to train neural networks. It begins by defining evolutionary artificial neural networks as combining neural networks with genetic algorithms. Genetic algorithms can be used to choose neural network structures and properties like neuron functions. The document then provides background on neural networks and genetic algorithms. It describes how genetic algorithms use selection, crossover and mutation to optimize solutions over generations. The document proposes using a genetic algorithm to train neural network weights and applies this approach to the traveling salesman problem. It concludes that while these techniques are powerful, they also have limitations as "black boxes" that require pre-processing of inputs.
AN IMPROVED METHOD FOR IDENTIFYING WELL-TEST INTERPRETATION MODEL BASED ON AG... (IAEME Publication)
This paper presents an approach based on an aggregated predictor formed by multiple versions of a multilayer neural network with a back-propagation optimization algorithm, to help the engineer obtain a list of the most appropriate well-test interpretation models for a given set of pressure/production data. The proposed method consists of three stages: (1) data decorrelation through principal component analysis, to reduce the covariance between the variables and the dimension of the input layer of the artificial neural network; (2) bootstrap replicates of the learning set, where the data is repeatedly sampled with random splits into training sets that serve as new learning sets; and (3) automatic reservoir model identification through an aggregated predictor formed by a plurality vote when predicting a new class. The method is described in detail to allow successful replication of the results. The training and test datasets were generated using analytical solution models: 600 samples in total, with 300 for training, 100 for cross-validation, and 200 for testing. Different network structures were tested during this study to arrive at an optimum network design. We note that the single-net methodology consistently causes confusion in selecting the correct model, even though the training results for the constructed networks are close to 1. We also note that principal component analysis is an effective strategy for reducing the number of input features, simplifying the network structure, and lowering the training time of the ANN. The results show that the proposed model performs better when predicting new data, with a coefficient of correlation of approximately 95% compared to 80% for a previous approach; the combination of PCA and ANN is more stable and yields more accurate results with less computational complexity than was previously feasible.
Clearly, the aggregated predictor is more stable and produces fewer misclassified classes than the previous approach.
Pruning convolutional neural networks for resource efficient inference (Kaushalya Madhawa)
The document discusses a method for pruning convolutional neural networks to make them more efficient for resource-constrained inference. The method uses a Taylor expansion to calculate the saliency of parameters, allowing it to prune those with the least effect on the network's loss. Experiments on networks like VGG-16 and AlexNet show the method can significantly reduce operations with little loss in accuracy. Layer-wise analysis provides insight into each layer's importance to the overall network.
This document discusses the use of artificial neural networks (ANNs) for process control applications. It covers several key topics:
1) ANNs can model nonlinear systems through parallel processing and learning algorithms like backpropagation. Multi-layer neural networks are commonly used for pattern recognition and control.
2) Various ANN-based control configurations are described, including direct inverse control, direct adaptive control, and internal model control.
3) Learning algorithms like backpropagation and applications like system identification, modeling, fault detection, and temperature control are discussed.
4) The document concludes that multi-layer neural networks trained with backpropagation are well-suited for process identification and control, as they can handle nonlinearity.
Black-box modeling of nonlinear system using evolutionary neural NARX model (IJECEIAES)
Nonlinear systems with uncertainty and disturbance are very difficult to model using mathematical approaches, so a black-box modeling approach requiring no prior knowledge is necessary. Several modeling approaches have been used to develop black-box models, such as fuzzy logic, neural networks, and evolutionary algorithms. In this paper, an evolutionary neural network, combining a neural network with a modified differential evolution algorithm, is applied to model a nonlinear system. The feasibility and effectiveness of the proposed modeling are tested on a piezoelectric actuator SISO system and an experimental quadruple-tank MIMO system.
The document provides an overview of neural networks for data mining. It discusses how neural networks can be used for classification tasks in data mining. It describes the structure of a multi-layer feedforward neural network and the backpropagation algorithm used for training neural networks. The document also discusses techniques like neural network pruning and rule extraction that can optimize neural network performance and interpretability.
This document summarizes research on improving image classification results using neural networks. It compares common image classification methods such as support vector machines (SVM) and K-nearest neighbors (KNN), then evaluates the performance of multilayer perceptron (MLP) and radial basis function (RBF) neural networks on image classification. The document tests various configurations of MLP and RBF networks on a dataset of 2310 images across 7 classes. It finds that an MLP network with two hidden layers of 10 neurons each achieves the best results, with an average accuracy of 98.84%, significantly higher than the 84.47% average accuracy of RBF networks and better than KNN classification as well. The research concludes that neural networks are the stronger approach for this image classification task.
This document outlines the course details for Deep Learning for Data Science at SRM Institute of Science and Technology. The course is divided into 5 units that cover topics such as introduction to neural networks, artificial neural network architectures, neural network models like perceptrons and multilayer perceptrons, backpropagation algorithm, regularization techniques, convolutional neural networks, and reinforcement learning. The document provides an overview of the topics to be discussed each week for the different units.
The document presents research on using neural networks to predict Earth Orientation Parameters (EOP) such as UT1-TAI. Three neural network models were tested:
1) Network 1 varied the number of neurons proportionally with increasing training sample size.
2) Network 2 kept the number of neurons constant while increasing sample size.
3) Network 3 used daily training data with 2 neurons and sample sizes of 4, 10, 20, and 365 days.
The goal was to minimize prediction error (RMSE) for horizons of 5-25 days by adjusting sample size and number of neurons. Results showed that the right balance between these factors was needed, and that short-term prediction was possible within a 10-day horizon.
Artificial Neural Networks (ANNs) For Prediction of California Bearing Ratio... (IJMER)
The behaviour of the soil at the project location, and the interactions of the earth materials during and after construction, have a major influence on the success, economy, and safety of the work. A further complexity with some geotechnical materials, such as sand and gravel, is the difficulty of obtaining undisturbed samples, which is time-consuming and requires a skilled technician. Knowledge of the California Bearing Ratio (CBR) is essential for determining road thickness. To cope with these difficulties, an attempt has been made to model CBR in terms of Fine Fraction, Liquid Limit, Plasticity Index, Maximum Dry Density, and Optimum Moisture Content. A multi-layer perceptron network with feed-forward back propagation is used, varying the number of hidden layers. For this purpose, 50 soil test data were collected from laboratory test results; 30 were used for training and the remaining 20 for testing, a 60-40 split. The architectures developed are 5-4-1, 5-5-1, and 5-6-1. The model with the 5-6-1 architecture is found to be quite satisfactory in predicting the CBR of soils. A graph of predicted versus observed values for the training and testing processes shows that all points lie close to the equality line, indicating that predicted values are close to observed values.
Classification Of Iris Plant Using Feedforward Neural Network (irjes)
The classification and recognition of type on the basis of individual features and behaviors is a preliminary measure and an important target in the behavioral sciences. Current statistical methods do not always yield satisfactory answers. A feed-forward artificial neural network is a computer model inspired by the structure of the human brain: a set of artificial nerve cells interconnected with other neurons. The primary aim of this paper is to demonstrate the process of developing an artificial neural network based classifier that classifies the Iris database. The problem concerns identifying Iris plant species on the basis of plant attribute measurements: sepal length, sepal width, petal length, and petal width. Using this data set, a neural network (NN) is used for classification, trained with the error back-propagation algorithm (EBPA). Simulation results illustrate the effectiveness of the neural system in iris class identification.
Web spam classification using supervised artificial neural network algorithms (aciijournal)
Due to the rapid growth in technology employed by spammers, there is a need for classifiers that are more efficient, generic, and highly adaptive. Neural network based technologies have a high ability for adaptation as well as generalization. To our knowledge, very little work has been done in this field using neural networks; we present this paper to fill that gap. This paper evaluates the performance of three supervised learning algorithms for artificial neural networks by creating classifiers for the complex problem of classifying the latest web spam patterns: the Conjugate Gradient algorithm, Resilient Backpropagation learning, and the Levenberg-Marquardt algorithm.
An Efficient PSO Based Ensemble Classification Model on High Dimensional Data... (ijsc)
The document proposes a Particle Swarm Optimization (PSO) based ensemble classification model to improve classification of high-dimensional biomedical datasets. It develops an optimized PSO technique to select optimal features and initialize weights for base classifiers in the ensemble model. Experimental results on microarray datasets show the proposed model achieves higher accuracy, true positive rate, and lower error rate compared to traditional feature selection based classification models.
On the High Dimensional Information Processing in Quaternionic Domain and its... (IJAAS Team)
There are various high-dimensional engineering and scientific applications in communication, control, robotics, computer vision, biometrics, etc., where researchers face the problem of designing an intelligent and robust neural system that can process higher-dimensional information efficiently. Conventional real-valued neural networks have been tried on problems with high-dimensional parameters, but the required network structure is highly complex, very time-consuming, and weak to noise. These networks are also unable to learn magnitude and phase values simultaneously in space. The quaternion is a number that possesses magnitude in all four directions, with phase information embedded within it. This paper presents a well-generalized learning machine with a quaternionic-domain neural network that can finely process magnitude and phase information of high-dimensional data without any hassle. The learning and generalization capability of the proposed learning machine is presented through a wide spectrum of simulations which demonstrate the significance of the work.
This document introduces graph attention networks (GATs) for node classification of graph-structured data. GATs use self-attention mechanisms over a node's neighbors to compute hidden representations. The proposed approach achieves state-of-the-art results on four benchmarks, demonstrating the potential of attention models on graphs. GATs are computationally efficient and do not require upfront knowledge of global graph structure.
Automated-tuned hyper-parameter deep neural network by using arithmetic optim... (IJECEIAES)
Deep neural networks (DNNs) are very dependent on their parameterization and require experts to determine which method to implement and how to modify the hyper-parameter values. This study proposes automated hyper-parameter tuning for DNNs using a metaheuristic optimization algorithm, the arithmetic optimization algorithm (AOA). AOA makes use of the distribution properties of mathematics' primary arithmetic operators: multiplication, division, addition, and subtraction. AOA is mathematically modeled and implemented to optimize processes across a broad range of search spaces; its performance is evaluated against 29 benchmark functions, and several real-world engineering design problems are used to demonstrate its applicability. The hyper-parameter tuning framework consists of a set of Lorenz chaotic system datasets, a hybrid DNN architecture, and AOA working automatically. As a result, AOA produced the highest accuracy on the test dataset with a combination of optimized hyper-parameters for the DNN architecture. Boxplot analysis of the ten most accurately chosen AOA particles showed that AOA with ten particles had the smallest boxplot for all hyper-parameters, indicating the best solution. In particular, the proposed system outperformed the same architecture tuned with particle swarm optimization.
Neural Network Based Individual Classification System (IRJET Journal)
This document describes a neural network model developed for individual classification. The model measures personality traits through a questionnaire, then uses a neural network trained on sample data through unsupervised and supervised learning with multi-layer perceptrons. The backpropagation algorithm was used to train the network. The architecture included multiple neuron layers trained on a 200-item data set, achieving 99.82% accuracy. The goal was to classify individuals into high, middle, and low personality categories for use in job selection or training.
Comparison of Neural Network Training Functions for Hematoma Classification i... (IOSR Journals)
Classification is one of the most important tasks in application areas of artificial neural networks (ANNs). Training neural networks is a complex task in the supervised learning field of research; the main difficulty in adopting ANNs is finding the most appropriate combination of learning, transfer, and training functions for the classification task. We compared the performance of three types of training algorithms in a feed-forward neural network for brain hematoma classification. From the gradient descent family we selected gradient descent based backpropagation, gradient descent with momentum, and resilient backpropagation. Among conjugate gradient based algorithms we selected scaled conjugate gradient backpropagation, conjugate gradient backpropagation with Polak-Ribiere updates (CGP), and conjugate gradient backpropagation with Fletcher-Reeves updates (CGF). The last category is quasi-Newton based algorithms, from which BFGS and Levenberg-Marquardt were selected. The proposed work compares the training algorithms on the basis of mean squared error, accuracy, rate of convergence, and correctness of the classification. Our conclusions about the training functions are based on the simulation results.
Artificial Intelligence Applications in Petroleum Engineering - Part I (Ramez Abdalla, M.Sc)
This document discusses applications of artificial intelligence, specifically artificial neural networks and genetic algorithms, in petroleum engineering. It provides an overview of neural networks in OnePetro papers, describes the basic concepts and training processes of neural networks and genetic algorithms. It then discusses various applications of these techniques in reservoir engineering, production technologies, and oil well drilling, including reservoir characterization, modeling, well test analysis, permeability prediction, production monitoring, drilling optimization, and more. The presentation aims to explore these applications in more depth.
Understanding Inductive Bias in Machine Learning (SUTEJAS)
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
1. Colon Tumor Classification using various Neural Network Models coupled with Multi-Objective Evolutionary Optimization Schemes
Anirudh Munnangi
Chandrasekar Venkatesh
Ahmed Sageer Cheriya Melat
2. Abstract
This project is an implementation of neural networks with a hybrid evolutionary algorithm optimizing multiple objectives for classification. Three objectives are optimized: Pareto non-dominated sorting genetic algorithm based optimization is performed on the norm of the weights, the mean norm squared error, and the complexity of the network. The evolutionary algorithm is applied to Radial Basis Function Networks (RBFNs) and to Multi-Layer Perceptron networks (MLPs), and these algorithms are applied to classify real-world two-class colon tumor data.
3. Data processing
The data is real-world two-class colon tumor data. The data set consists of 62 data points, each with a 2000-dimensional feature space. By the principal component analysis method, the dimension of the data points is reduced to a 47-dimensional feature space.
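The PCA reduction described on this slide (62 data points taken from 2000 features down to 47) can be sketched with a standard SVD-based projection; the data below is synthetic, and only the sample count and component count follow the slide.

```python
import numpy as np

# Sketch of the PCA step: project 62 samples with 2000 features
# onto their first 47 principal components. Synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(62, 2000))          # 62 data points, 2000 features

Xc = X - X.mean(axis=0)                  # center each feature
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X_reduced = Xc @ Vt[:47].T               # scores on the first 47 components

print(X_reduced.shape)                   # (62, 47)
```

With only 62 samples the centered data has rank at most 61, so 47 components are available and capture most of the variance recoverable from this small a sample.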
4. Important Concepts
• Radial Basis Function Networks
• Multi Layer Perceptron
• Pareto Optimality
• Genetic Algorithm
6. Radial Basis Function Networks
The following equations define the mapping computed by the RBFN: a layer of Gaussian kernels followed by a weighted sum.

$$\varphi_j(x) = \exp\left(-\frac{\lVert x - c_j \rVert^2}{2\sigma_j^2}\right), \qquad y = \sum_{j=1}^{P} w_j\,\varphi_j(x)$$

[Slide figure: network diagram with inputs X1 ... Xn, weighted connections W10, W11, ..., Wm1, Wm2, and two summation nodes producing outputs Y1 and Y2.]

Input: 47-dimensional points
Output: 2 bits representing the class
Number of hidden layers: 1
Number of neurons in each layer: 20 or more
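A minimal sketch of the RBFN forward pass these equations define, with the layer sizes from the slide (47 inputs, a single hidden layer of 20 Gaussian kernels, 2 output bits); the centers, widths, and weights here are random placeholders rather than trained values.

```python
import numpy as np

# RBFN forward pass: phi_j(x) = exp(-||x - c_j||^2 / (2 sigma_j^2)),
# y = sum_j w_j * phi_j(x). Parameters are illustrative placeholders.
rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 47, 20, 2
centers = rng.normal(size=(n_hidden, n_in))   # kernel centers c_j
sigmas = np.full(n_hidden, 1.5)               # kernel widths sigma_j
W = rng.normal(size=(n_hidden, n_out))        # output weights w_j

def rbfn_forward(x):
    # Squared distance to every center, then Gaussian activations.
    d2 = np.sum((centers - x) ** 2, axis=1)
    phi = np.exp(-d2 / (2.0 * sigmas ** 2))
    return phi @ W                            # 2 outputs encoding the class

y = rbfn_forward(rng.normal(size=n_in))
print(y.shape)                                # (2,)
```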
12. Genetic Algorithm
• Population size = 20
• Each member is scored on mean norm squared error, complexity of the network, and norm of the weights
• For each iteration:
• Perform a non-dominated sort (Pareto optimization)
• Choose the fittest, perform crossover, and form the child population
• Combine the parent and child populations and perform another non-dominated sort
• Form the new population
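The non-dominated sort at the heart of the loop above can be sketched as follows, for minimization of all objectives; the population values are illustrative, and selection, crossover, and mutation are omitted for brevity.

```python
import numpy as np

# Non-dominated sorting (Pareto ranking) of a small population,
# where each row of F holds one member's objective values (minimized).
def dominates(a, b):
    # a dominates b if it is no worse in every objective and better in one.
    return np.all(a <= b) and np.any(a < b)

def non_dominated_sort(F):
    fronts, remaining = [], list(range(len(F)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(F[j], F[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# 6 members, 2 objectives (e.g. MSE and ||w||), smaller is better.
F = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 1.0],
              [2.5, 3.5], [4.0, 4.0], [1.5, 4.0]])
fronts = non_dominated_sort(F)
print(fronts[0])   # first Pareto front: [0, 1, 2, 5]
```

In NSGA-II the fronts are filled into the next population in rank order, with a crowding-distance tie-break inside the last front that fits.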
14. [Slide figure: convergence of the Pareto front for NSGA using RBFN, plotting f1 = mean norm squared error against f3 = ||w||, with the optimized RBFN offering the best trade-off between objectives marked.]
15. [Slide figure: convergence of the Pareto front for RBFN with NSGA, showing performance in different runs for three objective pairs: f1 = MSE vs f2 = active hidden kernels; f1 = MSE vs f2 = ||w||; f1 = active hidden kernels vs f2 = ||w||.]
16. [Slide figure: comparative analysis of the performance of RBFN vs MLP; convergence of the Pareto front, mean norm squared error against f3 = ||w||, for RBFN NSGA and MLP NSGA.]
17. INFERENCE
• For the data set we used, the optimized RBFN model appears to perform better than the MLP.
• Pareto optimization over multiple objectives works well, with all objectives being achieved.
FURTHER WORK
• Try more objectives and different parameters to optimize.
• Try optimizing other network models apart from RBFN and MLP, perhaps even a combination of networks.
18. REFERENCES
1. Sultan Noman Qasem, Siti Mariyam Shamsuddin, Azlan Mohd Zain, "Multi-objective hybrid evolutionary algorithms for radial basis function neural network design," Knowledge-Based Systems 27 (2012) 475–497, 25 November 2011.
2. Sultan Noman Qasem, Siti Mariyam Shamsuddin, "Memetic Elitist Pareto Differential Evolution algorithm based Radial basis function networks for classification problems," Applied Soft Computing 11 (2011) 5565–5581, 6 May 2011.
3. Aimin Zhou, Bo-Yang Qu, Hui Li, Shi-Zheng Zhao, Ponnuthurai Nagaratnam Suganthan, Qingfu Zhang, "Multiobjective evolutionary algorithms: A survey of the state of the art," Swarm and Evolutionary Computation 1 (2011) 32–49, 16 March 2011.
4. Illya Kokshenev, Antonio Padua Braga, "An efficient multi-objective learning algorithm for RBF neural network," Neurocomputing 73 (2010) 2799–2808, 22 August 2010.
5. Sultan Noman Qasem, Siti Mariyam Shamsuddin, "Radial Basis Function Network based on time variant multiobjective particle swarm optimization for medical disease analysis."
6. Jonathan E. Fieldsend, Sameer Singh, "Pareto Evolutionary Neural Networks," IEEE Transactions on Neural Networks, Vol. 16, No. 2, March 2005.
7. Kalyanmoy Deb, Amrit Pratap, Sameer Agarwal, T. Meyarivan, "A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, Vol. 6, No. 2, April 2002.
8. David Lahoz, Pedro Mateo, "Neural Network Ensembles for Classification Problems using Multiobjective Genetic Algorithms."
9. U. Alon, et al., "Broad Patterns of Gene Expression Revealed by Clustering Analysis of Tumor and Normal Colon Tissues Probed by Oligonucleotide Arrays," PNAS 96:6745–6750, 1999.