Radial basis function network ppt by Sheetal, Samreen and Dhanashri (sheetal katkar)
Radial basis functions are nonlinear activation functions used by artificial neural networks. The slides explain commonly used RBFs, Cover's theorem, the interpolation problem, and learning strategies.
The document is a report on implementing and testing a radial basis function neural network for clustering iris flower data. It introduces RBF networks and the methodology used, which involved locating RBF nodes as cluster centers, calculating Gaussian functions, training the RBF layer unsupervised and a perceptron layer supervised. Results show the network accurately clustered most iris flowers into the three expected categories when trained on the iris data set.
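The methodology summarized above (cluster centers as RBF nodes, Gaussian activations, a supervised output layer) can be sketched as follows. This is a minimal illustration, not the report's actual code: the use of scikit-learn's KMeans, the width `sigma`, and a least-squares fit standing in for the perceptron layer are all assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

# Unsupervised stage: locate the RBF nodes at cluster centers.
k = 3
centers = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).cluster_centers_

# Gaussian activations for every sample/center pair.
sigma = 1.0  # illustrative width
dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
H = np.exp(-dists**2 / (2 * sigma**2))  # shape (n_samples, k)

# Supervised stage: fit the output layer by least squares
# (a simple stand-in for the report's perceptron training).
T = np.eye(3)[y]  # one-hot targets
W, *_ = np.linalg.lstsq(H, T, rcond=None)
pred = np.argmax(H @ W, axis=1)
accuracy = (pred == y).mean()
```

Because the hidden layer is fixed after clustering, only the linear output weights need supervised training, which is what makes RBF networks cheap to train.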
RADIAL BASIS FUNCTION PROCESS NEURAL NETWORK TRAINING BASED ON GENERALIZED FR... (cseij)
To address the learning problem of the Radial Basis Function Process Neural Network (RBF-PNN), this paper proposes an optimization training method based on a genetic algorithm (GA) combined with simulated annealing (SA). By constructing a generalized Fréchet distance to measure similarity between time-varying function samples, the learning of the radial basis centre functions and connection weights is converted into training on the corresponding discrete sequence coefficients. The network training objective function is constructed according to the least-squares error criterion, and global optimization of the network parameters is carried out in the feasible solution space using the global search capability of GA and the probabilistic jumping property of SA. Experimental results show that the training algorithm improves network training efficiency and stability.
PSO-based Training, Pruning, and Ensembling of Extreme Learning Machine RBF N... (ijceronline)
Introduction to Radial Basis Function Networks (ESCOM)
This document provides an introduction to radial basis function (RBF) networks, a type of artificial neural network used for supervised learning problems. It describes how RBF networks are a type of linear model that uses radial basis functions as activation functions for hidden units. While RBF networks are nonlinear, the document emphasizes keeping the underlying mathematics and computations linear to simplify the problem and reduce computational costs compared to other neural network techniques that rely on nonlinear optimization algorithms. It reviews key concepts for RBF networks like least squares optimization, model selection, ridge regression, and forward selection techniques for building networks from data.
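The linear-in-the-parameters view described above reduces training to a regularized least-squares solve for the output weights, with no nonlinear optimization. A minimal sketch (the toy regression problem, Gaussian width, and ridge parameter are illustrative assumptions):

```python
import numpy as np

def rbf_design_matrix(X, centers, width):
    """Gaussian RBF activations: H[i, j] = exp(-||x_i - c_j||^2 / (2*width^2))."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-d**2 / (2 * width**2))

def fit_ridge_weights(H, t, lam):
    """Ridge regression: w = (H^T H + lam*I)^{-1} H^T t."""
    m = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(m), H.T @ t)

# Toy 1-D regression: y = sin(x), RBF centers on a grid.
rng = np.random.default_rng(0)
X = rng.uniform(0, 2 * np.pi, size=(50, 1))
t = np.sin(X[:, 0])
centers = np.linspace(0, 2 * np.pi, 10)[:, None]

H = rbf_design_matrix(X, centers, width=0.8)
w = fit_ridge_weights(H, t, lam=1e-3)
rmse = np.sqrt(np.mean((H @ w - t) ** 2))
```

The ridge term `lam` is the regularization knob the document's model-selection discussion is about: it trades fit against weight magnitude, and forward selection would grow `centers` one basis function at a time.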
Deep Learning Fast MRI Using Channel Attention in Magnitude Domain (Joonhyung Lee)
My presentation on how we participated in the fastMRI Challenge in 2019.
Aside from theoretical considerations, it also explains key implementation issues that arise in all deep learning for MRI such as disk I/O and CPU/GPU load balancing.
Used for presentation at ISBI 2020 Oral session.
Accidentally wrote the title as "Deep Learning Sum-of-Squares Images in Accelerated Parallel MRI". Sorry for the mistake!
[PR-325] Pixel-BERT: Aligning Image Pixels with Text by Deep Multi-Modal Tran... (Sunghoon Joo)
PR-325: Pixel-BERT: Aligning Image Pixels with Text by Deep Multi-Modal Transformers
paper link: https://arxiv.org/abs/2004.00849
youtube link: https://youtu.be/Kgh88DLHHTo
Web spam classification using supervised artificial neural network algorithms (aciijournal)
Due to the rapid growth in the technology employed by spammers, there is a need for classifiers that are more efficient, generic, and highly adaptive. Neural-network-based technologies have a high capacity for adaptation as well as generalization. To our knowledge, very little work has been done in this field using neural networks, and this paper aims to fill that gap. It evaluates the performance of three supervised learning algorithms for artificial neural networks by creating classifiers for the complex problem of classifying the latest web spam patterns: the conjugate gradient algorithm, resilient backpropagation, and the Levenberg-Marquardt algorithm.
Efficient design of feedforward network for pattern classification (IOSR Journals)
This document compares the performance of radial basis function (RBF) networks and multi-layer perceptron (MLP) networks for pattern classification tasks. It analyzes the training time of RBF and MLP networks on two datasets: a below poverty line (BPL) dataset with 293 samples and 13 features, and a breast cancer dataset with 699 samples and 9 features. For both datasets, RBF networks trained significantly faster than MLP networks using the same number of hidden neurons, without affecting classification performance. The document concludes that RBF networks perform training faster than MLP networks for these pattern classification problems.
ARTIFICIAL NEURAL NETWORK APPROACH TO MODELING OF POLYPROPYLENE REACTOR (ijac123)
This paper presents modeling of a highly nonlinear polymerization process using an artificial neural network approach for model-predictive purposes. Polymerization occurs in a fluidized-bed polypropylene reactor using a Ziegler-Natta catalyst, and the main objective was modeling the reactor production rate.
The data set used for identification of the model is real process data obtained from an existing polypropylene plant, and the identified model is a nonlinear autoregressive neural network with exogenous input. The performance of the trained network was verified using real process data, and its ability to predict the production rate is shown in the conclusion.
This document discusses various types of neural networks including feedback neural networks, self-organizing feature maps, and Hopfield networks. It provides details on Hopfield networks such as their architecture, training and testing algorithms. It also discusses issues like false minima problem in neural networks and techniques to address it like simulated annealing and stochastic update. Furthermore, it covers associative memory models like bidirectional associative memory and self-organizing maps.
The document presents a new batch training algorithm called Multiple Optimal Learning Factors (MOLF) for training a multi-layer perceptron neural network. MOLF uses a two-stage approach per iteration: 1) Newton's method is used to find a vector of optimal learning factors, one for each hidden unit, which is used to update the input weights. 2) Linear equations are solved to update the output weights. The algorithm analyzes how linear dependencies among inputs and hidden units can impact training, and develops an improved version of MOLF that is not affected by these dependencies. In experiments, the improved MOLF performs better than first-order methods with minimal overhead, and almost as well as second-order
This document discusses neural networks and multilayer feedforward neural network architectures. It describes how multilayer networks can solve nonlinear classification problems using hidden layers. The backpropagation algorithm is introduced as a way to train these networks by propagating error backwards from the output to adjust weights. The architecture of a neural network is explained, including input, hidden, and output nodes. Backpropagation is then described in more detail through its training process of forward passing input, calculating error at the output, and propagating this error backwards to update weights. Examples of backpropagation and its applications are also provided.
This document discusses classification using backpropagation in deep neural networks. It provides an overview of key concepts like perceptrons, multi-layer perceptrons, training MLPs, neural networks as classifiers, backpropagation, backpropagation with 3 hidden layers, gradient descent backpropagation, and challenges in deep neural network training. The document is authored by Bineesh Jose, a research scholar at the School of Computer Science, M G University in Kottayam.
The document describes the backpropagation algorithm, which is commonly used to train artificial neural networks. It calculates the gradient of a loss function with respect to the network's weights in order to minimize the loss during training. The backpropagation process involves propagating inputs forward and calculating errors backward to update weights. It has advantages like being fast, simple, and not requiring parameter tuning. However, it can be sensitive to noisy data and outliers. Applications of backpropagation include speech recognition, character recognition, and face recognition.
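The forward-pass/backward-pass weight update described above can be shown concretely with a one-hidden-layer network trained by gradient descent on XOR. This is a didactic sketch: the architecture, learning rate, and iteration count are illustrative choices, not taken from the document.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

def forward(X):
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

_, y = forward(X)
loss_before = np.mean((y - t) ** 2)

for _ in range(5000):
    # Forward pass: propagate inputs through the network.
    h, y = forward(X)
    # Backward pass: propagate the output error toward the input.
    dy = (y - t) * y * (1 - y)        # dLoss/dz2 for squared error
    dW2 = h.T @ dy; db2 = dy.sum(0)
    dh = dy @ W2.T * h * (1 - h)      # error propagated to the hidden layer
    dW1 = X.T @ dh; db1 = dh.sum(0)
    # Gradient-descent weight updates.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

_, y = forward(X)
loss_after = np.mean((y - t) ** 2)
```

The sensitivity to noisy data mentioned above comes from the same `dy` term: every outlier's error is pushed back through the weights on every pass.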
MLP-Mixer image_process_210613 deep learning paper review! (taeseon ryu)
Hello, this is the deep learning paper reading group!
The paper we introduce today is titled MLP-Mixer.
It is currently available only on arXiv and was published by the Google Brain team.
CNNs are the layers most widely used in computer vision, but recently networks such as the Transformer have begun to enter the vision domain as well, achieving SOTA in several areas. This paper succeeds in achieving results competitive with recent work using only multi-layer perceptrons.
허다운 of the image processing team kindly provided a detailed review of the paper. Thank you in advance for your interest!
Lecture for Neural Networks study group held on February 8, 2020.
Reference book: http://hagan.okstate.edu/nnd.html
Video: https://youtu.be/TyyoPU13ME0
Python demo codes: https://bit.ly/3893GHB
Initiated by Taiwan AI Group (https://www.facebook.com/groups/Taiwan.AI.Group/permalink/2017771298545301/)
Classification by back propagation, multi layered feed forward neural network... (bihira aggrey)
Classification by Back Propagation, Multi-layered Feed-forward Neural Networks: provides a basic introduction to classification in data mining with neural networks.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Road Network Extraction using Satellite Imagery (SUMITRAJ312049)
This is my internship project ppt on Road Network Extraction Using Satellite Imagery.
In this project, a robust and efficient method for the extraction of roads from a given set of satellite images is explained.
In this work, we implement the U-Net segmentation architecture on the Mnih et al. Massachusetts Roads Dataset for the task of road network extraction.
Large Scale Kernel Learning using Block Coordinate Descent (Shaleen Kumar Gupta)
This paper explores using block coordinate descent to scale kernel learning methods to large datasets. It compares exact kernel methods to two approximation techniques, Nystrom and random Fourier features, on speech, text, and image datasets. Experimental results show that Nystrom generally achieves better accuracy than random features but requires more iterations. The paper also analyzes the performance and scalability of computing kernel blocks in a distributed setting.
Convolutional neural networks (CNNs) learn multi-level features and perform classification jointly, and do so better than traditional approaches for image classification and segmentation problems. CNNs have four main components: convolution, nonlinearity, pooling, and fully connected layers. Convolution extracts features from the input image using filters. A nonlinear activation (such as ReLU) lets the network model nonlinear relationships. Pooling reduces dimensionality while retaining important information. The fully connected layer uses the resulting high-level features for classification. CNNs are trained end-to-end using backpropagation to minimize output errors by updating weights.
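The four components named above can be illustrated in plain NumPy. This is a didactic sketch, not an efficient implementation; the toy image and filter are assumptions for illustration.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (implemented as cross-correlation, as is
    conventional in CNN libraries): slide the filter over the image and
    take dot products to extract features."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def relu(x):
    """Nonlinearity: negative responses are clipped to zero."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Max pooling: reduce dimensionality, keeping the strongest activations."""
    H, W = x.shape
    H2, W2 = H // size, W // size
    return x[:H2*size, :W2*size].reshape(H2, size, W2, size).max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)     # toy 6x6 "image"
edge_filter = np.array([[-1.0, -1.0], [1.0, 1.0]])   # simple horizontal-edge detector
features = max_pool(relu(conv2d(image, edge_filter)))
# A fully connected layer would then act on features.ravel() for classification.
```

In a real CNN the filter values are not hand-written like `edge_filter` here; they are the weights that backpropagation learns end-to-end.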
PR-270: PP-YOLO: An Effective and Efficient Implementation of Object Detector (Jinwon Lee)
This is the review of paper #270 from the TensorFlow Korea paper reading group PR12.
This paper is PP-YOLO: An Effective and Efficient Implementation of Object Detector, from Baidu. By applying a variety of techniques to YOLOv3, it manages to catch both rabbits at once (?): very high accuracy together with very high speed. We took a deeper look at the various tricks used in the paper. If you are interested in the techniques used for object detection, such as deformable convolution, exponential moving average, DropBlock, IoU-aware prediction, grid sensitivity elimination, MatrixNMS, and CoordConv, the video and slides may be helpful!
Paper link: https://arxiv.org/abs/2007.12099
Video link: https://youtu.be/7v34cCE5H4k
Lecture for Neural Networks study group held on January 11, 2020.
Reference book: http://hagan.okstate.edu/nnd.html
Video: https://youtu.be/H4NKgliTFUw
Initiated by Taiwan AI Group (https://www.facebook.com/groups/Taiwan.AI.Group/permalink/2017771298545301/)
The document discusses a Bayesian approach called localized multi-kernel relevance vector machine (LMK-RVM) that uses multiple kernel functions to perform classification. LMK-RVM allows different kernel functions or parameters to be used in different areas of feature space, providing more flexibility than single-kernel models. It combines multi-kernel learning with the sparsity of the relevance vector machine (RVM) model. The document outlines LMK-RVM and provides examples showing it can improve classification accuracy and potentially provide sparser models compared to single-kernel approaches.
Optimum capacity allocation of distributed generation units using parallel ps... (eSAT Journals)
Abstract: This paper proposes the application of the Parallel Particle Swarm Optimization (PPSO) technique to find the optimal sizing of multiple DG (Distributed Generation) units in a radial distribution network, reducing real power losses and enhancing the voltage profile. The Message Passing Interface (MPI) is used for the parallelization of PSO, with the initial PSO population divided among the processors at run time. The proposed technique is tested on the standard 123-bus test system; the results show that simulation time is significantly reduced, and it is concluded that parallelization enhances the performance of basic PSO. The procedure is implemented in an environment in which OpenDSS (Open Distribution System Simulator) is driven from MATLAB: an adaptive-weight particle swarm optimization algorithm was developed in MATLAB, parallelization was achieved using MatlabMPI, and the unbalanced three-phase distribution load flow (DLF) was performed using the Electric Power Research Institute's (EPRI) open-source tool OpenDSS. Index Terms: Distributed Generation, Message Passing Interface, Optimal Placement, Parallel Particle Swarm Optimisation.
A trade union is defined as a continuous association of wage-earners formed to maintain or improve working conditions through collective bargaining. There are several key characteristics of trade unions including registration with the registrar of trade unions, independence from employers, and affiliation with central trade union organizations. Trade unions serve both militant functions like negotiating wages and working conditions as well as fraternal functions like providing welfare benefits and education. They operate at both national and industry levels and can take various forms based on membership. Democratic participation and control by members is important for a trade union's effectiveness.
A simplified project about Industrial Disputes as per the Industrial Disputes Act, 1947.
Also comprising of real cases of Strikes, Lockouts, Gherao.
This project also talks about the Trade Union Act, 1926.
The Industrial Disputes Act 1947 aims to provide safeguards to workers and facilitate the investigation and settlement of industrial disputes. It defines an industrial dispute and establishes various authorities such as Works Committees, Conciliation Officers, Boards of Conciliation, Labor Courts, and Tribunals to resolve disputes. The Act prohibits strikes and lock-outs in public utilities without proper notice. It also provides rights to laid-off workers regarding compensation and establishes procedures for retrenchment and notification of changes to service conditions.
An industrial dispute is a conflict between management and workers regarding terms of employment that can result in industrial actions like strikes or lockouts. Disputes generally arise due to issues like poor wages or working conditions. They negatively impact both parties through lost production and profits for management, and lost wages and hardship for workers. Industrial disputes are classified as interest disputes involving negotiations over new terms, or grievance disputes regarding unfair treatment. Common causes of disputes include industrial factors, management attitudes, government failures, and union rivalries. Strikes are a legitimate worker action that temporarily halt work in order to pressure employers, while lockouts are management imposing work stoppages.
The document summarizes key aspects of the Industrial Disputes Act 1947 in India. It defines industrial disputes, outlines the objectives to promote industrial harmony, and describes the types of industrial disputes that can arise. It also explains the authorities and mechanisms established for resolving industrial disputes, including prohibitions on strikes and lockouts, voluntary arbitration, and adjudication through labor courts, tribunals, and national tribunals.
Trade unions in India were formed to protect and promote the interests of workers by representing them, negotiating on their behalf, and giving them a voice in important decisions. Their key functions include collective bargaining for better wages and working conditions, and providing various services to members. While multiple unions can exist in one industry, having too many unions can cause issues, so a single union per industry is often suggested as the ideal solution. Some of the major national trade union organizations in India are AITUC, CITU, HMS, and INTUC.
1) Industrial disputes mainly arise between employers and employees regarding employment issues like wages, hours, terms of employment.
2) Causes of industrial disputes include industrial factors like dismissal or wages; management attitude like unwillingness to negotiate; issues with government machinery; and other factors like political instability.
3) Preventive measures for industrial disputes include appointing welfare officers, establishing tripartite and bipartite bodies for consultation, implementing standing orders to regulate employment conditions, having grievance procedures to address employee issues, and engaging in collective bargaining between unions and management.
Industrial relations encompass employment relationships and interactions between management and employees or among employees. There are various approaches to defining and analyzing industrial relations, including institutional, social psychology, and class-based definitions. Theories also examine factors like human resource management, employment relations, and the objectives and nature of industrial relations. Unions, management, and government all play important roles in industrial relations systems.
The document discusses the definition, objectives, and functions of trade unions. It notes that trade unions are formed to regulate relations between workers and employers, and to impose conditions on businesses. Their key objectives are to improve wages and working conditions for employees through collective bargaining and other means. The functions of trade unions can be categorized as militant (fighting for workers' rights), fraternal (providing welfare benefits), political, and those related to participation in management issues.
DESIGN AND IMPLEMENTATION OF BINARY NEURAL NETWORK LEARNING WITH FUZZY CLUSTE...cscpconf
In this paper, Design and Implementation of Binary Neural Network Learning with Fuzzy Clustering (DIBNNFC) is proposed to classify semi-supervised data. It is based on the concepts of binary neural networks and geometrical expansion. Parameters are updated according to the geometrical location of the training samples in the input space, and each sample in the training set is learned only once. The approach is semi-supervised: the training samples are semi-labelled, i.e. labels are known for some samples and unknown for others. The method starts with classification using the concept of the ETL algorithm, in which various classes are formed. Each class is then treated as a region, and the average of each region is calculated separately; these averages serve as region centres for clustering with the FCM algorithm. Once the clustering process is over and the labelling of the semi-supervised data is done, all samples are classified by DIBNNFC. The proposed method is exhaustively tested on different benchmark datasets, and it is found that, as the values of the training parameters increase, both the number of hidden neurons and the training time decrease. Results are reported on a real character-recognition dataset and compared with an existing semi-supervised classifier; the proposed approach, learned in semi-supervised fashion, leads to higher classification accuracy.
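The region-averaging step in the abstract above (treating each class as a region and using its per-class mean as a cluster centre handed to FCM) can be sketched in a few lines. This is a minimal illustration, not the authors' code; the function name `region_centres` and the toy data are assumptions.

```python
def region_centres(samples, labels):
    """Mean of each labelled region, usable as initial cluster centres
    for a fuzzy c-means pass."""
    sums, counts = {}, {}
    for x, lab in zip(samples, labels):
        acc = sums.setdefault(lab, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v            # accumulate each coordinate per label
        counts[lab] = counts.get(lab, 0) + 1
    # divide each accumulated sum by the region's sample count
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

centres = region_centres([[0, 0], [2, 2], [10, 10]], ["a", "a", "b"])
# -> {"a": [1.0, 1.0], "b": [10.0, 10.0]}
```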
This document summarizes research on improving image classification results using neural networks. It compares common image classification methods like support vector machines (SVM) and K-nearest neighbors (KNN). It then evaluates the performance of multilayer perceptron (MLP) neural networks and radial basis function (RBF) neural networks on image classification. The document tests various configurations of MLP and RBF networks on a dataset containing 2310 images across 7 classes. It finds that an MLP network with two hidden layers of 10 neurons each achieves the best results, with an average accuracy of 98.84%. This is significantly higher than the 84.47% average accuracy of RBF networks and outperforms KNN classification as well. The research concludes that neural networks, and the MLP configuration in particular, can substantially improve image classification results.
Adaptive modified backpropagation algorithm based on differential errorsIJCSEA Journal
A new, efficient modified backpropagation algorithm with an adaptive learning rate is proposed to increase convergence speed and minimize error. The method eliminates the initial fixing of the learning rate through trial and error, replacing it with an adaptive learning rate. In each iteration, the adaptive learning rates for the output and hidden layers are determined by calculating the differential linear and nonlinear errors of the output layer and hidden layer separately. In this method, each layer has a different learning rate in each iteration. The performance of the proposed algorithm is verified by the simulation results.
Comparison of Neural Network Training Functions for Hematoma Classification i...IOSR Journals
Classification is one of the most important tasks in application areas of artificial neural networks (ANNs). Training neural networks is a complex task in the supervised learning field of research, and the main difficulty in adopting ANNs is finding the most appropriate combination of learning, transfer, and training functions for the classification task. We compared the performance of three types of training algorithms in a feed-forward neural network for brain hematoma classification. In this work we selected gradient-descent-based backpropagation, gradient descent with momentum, and resilient backpropagation algorithms. Among conjugate-gradient-based algorithms, we selected scaled conjugate gradient backpropagation, conjugate gradient backpropagation with Polak-Ribiere updates (CGP), and conjugate gradient backpropagation with Fletcher-Reeves updates (CGF). The last category is quasi-Newton-based algorithms, from which the BFGS and Levenberg-Marquardt algorithms were selected. The proposed work compares the training algorithms on the basis of mean square error, accuracy, rate of convergence, and correctness of the classification. Our conclusions about the training functions are based on the simulation results.
The article presents part-of-speech tagging for Nepali text using three artificial neural network techniques. A novel algorithm for POS tagging is introduced. Features are extracted from the marginal probabilities of a Hidden Markov Model, and the extracted features are supplied as an input vector for each word to three different ANN architectures: a Radial Basis Function (RBF) network, a General Regression Neural Network (GRNN), and a feed-forward neural network. Two annotated tag sets are constructed for training and testing purposes. Results from all three techniques, applied to both sets, are compared. The GRNN-based POS tagging technique is found to perform best, producing accuracies of 100% and 98.32% on the training and testing sets, respectively.
Recognition of handwritten digits using rbf neural networkeSAT Journals
Abstract: Pattern recognition is required in many fields for different purposes. Methods based on radial basis function (RBF) neural networks are found to be very successful in pattern classification problems. Training a neural network is in general a challenging nonlinear optimization problem, and several algorithms have been proposed for choosing the RBF neural network prototypes and training the network. In this paper, an RBF neural network trained using a decoupled Kalman filter method is proposed for handwritten digit recognition applications. The efficacy of the proposed method is tested on handwritten digits of different fonts and found to be successful in recognizing the digits. Keywords: Neural network, RBF neural network, decoupled Kalman filter training, zoning method
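Several of the abstracts above rest on the same RBF forward pass: Gaussian hidden units centred on stored prototypes, followed by a linear output layer. A minimal sketch of that pass (not the decoupled-Kalman-filter training itself; the name `rbf_forward` and all numeric values are illustrative assumptions):

```python
import math

def rbf_forward(x, centres, widths, weights):
    """Forward pass of a toy RBF network: Gaussian hidden units centred
    on stored prototypes, followed by a linear output layer."""
    phi = [math.exp(-math.dist(x, c) ** 2 / (2.0 * s ** 2))
           for c, s in zip(centres, widths)]
    return sum(w * p for w, p in zip(weights, phi))

# Two prototypes; the input sits exactly on the first one, so the first
# Gaussian fires at 1.0 and the second contributes almost nothing.
y = rbf_forward([0.0, 0.0],
                centres=[[0.0, 0.0], [3.0, 4.0]],
                widths=[1.0, 1.0],
                weights=[2.0, 1.0])
# y is just above 2.0
```

Training then amounts to choosing the centres, widths, and output weights, which is exactly where the Kalman-filter and clustering methods in these papers differ.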
IJRET : International Journal of Research in Engineering and Technology is an international peer reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together Scientists, Academician, Field Engineers, Scholars and Students of related fields of Engineering and Technology.
Investigations on Hybrid Learning in ANFISIJERA Editor
Neural networks are attractive to many researchers because of their close resemblance to the structure of the brain, a characteristic not shared by many traditional systems. An Artificial Neural Network (ANN) is a network of interconnected artificial processing elements (called neurons) that cooperate with one another to solve specific problems. ANNs are inspired by the structure and functional aspects of biological nervous systems; they recognize patterns and adapt themselves to cope with changing environments. A fuzzy inference system incorporates human knowledge and performs inference and decision making. The integration of these two complementary approaches, together with certain derivative-free optimization techniques, results in a discipline called neuro-fuzzy computing. Within neuro-fuzzy development, a specific approach called the Adaptive Neuro-Fuzzy Inference System (ANFIS) has shown significant results in modeling nonlinear functions. The basic idea behind the paper is to design a system that uses a fuzzy system to represent knowledge in an interpretable manner, with learning ability derived from a Runge-Kutta learning method (RKLM) that adjusts its membership functions and parameters to enhance system performance. Finding appropriate membership functions and fuzzy rules is often a tedious process of trial and error; it requires users to understand the data before training, which is usually difficult when the database is relatively large. To overcome these problems, a hybrid of a Back Propagation Neural network (BPN) and RKLM can combine the advantages of the two systems while avoiding their disadvantages.
Classification of Iris Data using Kernel Radial Basis Probabilistic Neural N...Scientific Review SR
This document summarizes a study that evaluated the performance of a kernel radial basis probabilistic neural network (Kernel RBPNN) model for classifying iris data, compared to backpropagation, radial basis function, and radial basis probabilistic neural network models. The Kernel RBPNN model achieved the highest classification accuracy of 89.12% on test data from the iris dataset, performing better than the other models. It also had the fastest training time, being over 80 times faster than the radial basis function model. Analysis of the receiver operating characteristic curves showed that the Kernel RBPNN model had the largest area under the curve, indicating it had the best classification prediction capability out of the four models evaluated.
Classification of Iris Data using Kernel Radial Basis Probabilistic Neural Ne...Scientific Review
The Radial Basis Probabilistic Neural Network (RBPNN) has broad generalization capability and has been successfully applied in multiple fields. In this paper, the Euclidean distance of each data point in the RBPNN is extended by calculating its kernel-induced distance instead of the conventional sum-of-squares distance. The kernel function generalizes the distance metric by measuring the distance between two data points as if they were mapped into a high-dimensional space. Comparing the four constructed classification models (Kernel RBPNN, radial basis function networks, RBPNN, and back-propagation networks), results showed that classification of the iris data with Kernel RBPNN delivers outstanding performance.
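The kernel-induced distance mentioned above can be computed purely from kernel evaluations, because the squared distance in the implicit feature space expands to K(x,x) - 2K(x,y) + K(y,y). A small sketch assuming a Gaussian kernel (the function names are illustrative, not taken from the paper):

```python
import math

def gaussian_kernel(x, y, sigma=1.0):
    """K(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, y)) / (2.0 * sigma ** 2))

def kernel_distance(x, y, sigma=1.0):
    """Distance between x and y after the implicit feature-space mapping:
    d(x, y)^2 = K(x, x) - 2 K(x, y) + K(y, y)."""
    return math.sqrt(gaussian_kernel(x, x, sigma)
                     - 2.0 * gaussian_kernel(x, y, sigma)
                     + gaussian_kernel(y, y, sigma))

d = kernel_distance([0.0, 0.0], [1.0, 0.0])
# identical points give distance 0; distinct points a positive value
```

Swapping this in for the Euclidean distance is the only change the Kernel RBPNN construction requires at the distance-computation step.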
Web Spam Classification Using Supervised Artificial Neural Network Algorithmsaciijournal
Due to the rapid growth in the technology employed by spammers, there is a need for classifiers that are more efficient, generic, and highly adaptive. Neural-network-based technologies have a high capacity for adaptation as well as generalization. To our knowledge, very little work has been done in this field using neural networks, and we present this paper to fill that gap. This paper evaluates the performance of three supervised learning algorithms for artificial neural networks by creating classifiers for the complex problem of classifying the latest web spam patterns. These algorithms are the conjugate gradient algorithm, resilient backpropagation learning, and the Levenberg-Marquardt algorithm.
Architecture neural network deep optimizing based on self organizing feature ...journalBEEI
The performance of a feed-forward neural network (FNN) depends on its training algorithm and architecture selection. Several parameters determine the architecture of an FNN, such as the number of connections between layers, the number of hidden neurons in each hidden layer, and the number of hidden layers. The exponential number of possible architectural combinations is unmanageable by hand, so a specific architecture can be designed automatically by an algorithm that builds a system with better generalization ability. The FNN architecture can be determined using any of numerous optimization algorithms. This paper proposes a new methodology in which the number of hidden layers and their neurons is estimated by combining training with the self-organizing feature map (SOFM) algorithm, whose advantage is to show how the best architecture is selected automatically by the SOFM, using a testing-error criterion over a population of candidate architectures. The proposed approach is tested on four benchmark classification datasets of different sizes.
Efficient Forecasting of Exchange rates with Recurrent FLANNIOSR Journals
The document proposes a Functional Link Artificial Recurrent Neural Network (FLARNN) model for forecasting foreign exchange rates between currencies like the US dollar, Indian rupee, British pound, and Japanese yen. It compares the performance of the FLARNN model to existing neural network models like LMS and FLANN. The FLARNN uses functional expansion and recurrent connections to more accurately predict exchange rates up to 60 days in the future based on historical data. Experimental results show the FLARNN model consistently outperforms the other methods according to error convergence and Mean Average Percentage Error.
Application of support vector machines for prediction of anti hiv activity of...Alexander Decker
This document describes a study that used support vector machines (SVM) to develop a quantitative structure-activity relationship (QSAR) model to predict the anti-HIV activity of TIBO derivatives. The SVM model achieved high correlation (q2=0.96) and low error (RMSE=0.212), outperforming artificial neural networks and multiple linear regression models developed on the same data set. The results indicate that SVM is a valuable tool for QSAR modeling and predicting anti-HIV activity of chemical compounds.
This document compares the performance of two neural network architectures, multi-layer perceptron (MLP) and radial basis function (RBF) networks, on a face recognition system. It trains MLP networks using different variants of the backpropagation algorithm and compares the results to RBF networks. The document finds that RBF networks provide better generalization performance compared to backpropagation algorithms and have faster training times, making them more suitable for face recognition.
Implementation Of Back-Propagation Neural Network For Isolated Bangla Speech ...ijistjournal
This paper is concerned with the development of a back-propagation neural network for Bangla speech recognition. Ten Bangla digits were recorded from ten speakers and have been recognized. The features of these spoken digits were extracted by the method of Mel Frequency Cepstral Coefficient (MFCC) analysis. The MFCC features of five speakers were used to train the network with the back-propagation algorithm, and the MFCC features of the ten Bangla digit utterances, from 0 to 9, of another five speakers were used to test the system. All the methods and algorithms used in this research were implemented using the Turbo C and C++ languages. From our investigation it is seen that the developed system can successfully encode and analyze the MFCC features of the speech signal for recognition. The developed system achieved a recognition rate of about 96.332% for known speakers (i.e., speaker dependent) and 92% for unknown speakers (i.e., speaker independent).
Implementation Of Back-Propagation Neural Network For Isolated Bangla Speech ...ijistjournal
This document describes the implementation of a back-propagation neural network for isolated Bangla speech recognition. The network was trained on Mel Frequency Cepstral Coefficient (MFCC) features extracted from recordings of 10 Bangla digits spoken by 10 speakers. The network architecture included an input layer of 250 neurons, a hidden layer of 16 neurons, and an output layer of 10 neurons. The network was trained using backpropagation and achieved a recognition rate of 96.3% for known speakers and 92% for unknown speakers. The system demonstrates the potential for developing speaker-independent isolated digit speech recognition in Bangla.
Black-box modeling of nonlinear system using evolutionary neural NARX modelIJECEIAES
Nonlinear systems with uncertainty and disturbance are very difficult to model using a mathematical approach. Therefore, a black-box modeling approach requiring no prior knowledge is necessary. Several modeling approaches have been used to develop black-box models, such as fuzzy logic, neural networks, and evolutionary algorithms. In this paper, an evolutionary neural network, combining a neural network with a modified differential evolution algorithm, is applied to model a nonlinear system. The feasibility and effectiveness of the proposed modeling are tested on a piezoelectric actuator SISO system and an experimental quadruple-tank MIMO system.
Similar to Adaptive Training of Radial Basis Function Networks Based on Cooperative (20)
This document discusses self-organizing neural networks, including Kohonen networks and Adaptive Resonance Theory (ART). It provides details on Kohonen networks such as their basic structure, learning algorithm using neighborhoods, and biological origins. ART is introduced as a way to address the stability-plasticity dilemma in neural networks. The key aspects of ART1 are summarized, including its orienting and attentional subsystems, short and long term memory representations, and learning algorithm using a vigilance test. Examples of a Kohonen network and ART1 network are also included to illustrate their operation.
Self-organizing feature maps (SOFMs) are neural networks that learn to classify input vectors into similar groups. The network determines the winning neuron closest to the input vector and updates the weights of that neuron and its neighbors so that they more closely resemble the input vector. This causes neighboring neurons to learn similar vectors and the network to self-organize so that it classifies the input space uniformly. Techniques such as gradually reducing the neighborhood size and the learning rate are applied during training.
This document describes self-organizing maps and the Kohonen algorithm. Self-organizing maps perform unsupervised learning to represent high-dimensional input data in a low-dimensional network. The Kohonen algorithm iterates over the input data and adjusts the weights of the winning unit and its neighbors so that they more closely resemble the input. This maps similar data to adjacent units in the network.
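The winner-plus-neighborhood update described in the two summaries above can be sketched in a few lines. This is an illustrative one-dimensional SOM step, not any specific paper's implementation; the function name `som_step` and the toy values are assumptions.

```python
import math

def som_step(weights, x, radius, lr):
    """One Kohonen step: find the winning unit (closest weight vector),
    then move the winner and its grid neighbours toward the input x."""
    dists = [math.dist(w, x) for w in weights]
    winner = dists.index(min(dists))
    for i, w in enumerate(weights):
        # 1-D grid neighbourhood: units within `radius` of the winner
        if abs(i - winner) <= radius:
            weights[i] = [wi + lr * (xi - wi) for wi, xi in zip(w, x)]
    return winner

units = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]
winner = som_step(units, [1.1, 0.9], radius=1, lr=0.5)
# unit 1 wins; units 0, 1 and 2 all move halfway toward the input
```

Shrinking `radius` and `lr` over successive passes is what lets the map first organize globally and then fine-tune locally.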
This document describes a self-organizing neural system called ART-TEXTURE that is developed to categorize and classify textured image regions. ART-TEXTURE specializes existing FCD and ART models to achieve high competence in classifying textured scenes without unnecessary mechanisms. As the properties of its component models are "emergent" due to interactions, ART-TEXTURE exhibits new emergent properties for texture classification that are more than just the sum of its parts.
This document discusses self-organizing neural networks, including Kohonen networks and Adaptive Resonance Theory (ART). Kohonen networks use competitive learning to form topological mappings between input and output layers. Neighboring units respond to similar inputs, and learning updates weights of both the winning unit and its neighbors. ART networks learn stable recognition codes in response to input sequences and address the stability-plasticity dilemma by resetting matches that fail a vigilance test.
The document describes the Kohonen neural network, which can form topological maps of input features, similar to how the brain represents information. The Kohonen network learns without supervision to classify input patterns into groups based on their similarity, assigning each group to an output neuron. Learning modifies the connection weights so that similar patterns activate nearby neurons in the output layer.
This document describes adaptive resonance theory and ART networks. It explains that ART networks resolve the stability-plasticity dilemma of learning through a feedback mechanism between the input and output layers. It describes the basic architecture of an ART network, which includes an attentional subsystem for classification and an orienting subsystem for creating new categories. It also summarizes various adaptations of ART networks developed for different applications, such as pattern recognition.
This document describes the operation of an artificial neural network with 4 input neurons and 2 output neurons for classifying binary patterns. The connection weights are initialized and 3 input vectors are applied as examples. The weights are then updated as the network iteratively classifies the input patterns.
This document describes the Adaptive Resonance Theory (ART) model created by Stephen Grossberg to allow neural networks to learn new patterns plastically while stably retaining previously learned patterns. The ART model uses competition between neurons to categorize input patterns and adjusts the network weights to improve the categorization.
Adaptive Resonance Theory (ART) is an unsupervised neural network designed to overcome the stability-plasticity dilemma. ART networks can dynamically classify input data into stable clusters while remaining plastic to learn new clusters. ART-1 specifically handles binary input vectors using a fast, self-organizing hypothesis testing cycle between short-term memory layers F1 and F2. The vigilance parameter controls how closely top-down expectations from F2 must match bottom-up input patterns from F1 before F2 resets and the cycle repeats to find a better match.
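The vigilance test described above can be illustrated for the binary inputs ART-1 handles: the fraction of active input bits that the top-down prototype also carries must reach the vigilance level, otherwise F2 resets and the search cycle continues. A toy sketch under those assumptions (names and values are illustrative):

```python
def vigilance_match(input_vec, prototype, vigilance):
    """ART-1 style match test on binary vectors: the fraction of active
    input bits also present in the top-down prototype must reach the
    vigilance level, otherwise the chosen category is reset."""
    matched = sum(1 for i, p in zip(input_vec, prototype) if i == 1 and p == 1)
    active = sum(input_vec)
    return active > 0 and matched / active >= vigilance

x = [1, 1, 0, 1]
proto = [1, 0, 0, 1]
# 2 of the 3 active bits match, a ratio of about 0.67
accept = vigilance_match(x, proto, vigilance=0.5)   # passes the test
reject = vigilance_match(x, proto, vigilance=0.9)   # reset: search goes on
```

Raising the vigilance parameter therefore produces more, finer-grained categories; lowering it produces fewer, coarser ones.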
Adaptive resonance theory proposes that neural networks can learn new information without forgetting what was learned previously, by adding a feedback mechanism between the input layer and the competitive layer. The ART network achieves this by reaching a resonant state between the layers that permits learning only when the input is quickly recognized, or, when the input is unknown, by creating a new representation for it.
This document presents an introduction to the neocognitron, an artificial neural network architecture proposed for the recognition of handwritten characters. The neocognitron is based on the hierarchical organization of the visual cortex and consists of multiple levels of simple and complex cells. Simple cells extract features from the layer below, and complex cells integrate the responses of groups of simple cells. The neocognitron can recognize characters regardless of their position.
The document describes the architecture and operation of the neocognitron, a neural network conceived for the recognition of handwritten characters. The neocognitron has a hierarchical structure composed of S layers and C layers. S layers look for basic visual features, while C layers combine those features. Learning is performed by unsupervised weight adjustment between representatives of each layer. The network resolves ambiguities through lateral inhibition and recognizes multiple patterns.
The document provides biographical information about Professor Kunihiko Fukushima, a pioneer in the field of neural networks. It describes his invention of the Neocognitron, a hierarchical neural network for deformation invariant pattern recognition. The Neocognitron is able to recognize patterns that have been distorted through partial shifts, rotations, or other transformations. The document also discusses Fukushima's research interests in modeling neural networks to understand visual processing and active vision in the brain.
- In 1975, Kunihiko Fukushima introduced the Cognitron network, which was an extension of the original perceptron and was able to handle pattern recognition problems better than the perceptron.
- The Cognitron used multiple layers of convergent subcircuits that allowed it to discriminate between patterns to some degree, unlike the perceptron.
- Fukushima later modified the Cognitron into the Neocognitron in 1980 by adding additional summation nodes, which made the network able to recognize patterns regardless of their position in the visual field.
The counterpropagation network consists of three layers - an input layer, a hidden Kohonen layer, and an output Grossberg layer. The Kohonen layer uses competitive learning to categorize input patterns in an unsupervised manner. During operation, the input pattern activates a single node in the Kohonen layer, which then activates the appropriate output pattern in the Grossberg layer. Effectively, the counterpropagation network acts as a lookup table to map input patterns to associated output patterns by determining which stored pattern category the input belongs to.
The CounterPropagation algorithm updates a neural network with an input, hidden, and output layer. It identifies the hidden neuron with the highest net input, setting its activation to 1 and all others to 0. The output is then calculated as the weighted sum over the hidden neurons, which reduces to the weights of the links between the winning hidden neuron and the output neurons. This update rule works together with the CounterPropagation learning function to train the network.
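The forward pass described above amounts to a table lookup: winner-take-all on the Kohonen layer, after which the output is simply the Grossberg weight row of the winning unit. A minimal sketch (the function name and all values are illustrative assumptions):

```python
def cpn_forward(x, kohonen_w, grossberg_w):
    """Counterpropagation forward pass: winner-take-all on the Kohonen
    layer, then the output equals the Grossberg weights of the winner."""
    nets = [sum(wi * xi for wi, xi in zip(w, x)) for w in kohonen_w]
    winner = nets.index(max(nets))     # activation 1, all other units 0
    return grossberg_w[winner]         # weighted sum collapses to this row

kohonen_w = [[1.0, 0.0], [0.0, 1.0]]     # two stored input categories
grossberg_w = [[0.2, 0.8], [0.9, 0.1]]   # output pattern per category
out = cpn_forward([0.9, 0.1], kohonen_w, grossberg_w)   # -> [0.2, 0.8]
```

Because only the winner contributes, the network behaves exactly like the lookup table the summary above describes.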
Counterpropagation is a neural network that combines supervised and unsupervised learning to speed up the learning process. It consists of two subnetworks: a competitive Kohonen network for the hidden layer, and an OUTSTAR network connecting the hidden layer to the output layer. Training occurs in two phases, first dividing the patterns into clusters and then adjusting the weights between the hidden and output layers. This allows new patterns to be classified faster than with multilayer networks trained only with backpropagation.
The ART2 network is a continuous version of the original ART model, proposed in 1987, that can classify real-valued input vectors. It works with analog input values while keeping the same architecture as ART1 but with equal weights. It is used for recognition of images, signals, and odors. ARTMAP is a supervised architecture that creates stable categories by optimizing code compression and minimizing predictive errors. It has been applied in medical diagnosis, improving emergency care.
The Constitution of the United States establishes the fundamental principles of the federal government and guarantees certain civil rights. Article 1 establishes the legislative power and creates the United States Congress, which is composed of a House of Representatives and a Senate.
This presentation was provided by Steph Pollock of The American Psychological Association’s Journals Program, and Damita Snow, of The American Society of Civil Engineers (ASCE), for the initial session of NISO's 2024 Training Series "DEIA in the Scholarly Landscape." Session One: 'Setting Expectations: a DEIA Primer,' was held June 6, 2024.
A review of the growth of the Israel Genealogy Research Association Database Collection over the last 12 months. Our collection has now passed the 3-million mark and is still growing. See which archives have contributed the most, the different types of records we hold, and which years have had records added. You can also see what we have planned for the future.
Main Java[All of the Base Concepts}.docxadhitya5119
This is part 1 of my Java Learning Journey. It contains custom methods, classes, constructors, packages, multithreading, try-catch blocks, finally blocks, and more.
This slide deck is intended for master's students (MIBS & MIFB) at UUM. It is also useful for readers interested in the topic of contemporary Islamic banking.
Physiology and chemistry of skin and pigmentation, hair, scalp, lips, and nails; cleansing creams, lotions, face powders, face packs, lipsticks, bath products, soaps, and baby products.
Preparation and standardization of the following: tonics, bleaches, dentifrices (mouth washes and toothpastes), and cosmetics for nails.
ISO/IEC 27001, ISO/IEC 42001, and GDPR: Best Practices for Implementation and...PECB
Denis is a dynamic and results-driven Chief Information Officer (CIO) with a distinguished career spanning information systems analysis and technical project management. With a proven track record of spearheading the design and delivery of cutting-edge Information Management solutions, he has consistently elevated business operations, streamlined reporting functions, and maximized process efficiency.
Certified as an ISO/IEC 27001: Information Security Management Systems (ISMS) Lead Implementer, Data Protection Officer, and Cyber Risks Analyst, Denis brings a heightened focus on data security, privacy, and cyber resilience to every endeavor.
His expertise extends across a diverse spectrum of reporting, database, and web development applications, underpinned by an exceptional grasp of data storage and virtualization technologies. His proficiency in application testing, database administration, and data cleansing ensures seamless execution of complex projects.
What sets Denis apart is his comprehensive understanding of Business and Systems Analysis technologies, honed through involvement in all phases of the Software Development Lifecycle (SDLC). From meticulous requirements gathering to precise analysis, innovative design, rigorous development, thorough testing, and successful implementation, he has consistently delivered exceptional results.
Throughout his career, he has taken on multifaceted roles, from leading technical project management teams to owning solutions that drive operational excellence. His conscientious and proactive approach is unwavering, whether he is working independently or collaboratively within a team. His ability to connect with colleagues on a personal level underscores his commitment to fostering a harmonious and productive workplace environment.
Date: May 29, 2024
Tags: Information Security, ISO/IEC 27001, ISO/IEC 42001, Artificial Intelligence, GDPR
-------------------------------------------------------------------------------
Find out more about ISO training and certification services
Training: ISO/IEC 27001 Information Security Management System - EN | PECB
ISO/IEC 42001 Artificial Intelligence Management System - EN | PECB
General Data Protection Regulation (GDPR) - Training Courses - EN | PECB
Webinars: https://pecb.com/webinars
Article: https://pecb.com/article
-------------------------------------------------------------------------------
For more information about PECB:
Website: https://pecb.com/
LinkedIn: https://www.linkedin.com/company/pecb/
Facebook: https://www.facebook.com/PECBInternational/
Slideshare: http://www.slideshare.net/PECBCERTIFICATION
How to Manage Your Lost Opportunities in Odoo 17 CRMCeline George
Odoo 17 CRM allows us to track why we lose sales opportunities with "Lost Reasons." This helps analyze our sales process and identify areas for improvement. Here's how to configure lost reasons in Odoo 17 CRM
A Strategic Approach: GenAI in EducationPeter Windle
Artificial Intelligence (AI) technologies such as Generative AI, Image Generators and Large Language Models have had a dramatic impact on teaching, learning and assessment over the past 18 months. The most immediate threat AI posed was to Academic Integrity with Higher Education Institutes (HEIs) focusing their efforts on combating the use of GenAI in assessment. Guidelines were developed for staff and students, policies put in place too. Innovative educators have forged paths in the use of Generative AI for teaching, learning and assessments leading to pockets of transformation springing up across HEIs, often with little or no top-down guidance, support or direction.
This Gasta posits a strategic approach to integrating AI into HEIs to prepare staff, students and the curriculum for an evolving world and workplace. We will highlight the advantages of working with these technologies beyond the realm of teaching, learning and assessment by considering prompt engineering skills, industry impact, curriculum changes, and the need for staff upskilling. In contrast, not engaging strategically with Generative AI poses risks, including falling behind peers, missed opportunities and failing to ensure our graduates remain employable. The rapid evolution of AI technologies necessitates a proactive and strategic approach if we are to remain relevant.
The simplified electron and muon model, Oscillating Spacetime: The Foundation...RitikBhardwaj56
Discover the Simplified Electron and Muon Model: A New Wave-Based Approach to Understanding Particles delves into a groundbreaking theory that presents electrons and muons as rotating soliton waves within oscillating spacetime. Geared towards students, researchers, and science buffs, this book breaks down complex ideas into simple explanations. It covers topics such as electron waves, temporal dynamics, and the implications of this model on particle physics. With clear illustrations and easy-to-follow explanations, readers will gain a new outlook on the universe's fundamental nature.
The simplified electron and muon model, Oscillating Spacetime: The Foundation...
Adaptive Training of Radial Basis Function Networks Based on Cooperative
Evolution and Evolutionary Programming
Alexander P. Topchy, Oleg A. Lebedko, Victor V. Miagkikh and Nikola K. Kasabov¹
Research Institute for Multiprocessor Computer Systems,
2 Chekhova Str., GSP-284, Taganrog, 347928, Russia, apt@tsure.ru
¹ Department of Information Science, University of Otago,
Dunedin, P.O. Box 56, New Zealand, nkasabov@otago.ac.nz
Abstract
Neuro-fuzzy systems based on Radial Basis Function
Networks (RBFN) and other hybrid artificial
intelligence techniques are currently under intensive
investigation. This paper presents an RBFN training algorithm based on evolutionary programming and cooperative evolution. The algorithm alternately applies basis function adaptation and backpropagation training until a satisfactory error is achieved. The basis functions are adjusted through an error goal function obtained through training and testing of the second part of the network. The algorithm is tested on benchmark data sets. It is applicable to on-line adaptation of RBFNs and to building adaptive intelligent systems.
1. Introduction
Radial Basis Function Networks became very popular
due to several important advantages over traditional
multilayer perceptrons [1,2,14]:
• Locality of radial basis functions and feature extraction in hidden neurons, which allows the use of clustering algorithms and independent tuning of RBFN parameters.
• Sufficiency of one layer of non-linear elements for
establishing arbitrary input-output mapping.
• Solution of clustering problem can be performed
independently from the weights in output layers.
• RBFN output in sparsely trained areas of the input space is not random, but depends on the density of the pairs in the training data set [3].
These properties lead to potentially quicker learning in comparison to multilayer perceptrons trained by backpropagation. To some extent, RBFNs allow us to realise the classical idea of training layer by layer.
The standard approach to RBFN training includes k-
means clustering for calculation of radial functions
centres, P-nearest neighbour heuristic for definition of
cluster widths, and subsequent training of output layer
weights by least squares techniques [4, 14]. The last
step is conventionally implemented by means of direct
methods like singular value decomposition (SVD) or
iterative gradient descent. It has been shown [5] that
such a training procedure converges to a local
minimum of the evaluation function. Thus, the problem of RBFN learning remains rather complex for large practical applications, and finding global-search training algorithms is a subject of interest.
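The standard pipeline just described can be sketched as follows. This is a minimal illustration assuming numpy; the function names (`kmeans`, `train_rbfn`, `design_matrix`) and the simple least-squares solver are our own choices, not from the paper, and a production version would use SVD or gradient descent for the output layer as the text notes.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    # plain k-means for locating the radial function centres
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        # assign each pattern to its nearest centre, then recompute centres
        labels = np.argmin(((X[:, None] - centres) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return centres

def design_matrix(X, C, widths):
    # Gaussian hidden-layer activations plus a bias column
    r = np.sqrt(((X[:, None] - C) ** 2).sum(-1)) / widths
    return np.hstack([np.exp(-r ** 2 / 2), np.ones((len(X), 1))])

def train_rbfn(X, T, k, p=2):
    # 1) k-means clustering for the centres
    C = kmeans(X, k)
    # 2) P-nearest-neighbour heuristic: each width is the mean distance
    #    to the p nearest fellow centres
    D = np.sqrt(((C[:, None] - C) ** 2).sum(-1))
    widths = np.sort(D, axis=1)[:, 1:p + 1].mean(axis=1) + 1e-8
    # 3) least-squares fit of the output-layer weights
    Phi = design_matrix(X, C, widths)
    W, *_ = np.linalg.lstsq(Phi, T, rcond=None)
    return C, widths, W
```

With one centre per training point this pipeline interpolates the data exactly, which is why the paper treats clustering quality, not output-layer fitting, as the hard sub-problem.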
Evolutionary simulation is a promising approach to
solving many AI problems. The use of evolutionary
algorithms for neural network parametric and structural
learning has been shown to be efficient in a number of
applications, see e.g. [6]. However, the standard
approach, when the instances in the population are the
networks, has a number of drawbacks. The worst of
them is much larger computational complexity in
comparison to iterative search procedures processing a
single network. Moreover, functional equivalence of
the hidden layer elements leads to redundant genetic
description of the network in traditional genetic
algorithms for parametric optimisation [7].
The alternative approach presented here uses a
population of hidden layer neurons, but not a
population of RBF networks. Similar ideas appear in a
number of recent papers for various types of
architectures and evolutionary paradigms including
multilayer perceptrons trained by means of genetic [8] and evolutionary programming [9] techniques, and an RBFN cooperative-competitive genetic training algorithm [10]. All of these consider the neurons in a single network
as a population.
The presented algorithm itself is also based on the
principle of cooperative evolution. The result of such
an algorithm will be not the best instance, but the best
population as a whole. Under the principle of cooperative evolution, each instance being evolved solves a part of the problem; we are interested in obtaining not the best possible instance, but a population solving the whole problem, to obtain an optimal overall result. Thus, instances have to adapt in such a way that their synergy solves the problem. Such
an approach is very natural for neural networks, where
we have many small sub-components achieving the
goal together.
The solution of the RBFN learning problem can be
decomposed into a number of sub-problems:
1. The search for optimal location and size of clusters
in input features space.
2. Definition of parameters (weights and thresholds) of
the output layer by means of gradient procedure or
other methods, like singular values decomposition.
The first sub-problem is approached here by using
cooperative evolution. The RBFN learning (problem
2) is based on the evolutionary programming
paradigm, which employs a cooperative search strategy
for optimal network parameters oriented to pattern
classification tasks.
Evolutionary Programming (EP) [11] is an
evolutionary computational technique. In contrast to
Genetic Algorithms (GA), EP is based on the
assumption that evolution optimizes the behavior of an
instance, but not the underlying genetic code. Thus,
EP is focused on the phenotypic level of evolution.
Mutations in EP are the sole source of modifications to feasible solutions. Crossover and
similar genetic operators are not used. The offspring in
EP is created from parental solutions by means of
cloning with subsequent mutations. Mutations are
implemented as addition of normally distributed
random values with zero mean and dynamically
adjustable variances to components of solutions.
Standard deviation in mutations is inversely
proportional to the quality of parental solutions. The
selection procedure is also different in EP and can be
viewed as a form of stochastic tournament among
parents and progeny.
The paper is organised as follows. Section 2 is devoted
to the formal statement of RBFN training problem.
The fitness function and radial basis function crowding elimination are described in Sections 3 and 4 respectively, followed by a description of the algorithm
(section 5), experimental results (section 6) and
conclusion in section 7.
2. The Statement of the RBFN Training
Problem
The activity Fj of the jth output neuron of an RBFN depends on the input vector x as follows:

F_j(x) = ω_{j0} + Σ_{i=1..L} v_{ij} φ_i(x),   (1)
where ωj0 is the value of the threshold on jth output; vij
is the weight between the ith hidden neuron and the jth
output; φi – non-linear transformation, performed by
hidden neuron i.
The L radially symmetric basis functions perform the non-linear transformation φi(x) = φ(||x − ci|| / di), where ci ∈ ℝ^n is the centre of the basis function φi, di is the deviation (scaling factor) for the radius ||x − ci||, and ||·|| is the Euclidean norm in ℝ^n. The Gaussian function φ(r) = exp(−r²/2) is frequently used as the non-linear transformation φ.
RBFN training can be considered as an optimisation problem, where the error function E is the evaluation function being minimised. E is usually defined as the average squared deviation of the network outputs from the desired output values on a given training data set:

E = (1 / (K M)) Σ_{i=1..K} Σ_{j=1..M} (t_j^i − o_j^i)²,   (2)
where K is the number of input-output pairs; t_j^i is the target value for output neuron j in reaction to the ith input pattern; o_j^i = F_j(x_i) is the actual value generated by output neuron j after feeding the ith input pattern; and M is the dimensionality of the output space. For convenience, the target values were limited to {0,1} in our experiments.
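Equations (1) and (2) can be transcribed directly. The sketch below uses plain Python with illustrative argument names (V[i][j] plays the role of v_ij, w0[j] of ω_j0); the explicit loops are chosen to mirror the notation, not for speed.

```python
import math

def rbf_output(x, centres, widths, V, w0):
    # network output per Eq. (1): F_j(x) = w0[j] + sum_i V[i][j] * phi_i(x),
    # with the Gaussian transformation phi(r) = exp(-r^2 / 2)
    phis = []
    for c, d in zip(centres, widths):
        r = math.dist(x, c) / d          # ||x - c_i|| / d_i
        phis.append(math.exp(-r * r / 2))
    return [w0[j] + sum(V[i][j] * phis[i] for i in range(len(phis)))
            for j in range(len(w0))]

def error(X, T, centres, widths, V, w0):
    # average squared deviation per Eq. (2), normalised by K * M
    K, M = len(X), len(w0)
    total = 0.0
    for x, t in zip(X, T):
        o = rbf_output(x, centres, widths, V, w0)
        total += sum((t[j] - o[j]) ** 2 for j in range(M))
    return total / (K * M)
```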
3. Fitness Estimation
The main purpose of hidden elements in a network
with conventional architecture solving a classification
problem is to provide separating hyper-surfaces that divide the patterns (the points in input space) of different classes in such a way that the patterns belonging to the same class appear on the same side only. In networks with radial basis units such a surface
can be thought of as a union of hyper-spheres or
ellipses. The patterns of the class must be grouped by
basis units in such a way that they do not overlap with
patterns of other classes. One of the mentioned
properties of radial basis Gaussian functions is locality
of activity, which means that the influence of the
function φi at some distance from the centre ci can be neglected. This does not hold for common multilayer perceptron networks, where the effect of all hidden neurons should be taken into account at each point of the input space. Locality in RBFNs creates an opportunity to estimate the efficiency of each element
separately from others. As a function for estimation of
the quality of the element φj, the following function can be used:

e_j = ( Σ_{x_k ∈ class r} φ_j(x_k) ) / ( Σ_{l=1..K} φ_j(x_l) ),   (3)
where ej is the value of jth element efficiency
(quality); φj(x) is the value on the output of the jth
element in response to presentation of the input
pattern x; xk is the pattern belonging to the class r,
which has maximal sum of activities for jth neuron,
and xl are the patterns of all classes. In other words, in
the course of pattern presentation the partial sums of
activities of a given unit for each class is calculated: Sk
= sum of φ (x) over all x belonging to the class k,
k = 1,…,C, where C is the number of classes. Only the class r with maximal S_r contributes to the numerator of (3). During fitness calculation it is possible to find the values each neuron attains on each of the classes; then we find the maximal among them. In other
words, this function defines how much element φj
distinguishes class r from the other classes. The goal
of the learning procedure is to maximize the values of
ej for all hidden neurons. However, the location of
basis units in input space should be different in order
to achieve optimal ‘niching’, i.e. to have no units
performing the same function. ‘Niching’ of neurons
should distribute them over the patterns, which is not
enforced in expression (3). This problem can be solved
by taking into account boundaries between the classes
as discussed in detail below.
Generally, ej is used for guiding the search through the
space of RBF centres and widths, and estimation of the
amount of effort to be spent for improvement of
current clustering. This mechanism of credit
assignment provides the appropriate direction for
search by means of evolutionary programming.
Another advantage of the fitness function is that it does
not require the values of output layer weights to be
calculated. Thus, the search for the best values for the
centres can be performed before training of the
parameters of the output layer, that significantly
reduces the complexity. The output layer training is
only required for determination of the total error of the
network (2), which is used in the termination condition
of the algorithm.
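Fitness function (3) reduces to per-class bookkeeping of a single unit's activities. A minimal sketch, assuming the unit's outputs on all K patterns have already been computed (the function name `fitness` and its argument layout are illustrative):

```python
def fitness(phi_vals, labels, classes):
    # e_j per Eq. (3): the activity a unit collects on its best-matched
    # class r, divided by its total activity over all K patterns.
    # phi_vals[k] = phi_j(x_k); labels[k] = class of pattern x_k.
    sums = {c: 0.0 for c in classes}          # partial sums S_k per class
    for v, lab in zip(phi_vals, labels):
        sums[lab] += v
    total = sum(phi_vals)
    # only the class with maximal partial sum contributes to the numerator
    return max(sums.values()) / total if total > 0 else 0.0
```

Note that, as the text says, no output-layer weights are needed: the fitness depends only on hidden-unit activities, so centres can be searched before the output layer is trained.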
4. RBF Crowding Avoidance
In the cooperative approach to NN learning the
problem of “division of the work” among the neurons
should be solved. There should not exist elements which perform identical functions in pattern classification. If such competing neurons exist, some
of them should be changed in a way that they perform
more useful functions, which comprises the process of
‘niching’.
It is obvious that function (3) does not satisfy this
requirement. There exist local maxima which many elements tend to occupy. These maxima have basins of attraction of different sizes, which cause crowding of several Gaussians in the same area. However, some
other areas could remain uncovered. The simplest way
of solving this problem is to calculate the distance || ci
– cj || between the centres of the Gaussians and
compare it with threshold distance. If the distance
between the Gaussians φi and φj is less than this
threshold then RBFs are considered to be competitive.
However, this way is not invariant in respect to the
distance between the patterns of different classes.
A more adequate measure of the overlapping between two elements φi and φj can be expressed by the following function:

R_ij = ( Σ_{k=1..K} φ_i(x_k) φ_j(x_k) ) / sqrt( Σ_{k=1..K} φ_i²(x_k) · Σ_{l=1..K} φ_j²(x_l) ),   (4)
which measures orthogonality of normalised neuron
activities. It approaches zero for totally non-
overlapping neurons and equals to 1 for neurons
performing identical functions. However, such a crowding function is too computationally expensive to apply to a large number of training patterns. A trade-off heuristic for determining overlapping units is used here instead. It compares only the patterns for which a neuron's output is maximal (and next to maximal in our implementation). Thus it requires only one (or two in our case) additional comparisons for each pattern. In the general case in (4), only n (n ≪ K) points can be taken into account, for which the neuron's output has
maximal value. If the obtained value is greater than 0,
then the two elements are considered to be
overlapping. This is an efficient and inexpensive way
to find approximate values for Rij .
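The full measure (4) and the cheap heuristic can both be sketched in a few lines; `overlap` and `competing` are illustrative names, and the heuristic below approximates the text's rule by evaluating (4) only on each unit's n most-activating patterns:

```python
import math

def overlap(phi_i, phi_j):
    # full crowding measure R_ij per Eq. (4): normalised inner product of
    # the two units' activity vectors over the training patterns.
    # 0 = totally non-overlapping units, 1 = identical units.
    num = sum(a * b for a, b in zip(phi_i, phi_j))
    den = math.sqrt(sum(a * a for a in phi_i) * sum(b * b for b in phi_j))
    return num / den if den > 0 else 0.0

def competing(phi_i, phi_j, n=2):
    # trade-off heuristic: restrict (4) to the n patterns on which each
    # unit's output is maximal; any positive value marks overlap
    top = sorted(range(len(phi_i)), key=lambda k: phi_i[k], reverse=True)[:n]
    top += sorted(range(len(phi_j)), key=lambda k: phi_j[k], reverse=True)[:n]
    idx = sorted(set(top))
    return overlap([phi_i[k] for k in idx], [phi_j[k] for k in idx]) > 0.0
```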
In the approach presented below, the elements φi and φj are considered to be competing if they are the closest to the same pattern xk (or n patterns as mentioned
above). If competing elements are found, the one
having maximal value of the fitness e has to be kept
unchanged and the rest of them can be modified. Since we can easily find the patterns having the maximum impact on the error during output layer training, this information can be used for placing the centres of the elements to be changed at the points corresponding to such patterns.
5. The Description of the Algorithm
The pseudo-code of the algorithm can be outlined as
follows:
I. FOR each number of basis unit (centres) from a
given set DO the following steps:
1. Generation of initial values for centres and
deviations of all elements φj.
2. Calculate the efficiency ej of each basis unit.
3. Train output layer by I iterations of gradient
procedure.
4. Find total error E of the network.
5. If E is less than desired threshold, then go to II.
6. Find elements performing almost identical
functions. If there are no such units go to II.
7. Reallocate crowded Gaussians to the areas of the worst classified patterns.
8. Generate a new offspring of basis function neurons.
9. Calculate the fitness of the new offspring of
neurons ej.
10. Choose the better population between the offspring
and the parent.
11. Go to step 3.
II. Select the optimum RBFN’s structure having
number of centres with a minimum total error E.
The output layer is trained by a small number of gradient descent iterations (I steps of the delta rule, the value of I being incremented in the example below every 20 generations, starting from 5).
Mutation of the parent neuron and creation of the offspring is performed in step 8. The centres and deviations of the offspring Gaussians are calculated in accordance with the following expressions:

c'_ij = c_ij + N(0, α E / e_i),
d'_i = d_i + N(0, β E / e_i),   (5)

where c_ij is the jth component of the centre of the ith neuron; N(a, b) is the normal distribution with mean a and variance b; α and β are scaling factors; E is the error of the network; and e_i is the fitness of the ith neuron. In the experiments these parameters were set to α = 0.16 and β = 0.05.
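A sketch of this mutation step, under our reading of the text: the mutation variance grows with the network error E and shrinks for fitter parents (standard deviation inversely proportional to parental quality). The exact variance schedule `alpha * E / fitness_e` is our assumption from that description, and the function name is illustrative.

```python
import math
import random

def mutate(centre, width, fitness_e, E, alpha=0.16, beta=0.05):
    # offspring creation per step 8: clone the parent Gaussian and add
    # zero-mean normal noise to its centre components and its deviation
    sd_c = math.sqrt(alpha * E / fitness_e)
    sd_d = math.sqrt(beta * E / fitness_e)
    child_centre = [c + random.gauss(0.0, sd_c) for c in centre]
    child_width = max(width + random.gauss(0.0, sd_d), 1e-6)  # keep positive
    return child_centre, child_width
```

As the network error falls, mutations automatically become finer, which matches the EP behaviour described in the introduction.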
Since, in contrast to the corresponding learning
algorithms with logistic functions [9], it is possible to
perform evaluation and selection of the Gaussians of
the same parent independently from the others, several
offspring can be processed during one generation. This
is attributed to the local character of the basis functions and to the premises on which function (3) is derived. The
selection procedure performed in step 10 can be
performed either probabilistically or deterministically.
If several offspring are created, different tournament-
like strategies can be used. However, it is possible to
perform deterministic selection too.
6. Experimental results
We are presenting the results for two well-known
classification problems. The iris data classification
problem, used by Fisher in 1936, remains the standard
benchmark for testing pattern classification methods.
The data set contains 150 patterns of three classes (50
for each class). Four continuous features correspond to
sepal width and length, petal width and length. Three
classes of plant are versicolour, setosa and virginica.
One of them is linearly separable from two others.
RBFNs with 5, 10, 15, 20 and 25 radial Gaussian
elements, four inputs and three outputs were used for
classification. Results were averaged over 10 trials.
Obtained measurements were compared with the
traditional k-means clustering with SVD training and
with a cooperative, competitive genetic training as in
[10]. Fig. 1 shows the number of misclassified patterns in the training data set after an approximately equal amount of CPU time, corresponding to 100 generations of our algorithm. In each generation we evaluate two offspring networks, so the average number of objective function evaluations is twice the number of generations. It is clear that for larger networks the relative performance of the algorithm increases. Complete classification was achieved in all runs with 25 elements, in contrast to the other methods.
Locations and relative widths of basis units for RBFN
with 5 elements are shown in a 2D projection in the
feature 3 and feature 4 plane, at the end of training in
Fig. 2.
Figure 3 shows the dependence of the total network error on the number of iterations for different numbers of radial basis functions L. One iteration corresponds on average to 2 presentations of the training data set. The MSE decreases quickly and reaches the level of 0.1 required for this problem after 10-40 iterations. In general, the time for reaching the error threshold MSE = 0.1 was comparable with that of the k-means algorithm.
Figure 1: Iris problem: comparing different algorithms (k-means, cooperative-competitive GA, cooperative EP) after 100 generations of cooperative EP. Axes: average number of misclassified patterns vs. number of RBF units.
Figure 2: Distribution of basis elements being evolved, shown for the three iris classes (setosa, versicolor, virginica). Axes: feature 3 (petal length, cm) vs. feature 4 (petal width, cm).
After reaching certain value of the error, the learning
rate significantly decreases. This is attributed to the
property of EP as a global search algorithm, to find
quickly the value in the vicinity of optimum, but then
spend a rather long time for final tuning. Several
iterations of the gradient search procedure can be used
to speed up the convergence to final values.
Other experiments were performed on the Glass Test
problem. This problem has nine features for each of 214 patterns, divided into training and testing sets, to be separated into 6 classes. The results of training after
200 generations were equal to results obtained on MLP
with architecture 9-16-8-6 (see e.g. [12]) with 572
weights and sigmoidal non-linearities.
Figure 3: Iris problem. Convergence plot (normalized MSE vs. generations) for 5, 10 and 20 basis units.
The authors have chosen an RBFN with 36 Gaussian
elements having almost the same number of adjustable
parameters. The values of MSE on training and testing
data sets show approximately equal generalisation
properties for MLP and RBFN for the given test
problem. However, the RBFN error for the training
data set is less after the same training time than the one
for the MLP from [12].
7. Conclusion
The paper presents a novel algorithm for evolutionary
training of RBFNs. Evolving a single network as a population of neurons, rather than as a collection of networks, seems to be an approach that avoids the large computational complexity of traditional evolutionary algorithms for NN training. Moreover,
many difficulties associated with standard procedures
of genetic encoding can be successfully solved.
However, this approach requires more precise analysis
of the network decomposition and introduction of
relative fitness functions for the elements. In this
paper, RBFN adaptation is performed by means of
evolutionary programming. The described algorithm
for classification problems is competitive with the
traditional RBFN training techniques, and shows better
results for problems with large dimensionality.
A strong advantage of the new algorithm is its ability
to gradually change and adapt basis functions within
the learning procedure, which includes alternating refinement of the basis functions and gradient descent training. This advantage makes it applicable to
on-line adaptive systems.
The presented learning algorithm can be easily
extended and applied to the solution of a large class of approximation problems by changing the fitness
function. In this way, the described evolutionary
learning approach can be used in almost every
application of RBFN from control to image processing.
In particular, this algorithm was used in neuro-fuzzy
system for production sales analysis. The algorithm
can be applied to other neuro-fuzzy architectures such
as the fuzzy neural network FuNN [13,14,15]. A
significant difference between the RBFN and the
FuNN architectures is that the former is based on basis
units which define cluster centres in the whole input
space, while the latter one uses fuzzy membership
functions for quantisation of the space of each
individual input variable. FuNN has the advantage of
adjusting the membership functions and the fuzzy rules
embedded in its structure during the operation of the
system (on-line) [15,16]. In addition to expanding the
number of the basis units and finding their optimum
number for a particular training data set, on-line adaptive RBFN and FuNN structures are currently being developed which allow for ‘shrinking’, so
the number of the basis units (membership functions in
the FuNN case) can be reduced if necessary according
to new data coming on-line.
Acknowledgements
This research is supported by Research Institute for
Multiprocessor Computer Systems, Taganrog, Russia,
and partially supported by a research grant PGSF
UOO-606 from the New Zealand Foundation for
Research Science and Technology.
References
[1] M. J. D. Powell, The Theory of Radial Basis
Functions Approximation, in Advances of
Numerical Analysis, pp. 105–210, Oxford:
Clarendon Press, 1992.
[2] F. Girosi, Some Extensions of Radial Basis
Functions and their Applications in Artificial
Intelligence, Computers Math. Applic., vol. 24,
no. 12, pp. 61-80, 1992.
[3] J.A. Leonard, M.A. Kramer and L.H. Ungar,
Using Radial Basis Functions to Approximate a
Function and Its Error Bounds, IEEE Trans. on Neural Networks, vol. 3, no. 4, pp. 624-627, 1992.
[4] J. Moody and C. J. Darken, Fast Learning in
Networks of Locally Tuned Processing Units,
Neural Computation, vol. 1, pp. 281–294, 1989.
[5] Y. Linde, A. Buzo and R. Gray, An Algorithm for Vector Quantizer Design, IEEE Trans. on Communications, vol. COM-28, no. 1, pp. 84-95, 1980.
[6] J.D. Schaffer, D. Whitley and L.J. Eshelman,
Combinations of genetic algorithms and neural
networks: A survey of the state of the art, in
Combinations of Genetic Algorithms and Neural
Networks, pp. 1-37, IEEE Computer Society
Press, 1992.
[7] P.J. Angeline, G.M. Saunders, and J.B. Pollack, An
evolutionary algorithm that constructs recurrent
neural networks, IEEE Transactions on Neural
Networks, vol.5, no. 1, pp.54-65, 1994.
[8] D.Prados, A fast supervised learning algorithm
for large multilayered neural networks, in
Proceedings of 1993 IEEE International
Conference on Neural Networks, San Francisco,
v.2, pp.778-782, 1993
[9] A.Topchy, O.Lebedko, V. Miagkikh, Fast
Learning in Multilayered Neural Networks by
Means of Hybrid Evolutionary and Gradient
Algorithm, in Proc. of the First Int. Conf. on
Evolutionary Computations and Its Applications,
ed. E. D. Goodman et al., (RAN, Moscow),
pp.390–399, 1996.
[10] B. A. Whitehead and T.D. Choate, Cooperative -
Competitive Genetic Evolution of Radial Basis
Function Centres and Widths for Time Series
Prediction, IEEE Transactions on Neural
Networks, vol. 7, no. 8, pp.869-880, 1996.
[11] L.J. Fogel, A.J. Owens and M.J. Walsh, Artificial Intelligence through Simulated Evolution, John Wiley & Sons, 1966.
[12] L. Prechelt, Proben1-A set of neural network
benchmark problems and rules, University
Karlsruhe, Technical Report 21/94, 1994
[13] N. Kasabov, R. Kozma and M. Watts, Optimisation and adaptation of fuzzy neural networks through genetic algorithms and learning-with-forgetting methods and applications for phoneme-based speech recognition, Information Sciences (1997), accepted.
[14] N. Kasabov, Foundations of Neural networks,
Fuzzy Systems and Knowledge Engineering, MIT
Press, 1996
[15] N. Kasabov, J.S. Kim, M. Watts and A. Gray, FuNN/2 - A fuzzy neural network architecture for adaptive learning and knowledge acquisition, Information Sciences: Applications, 1997, in print.