This document discusses using repeated simulations of a crisp neural network to obtain quasi-fuzzy weight sets (QFWS) that can be used to initialize fuzzy neural networks. The key points are:
1) A crisp neural network is repeatedly trained on input-output data to model an unknown function. The connection weights change with each simulation.
2) Recording the weights from multiple simulations produces quasi-fuzzy weight sets, where each weight is a fuzzy set rather than a single value.
3) These QFWS can provide initial solutions for training type-I fuzzy neural networks with reduced computational complexity compared to random initialization.
4) The QFWS follow fuzzy arithmetic and allow both numerical and linguistic data to be processed.
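A minimal sketch of the repeated-simulation idea, under assumptions of my own: the one-neuron model, the target function, and the interval summary below are illustrative stand-ins, not the paper's actual network or fuzzy-set construction. Training the same crisp model from several random starts and recording the spread of each final weight gives a crude quasi-fuzzy weight set.

```python
import random

def train_once(data, seed, epochs=200, lr=0.1):
    """Train a single linear neuron with the delta rule from a random start."""
    rng = random.Random(seed)
    w, b = rng.uniform(-1, 1), rng.uniform(-1, 1)
    for _ in range(epochs):
        for x, y in data:
            err = y - (w * x + b)
            w += lr * err * x
            b += lr * err
    return w, b

# Target function y = 2x + 1; repeated simulations start from different weights.
data = [(x / 10.0, 2 * x / 10.0 + 1) for x in range(-10, 11)]
runs = [train_once(data, seed) for seed in range(10)]
ws = [w for w, _ in runs]

# A quasi-fuzzy weight set summarised here as a support interval over the runs.
qfws_w = (min(ws), max(ws))
print(qfws_w)
```

Such an interval (or a full membership function built from the histogram of recorded weights) can then seed the corresponding fuzzy weight of a type-I fuzzy neural network instead of a random initial value.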
This document discusses kernel methods and radial basis function (RBF) networks. It begins with an introduction and overview of Cover's theory of separability of patterns. It then revisits the XOR problem and shows how it can be solved using Gaussian hidden functions. The interpolation problem is explained and how RBF networks can perform strict interpolation through a set of training data points. Radial basis functions that satisfy Micchelli's theorem allowing for a nonsingular interpolation matrix are presented. Finally, the structure and training of RBF networks using k-means clustering and recursive least squares estimation is covered.
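The strict-interpolation step described above can be sketched directly on the XOR problem: place one Gaussian centre on each training point, build the interpolation matrix (nonsingular by Micchelli's theorem for distinct centres), and solve for the output weights. The width parameter and the tiny elimination solver are illustrative choices, not taken from the document.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def phi(p, c, width=1.0):
    """Gaussian radial basis function centred at c."""
    d2 = sum((pi - ci) ** 2 for pi, ci in zip(p, c))
    return math.exp(-d2 / (2 * width ** 2))

# Strict interpolation: one centre per training point (the XOR truth table).
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
d = [0.0, 1.0, 1.0, 0.0]
Phi = [[phi(x, c) for c in X] for x in X]
w = solve(Phi, d)

def f(p):
    return sum(wi * phi(p, c) for wi, c in zip(w, X))

print([round(f(x), 6) for x in X])  # reproduces [0, 1, 1, 0]
```

The same machinery scales to the k-means + recursive-least-squares training mentioned in the document, where centres come from clustering rather than from every data point.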
This document provides an overview of multilayer perceptrons (MLPs) and the backpropagation algorithm. It defines MLPs as neural networks with multiple hidden layers that can solve nonlinear problems. The backpropagation algorithm is introduced as a method for training MLPs by propagating error signals backward from the output to inner layers. Key steps include calculating the error at each neuron, determining the gradient to update weights, and using this to minimize overall network error through iterative weight adjustment.
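The backward error propagation described above can be sketched on a tiny 2-2-1 network. This is a generic illustration (sigmoid units, squared error, my own learning rate and seed), not the document's specific formulation; the key steps are the output-layer error signal, the hidden-layer signals propagated back through the output weights, and the iterative weight adjustment.

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

rng = random.Random(0)
W1 = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # hidden: 2 inputs + bias
W2 = [rng.uniform(-1, 1) for _ in range(3)]                      # output: 2 hidden + bias

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in W1]
    y = sigmoid(W2[0] * h[0] + W2[1] * h[1] + W2[2])
    return h, y

def loss():
    return sum((t - forward(x)[1]) ** 2 for x, t in data)

loss0 = loss()
lr = 0.5
for _ in range(2000):
    for x, t in data:
        h, y = forward(x)
        # Output error signal: gradient of squared error through the sigmoid.
        dy = (y - t) * y * (1 - y)
        # Hidden error signals, propagated backward through the output weights.
        dh = [dy * W2[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):
            W2[j] -= lr * dy * h[j]
        W2[2] -= lr * dy
        for j in range(2):
            W1[j][0] -= lr * dh[j] * x[0]
            W1[j][1] -= lr * dh[j] * x[1]
            W1[j][2] -= lr * dh[j]
print(loss0, loss())  # the training error drops as the weights are adjusted
```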
This document discusses support vector machines (SVMs) for pattern classification. It begins with an introduction to SVMs, noting that they construct a hyperplane to maximize the margin of separation between positive and negative examples. It then covers finding the optimal hyperplane for linearly separable and nonseparable patterns, including allowing some errors in classification. The document discusses solving the optimization problem using quadratic programming and Lagrange multipliers. It also introduces the kernel trick for applying SVMs to non-linear decision boundaries using a kernel function to map data to a higher-dimensional feature space. Examples are provided of applying SVMs to the XOR problem and computer experiments classifying a double moon dataset.
A Threshold Logic Unit (TLU) is a mathematical function conceived as a crude model, or abstraction, of biological neurons; threshold logic units are the constitutive units of an artificial neural network. In this paper a positive clock-edge-triggered T flip-flop is designed using the Perceptron Learning Algorithm, a basic design algorithm for threshold logic units. This T flip-flop is then used to design a two-bit up-counter that cycles through the states 0, 1, 2, 3, 0, 1… Ultimately, the goal is to show how to design simple logic units from threshold-logic-based perceptron concepts.
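The perceptron learning rule at the heart of this design can be sketched on a single TLU. The example below learns a NAND gate (a linearly separable function chosen for illustration; the paper's flip-flop design composes such units rather than training one in isolation, since T-flip-flop behaviour is not itself separable by a single TLU).

```python
def train_tlu(samples, epochs=20, lr=1.0):
    """Perceptron learning rule for a threshold logic unit with a bias weight."""
    w = [0.0, 0.0, 0.0]  # two inputs + bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            w[2] += lr * err
    return w

# NAND is linearly separable, so the perceptron rule converges to a valid TLU.
nand = [((0, 0), 1), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w = train_tlu(nand)
tlu = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0
print([tlu(a, b) for (a, b), _ in nand])  # matches the NAND truth table
```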
The document discusses regression models for modeling relationships between input and output variables. It covers linear regression, using linear functions to model the relationship, and nonlinear regression, using nonlinear functions. Maximum a posteriori (MAP) estimation and least squares estimation are described as approaches for estimating the parameters of regression models from data. MAP estimation maximizes the posterior probability of the parameters given the data and assumes prior probabilities on the parameters, while least squares minimizes error. Regularized least squares is also covered, which adds a regularization term to improve stability. Computer experiments are demonstrated applying linear regression to classification problems.
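The regularized least-squares idea has a compact closed form; the sketch below fits y ≈ a·x + b by solving the 2×2 normal equations with a ridge penalty λ added to the diagonal (the bias is penalised too in this simple version, and the data are illustrative, not the document's experiments).

```python
def ridge_fit(xs, ys, lam):
    """Closed-form regularised least squares for y = a*x + b via normal equations."""
    n = len(xs)
    sxx = sum(x * x for x in xs) + lam   # (X^T X + lam*I), entry for a
    sx = sum(xs)
    s1 = n + lam                          # entry for b
    sxy = sum(x * y for x, y in zip(xs, ys))
    sy = sum(ys)
    det = sxx * s1 - sx * sx
    a = (sxy * s1 - sx * sy) / det        # Cramer's rule on the 2x2 system
    b = (sxx * sy - sx * sxy) / det
    return a, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]    # exactly y = 2x + 1
a0, b0 = ridge_fit(xs, ys, 0.0)   # plain least squares recovers the line
a1, b1 = ridge_fit(xs, ys, 10.0)  # a large penalty shrinks the coefficients
print((round(a0, 3), round(b0, 3)))  # (2.0, 1.0)
```

Shrinking the coefficients as λ grows is exactly the stability improvement the regularization term buys.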
Improving Performance of Back Propagation Learning Algorithm (ijsrd.com)
The standard back-propagation algorithm is one of the most widely used algorithms for training feed-forward neural networks. Its major drawbacks are that it may fall into local minima and that it has a slow convergence rate. Natural gradient descent, a principal method for optimizing nonlinear functions, is presented and combined with a modified back-propagation algorithm, yielding a new fast multilayer training algorithm. This paper describes a new approach to natural gradient learning in which the number of parameters required is much smaller than in the standard natural gradient algorithm. The new method exploits the algebraic structure of the parameter space to reduce the space and time complexity of the algorithm and improve its performance.
Kernal based speaker specific feature extraction and its applications in iTau... (TELKOMNIKA JOURNAL)
This document summarizes kernel-based speaker recognition techniques for an automatic speaker recognition system (ASR) in iTaukei cross-language speech recognition. It discusses kernel principal component analysis (KPCA), kernel independent component analysis (KICA), and kernel linear discriminant analysis (KLDA) for nonlinear speaker-specific feature extraction to improve ASR classification rates. Evaluation of the ASR system using these techniques on a Japanese language corpus and self-recorded iTaukei corpus showed that KLDA achieved the best performance, with an equal error rate improvement of up to 8.51% compared to KPCA and KICA.
This document contains slides from a lecture on pattern recognition. It discusses several topics:
- Maximum likelihood estimation and how it can be used to estimate parameters of Gaussian distributions from sample data.
- The problem of dimensionality when applying pattern recognition techniques: as the number of features or dimensions increases, classification accuracy may decrease and computational complexity increases.
- Component analysis techniques like PCA and LDA that aim to reduce dimensionality by projecting data onto a lower-dimensional space.
- An assignment involving generating an image with multiple classes, estimating class parameters with MLE, and classifying pixels with Bayesian decision theory.
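The maximum likelihood step in the first bullet has a simple closed form for a Gaussian: the ML estimates are the sample mean and the sample variance with an n (not n−1) divisor. The data below are illustrative.

```python
def gaussian_mle(samples):
    """ML estimates of a 1-D Gaussian: sample mean and (biased) sample variance."""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((x - mu) ** 2 for x in samples) / n  # MLE divides by n, not n-1
    return mu, var

samples = [4.8, 5.1, 5.0, 4.9, 5.2]
mu, var = gaussian_mle(samples)
print(mu, var)  # mean 5.0, variance 0.02
```

These per-class estimates are exactly what the assignment's Bayesian pixel classifier would plug into its class-conditional densities.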
This document provides an introduction to deep learning. It discusses how deep learning uses multiple layers of nonlinear processing to automatically extract features from data, avoiding the need for manual feature engineering. Deep belief networks, which are composed of stacked restricted Boltzmann machines, are a widely used deep learning model. Training deep networks is challenging, but this is addressed by an unsupervised layer-wise pretraining approach followed by supervised fine-tuning of the entire network. The document reviews literature on deep learning models and applications.
TFFN: Two Hidden Layer Feed Forward Network using the randomness of Extreme L... (Nimai Chand Das Adhikari)
The learning speed of feed-forward neural networks has long been a major drawback in their applications, largely because of the slow gradient-based learning algorithms used to train them and because the network parameters are tuned iteratively by those algorithms. To eradicate these pitfalls, a new learning algorithm known as Extreme Learning Machines (ELM) was proposed. ELM computes the hidden-layer output matrix from randomly assigned input-layer and hidden-layer weights and randomly assigned biases. Unlike other feed-forward networks, ELM has access to the whole training dataset before the computation begins. Here we devise a new two-hidden-layer feed-forward network (TFFN) for ELM, randomly assigning the weights and biases in both hidden layers and then calculating the output-layer weights using the Moore-Penrose generalized inverse. TFFN does not restrict the algorithm to a fixed number of hidden neurons; rather, it searches for the combination of neurons in the two hidden layers that gives an optimized result. The algorithm provides better generalization capability than the parent Extreme Learning Machine at an extremely fast learning speed. We evaluate the algorithm on various types of datasets against various popular algorithms and report a comparison.
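The ELM recipe underlying TFFN can be sketched with a single random hidden layer: freeze random input weights and biases, then obtain the output weights as the least-squares (Moore-Penrose) solution of H·β = y. For a self-contained example I solve the normal equations with a tiny ridge term rather than computing a true pseudoinverse; the hidden size, target function, and seed are my own illustrative choices.

```python
import math, random

def solve(A, b):
    """Tiny Gaussian-elimination solver for the normal equations."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

rng = random.Random(1)
H = 8  # hidden neurons; their weights and biases are random and never trained
win = [(rng.uniform(-2, 2), rng.uniform(-2, 2)) for _ in range(H)]

def hidden(x):
    return [math.tanh(a * x + b) for a, b in win]

xs = [i / 10.0 for i in range(-10, 11)]
ys = [x * x for x in xs]
Hm = [hidden(x) for x in xs]

# Output weights: least-squares solution of H*beta = y via the normal equations
# (a ridge of 1e-8 keeps the small system well conditioned).
HtH = [[sum(Hm[k][i] * Hm[k][j] for k in range(len(xs))) + (1e-8 if i == j else 0.0)
        for j in range(H)] for i in range(H)]
Hty = [sum(Hm[k][i] * ys[k] for k in range(len(xs))) for i in range(H)]
beta = solve(HtH, Hty)

pred = lambda x: sum(b * h for b, h in zip(beta, hidden(x)))
err = max(abs(pred(x) - y) for x, y in zip(xs, ys))
print(err)  # small fitting error despite the untrained hidden layer
```

TFFN's twist, per the abstract, is a second random hidden layer plus a search over neuron counts in both layers before this same one-shot solve.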
This document discusses neural networks and how they are used to solve classification problems. It covers the basics of multilayer perceptrons, how the weights are learned using an error-based learning rule called steepest descent, and how adding hidden layers allows neural networks to solve problems that single-layer perceptrons cannot, such as the XOR problem. It also discusses how the thresholds of units are treated as additional weights that are learned during training.
A fusion of soft expert set and matrix models (eSAT Journals)
Abstract
The purpose of this paper is to define different types of matrices in the light of soft expert sets. We then propose a decision-making model based on soft expert sets.
Keywords: soft set, soft expert set, soft expert matrix.
GENERAL REGRESSION NEURAL NETWORK BASED POS TAGGING FOR NEPALI TEXT (cscpconf)
This article presents part-of-speech tagging for Nepali text using a General Regression Neural Network (GRNN). The corpus is divided into two parts, training and testing, and the network is trained and validated on both. Using GRNN, 96.13% of words are tagged correctly on the training set, whereas 74.38% of words are tagged correctly on the testing set. The result is compared with the traditional Viterbi algorithm based on a Hidden Markov Model, which yields 97.2% and 40% classification accuracy on the training and testing sets respectively. The GRNN-based POS tagger is therefore more consistent than the traditional Viterbi decoding technique.
Composite Field Multiplier based on Look-Up Table for Elliptic Curve Cryptogr... (Marisa Paryasto)
This document discusses implementing elliptic curve cryptography using composite fields. It proposes using a 299-bit key represented in the composite field GF((2^13)^23) instead of the conventional GF(2^299). This breaks the finite field multiplication into smaller chunks by dividing the field into a ground field and an extension field. A lookup table is used for multiplication in the ground field GF(2^13), while a classic multiplier handles the degree-23 extension. This composite field approach aims to provide better time and area efficiency for implementation on FPGAs compared to a single large multiplier. The document provides background on elliptic curves, finite fields, and previous work on composite field representations.
A Learning Linguistic Teaching Control for a Multi-Area Electric Power System (CSCJournals)
This paper presents a new methodology for designing neuro-fuzzy control for complex physical systems by developing a neural-fuzzy system that learns from linguistic teaching signals. The advantage of this technique is that it produces a simple, well-performing system, because it selects the fuzzy sets and numerical values itself and can process and learn both numerical and linguistic information. The proposed control scheme is applied to a multi-area power system with hydraulic and thermal turbines.
This document discusses Bayesian decision theory and classifiers that use discriminant functions. It covers several key topics:
1. Classifiers can be represented by discriminant functions gi(x) that assign vectors x to classes based on their values. The functions divide the space into decision regions.
2. Discriminant functions gi(x) are not unique and can be scaled or shifted without changing decisions.
3. Examples of discriminant functions include posterior probabilities P(ωi | x), likelihood functions P(x | ωi)P(ωi), and risk functions.
4. The two-category case uses a single discriminant function g(x) = g1(x) - g2(x), assigning x to class ω1 when g(x) > 0 and to ω2 otherwise.
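The two-category rule above can be made concrete with 1-D Gaussian class-conditional densities; each gi(x) is a log-likelihood plus log-prior, and the sign of g(x) = g1(x) - g2(x) decides the class. The means, variances, and priors below are illustrative.

```python
import math

def g(x, mu, var, prior):
    """Discriminant g_i(x) = log p(x|w_i) + log P(w_i) for a 1-D Gaussian class."""
    return (-0.5 * math.log(2 * math.pi * var)
            - (x - mu) ** 2 / (2 * var)
            + math.log(prior))

# Two-category rule: decide class 1 when g(x) = g1(x) - g2(x) > 0.
def classify(x):
    return 1 if g(x, 0.0, 1.0, 0.5) - g(x, 3.0, 1.0, 0.5) > 0 else 2

print(classify(0.5), classify(2.9))  # 1 2
```

With equal priors and variances the decision boundary sits halfway between the means, at x = 1.5, illustrating how the discriminant functions carve the space into decision regions.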
Here is my class on the multilayer perceptron, where I look at the following:
1.- The entire backpropagation algorithm based on gradient descent (I am also planning a version of the training based on Kalman filters).
2.- The use of matrix computations to simplify the implementations.
I hope you enjoy it.
This document contains lecture notes on sparse autoencoders. It begins with an introduction describing the limitations of supervised learning and the need for algorithms that can automatically learn feature representations from unlabeled data. The notes then state that sparse autoencoders are one approach to learn features from unlabeled data, and describe the organization of the rest of the notes. The notes will cover feedforward neural networks, backpropagation for supervised learning, autoencoders for unsupervised learning, and how sparse autoencoders are derived from these concepts.
This document summarizes a study on pattern recognition and learning in networks of coupled bistable units. The network is composed of N oscillators moving in a double-well potential, with pair-wise interactions between all elements. Two methods are used for training the network: (1) constructing the coupling matrix using Hebb's rule based on stored patterns, and (2) iteratively updating the matrix to minimize error between applied and desired patterns. Graphs show the learning rate converges as mean squared error and coupling strengths decrease over iterations.
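The first training method, Hebb's rule, can be sketched in discrete form: the coupling matrix is a sum of outer products of the stored ±1 patterns, and a thresholded update lets each unit settle into one of its two wells. This is a discrete caricature of the paper's continuous double-well oscillators; the patterns and probe are my own small example.

```python
def hebb_matrix(patterns):
    """Coupling matrix from Hebb's rule: sum of outer products of stored +/-1 patterns."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:  # no self-coupling
                    W[i][j] += p[i] * p[j] / n
    return W

def recall(W, state, steps=5):
    """Threshold dynamics: each unit settles into the well chosen by its input field."""
    n = len(state)
    for _ in range(steps):
        state = [1 if sum(W[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

stored = [[1, 1, 1, 1, -1, -1, -1, -1],
          [1, -1, 1, -1, 1, -1, 1, -1]]
W = hebb_matrix(stored)
probe = [-1, 1, 1, 1, -1, -1, -1, -1]  # the first pattern with one unit flipped
print(recall(W, probe))  # recovers the first stored pattern
```

The study's second method, iterative error-driven updates of the couplings, would replace the one-shot outer-product construction with repeated corrections until the applied patterns match the desired ones.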
The document discusses using clustering models like subtractive fuzzy clustering (SFC) and fuzzy c-means clustering (FCM) to generate an adaptive neuro-fuzzy inference system (ANFIS) for medical diagnoses. Experimental results on medical diagnosis datasets show that ANFIS models using SFC and FCM clustering (ANFIS-SFC and ANFIS-FCM) had better average training and checking errors compared to ANFIS without clustering. Specifically, ANFIS-SFC performed best using backpropagation learning, while ANFIS-FCM performed best using a hybrid learning model. Clustering the datasets without ANFIS was also able to identify different disease clusters.
This document discusses a fusion of soft expert set and matrix models. It begins by introducing soft sets, soft expert sets, fuzzy soft sets, and intuitionistic fuzzy soft sets. It then defines various types of matrices in the context of soft expert sets, including soft expert matrices, soft expert equal matrices, soft expert complement matrices, and operations on soft expert matrices like addition, subtraction, and multiplication. An example is provided to illustrate a soft expert matrix model for a manufacturing firm choosing a location based on expert opinions. The document aims to provide a new dimension to soft expert sets through the use of matrices to solve decision making problems.
The document discusses neural networks based on competition. It describes three fixed-weight competitive neural networks: Maxnet, Mexican Hat, and Hamming Net. Maxnet uses winner-take-all competition where only the neuron with the largest activation remains active. The Mexican Hat network enhances the activation of neurons receiving a stronger external signal by applying positive weights to nearby neurons and negative weights to those further away. An example demonstrates how the Mexican Hat network increases contrast over iterations.
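Maxnet's winner-take-all competition is easy to sketch: every unit inhibits all the others by a small factor ε each iteration, activations are clipped at zero, and only the unit that started largest survives. The ε value and initial activations below are illustrative (ε must be smaller than 1/m for m units to guarantee a single winner).

```python
def maxnet(activations, eps=0.2, max_iters=100):
    """Winner-take-all: each unit inhibits the others until only one stays active."""
    a = list(activations)
    for _ in range(max_iters):
        total = sum(a)
        # Each unit subtracts eps times the sum of the other units' activations.
        a = [max(0.0, x - eps * (total - x)) for x in a]
        if sum(1 for x in a if x > 0) <= 1:
            break
    return a

result = maxnet([0.2, 0.4, 0.6, 0.8])
print(result)  # only the unit that started largest is still active
```

The Mexican Hat network generalises this by making the inhibition distance-dependent: positive weights to nearby neurons, negative to distant ones, which is what produces the contrast enhancement the example in the document demonstrates.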
The numerical solution of the Huxley equation by two finite difference methods is presented: an explicit scheme and the Crank-Nicholson scheme. A comparison of the two methods shows that the explicit scheme is easier to implement and converges faster, while the Crank-Nicholson scheme is more accurate. In addition, a stability analysis of the two schemes using the Fourier (von Neumann) method is carried out. The analysis shows that the first scheme is conditionally stable, requiring r ≤ 2 − aβ∆t and ∆t ≤ 2(∆x)², while the second scheme is unconditionally stable.
Improved Parallel Prefix Algorithm on OTIS-Mesh of Trees (IDES Editor)
A parallel algorithm for prefix computation was recently reported on the interconnection network called the OTIS-Mesh of Trees [4]. Using n^4 processors, that algorithm was shown to run in 13 log n + O(1) electronic moves and 2 optical moves for n^4 data points. In this paper we present a new and improved parallel algorithm for prefix computation on the OTIS-Mesh of Trees. The algorithm requires 10 log n + O(1) electronic steps plus 1 optical step on the same number of processors and data points as considered in [4].
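For readers unfamiliar with the prefix problem itself, the generic log-depth doubling scheme below shows what such algorithms compute; it is the standard parallel-scan idea, not the OTIS-Mesh-of-Trees routing of this paper, and the data are illustrative.

```python
def prefix_scan(xs):
    """Inclusive prefix sum via the log-depth doubling scheme used by parallel scans."""
    a = list(xs)
    step = 1
    while step < len(a):
        # In round k every element adds the value 2^k positions to its left;
        # on a parallel machine all these additions happen simultaneously.
        a = [a[i] + (a[i - step] if i >= step else 0) for i in range(len(a))]
        step *= 2
    return a

print(prefix_scan([3, 1, 4, 1, 5, 9, 2, 6]))  # [3, 4, 8, 9, 14, 23, 25, 31]
```

The round count is ⌈log₂ n⌉, which is where the log n terms in the electronic-move bounds above come from; the OTIS algorithms additionally account for the optical inter-group moves.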
Fuzzy Logic and Neuro-fuzzy Systems: A Systematic Introduction (Waqas Tariq)
Fuzzy logic is a rigorous mathematical field that provides an effective vehicle for modeling the uncertainty in human reasoning. In fuzzy logic, the knowledge of experts is modeled by linguistic rules represented in the form of IF-THEN logic. Like neural network models such as the multilayer perceptron (MLP) and the radial basis function network (RBFN), some fuzzy inference systems (FISs) have the capability of universal approximation. Fuzzy logic can be used in most areas where neural networks are applicable. In this paper, we first give an introduction to fuzzy sets and logic. We then make a comparison between FISs and some neural network models. Rule extraction from trained neural networks or numerical data is then described. We finally introduce the synergy of neural and fuzzy systems and describe some neuro-fuzzy models as well. Some circuit implementations of neuro-fuzzy systems are also introduced. Examples are given to illustrate the concepts of neuro-fuzzy systems.
This document discusses unsupervised learning and clustering algorithms. It begins with an introduction to unsupervised learning, including motivations and differences from supervised learning. It then covers mixture density models, maximum likelihood estimation, and the k-means clustering algorithm. It discusses evaluating clustering using criterion functions and similarity measures. Specific topics covered include normal mixture models, EM algorithm, Euclidean distance, and hierarchical clustering.
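The k-means algorithm mentioned above alternates two steps: assign each point to its nearest centre, then move each centre to the mean of its assigned points (Lloyd's iteration, which can be read as the hard-assignment limit of EM for spherical Gaussian mixtures). The 1-D data below are illustrative.

```python
def kmeans_1d(points, centers, iters=10):
    """Lloyd's k-means: assign each point to its nearest centre, then recompute means."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[i].append(p)
        # Keep an empty cluster's centre where it was.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Two well-separated 1-D groups; the centres settle on the group means.
pts = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]
print(sorted(kmeans_1d(pts, [0.0, 5.0])))  # [1.0, 9.0]
```

Replacing the hard nearest-centre assignment with posterior responsibilities turns this directly into the EM algorithm for the normal mixture models the document covers.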
This document discusses using the Levenberg-Marquardt algorithm for forecasting stock exchange share rates on the Karachi Stock Exchange. It provides an overview of artificial neural networks and how they can be used for financial forecasting applications. The Levenberg-Marquardt algorithm is presented as an efficient method for training neural networks to minimize errors through gradient descent. The document applies this method to train a neural network to predict the direction of change in share prices on the Karachi Stock Exchange. The network is trained on historical stock price data and testing shows it can achieve the performance goal of forecasting next day price changes.
In this work, the TREPAN algorithm is enhanced and extended for extracting decision trees from neural networks. We empirically evaluated the performance of the algorithm on a set of databases from real-world events. This benchmark enhancement was achieved by adapting the Single-test TREPAN and C4.5 decision tree induction algorithms to analyze the datasets. The models are then compared with X-TREPAN for comprehensibility and classification accuracy. Furthermore, we validate the experiments by applying statistical methods. Finally, the modified algorithm is extended to work with multi-class regression problems, and the ability to comprehend generalized feed-forward networks is achieved.
This document provides an introduction to deep learning. It discusses how deep learning uses multiple layers of nonlinear processing to automatically extract features from data, avoiding the need for manual feature engineering. Deep belief networks, which are composed of stacked restricted Boltzmann machines, are a widely used deep learning model. Training deep networks is challenging, but this is addressed by an unsupervised layer-wise pretraining approach followed by supervised fine-tuning of the entire network. The document reviews literature on deep learning models and applications.
TFFN: Two Hidden Layer Feed Forward Network using the randomness of Extreme L...Nimai Chand Das Adhikari
The learning speed of the feed forward neural
network takes a lot of time to be trained which is a major
drawback in their applications since the past decades. The
key reasons behind may be due to the slow gradient-based
learning algorithms which are extensively used to train the
neural networks or due to the parameters in the networks
which are tuned iteratively using some learning algorithms.
Thus, in order to eradicate the above pitfalls, a new learning
algorithm was proposed known as Extreme Learning Machines
(ELM). This algorithm tries to compute Hidden-layer-output
matrix that is made of randomly assigned input layer and
hidden layer weights and randomly assigned biases. Unlike the
other feedforward networks, ELM has the access of the whole
training dataset before going into the computation part. Here,
we have devised a new two-layer-feedforward network (TFFN)
for ELM in a new manner with randomly assigning the weights
and biases in both the hidden layers, which then calculates the
output-hidden layer weights using the Moore-Penrose generalized
inverse. TFFN doesn’t restricts the algorithm to fix the number
of hidden neurons that the algorithm should have. Rather it
searches the space which gives an optimized result in the neurons
combination in both the hidden layers. This algorithm provides a
good generalization capability than the parent Extreme Learning
Machines at an extremely fast learning speed. Here, we have
experimented the algorithm on various types of datasets and
various popular algorithm to find the performances and report
a comparison.
This document discusses neural networks and how they are used to solve classification problems. It covers the basics of multilayer perceptrons, how the weights are learned using an error-based learning rule called steepest descent, and how adding hidden layers allows neural networks to solve problems that single-layer perceptrons cannot, such as the XOR problem. It also discusses how the thresholds of units are treated as additional weights that are learned during training.
A fusion of soft expert set and matrix modelseSAT Journals
Abstract
The purpose of this paper is to define different types of matrices in the light of soft expert sets. We then propose a decision making
model based on soft expert set.
Keywords: Soft set, soft expert set, Soft Expert matrix.
GENERAL REGRESSION NEURAL NETWORK BASED POS TAGGING FOR NEPALI TEXTcscpconf
This article presents Part of Speech tagging for Nepali text using General Regression Neural
Network (GRNN). The corpus is divided into two parts viz. training and testing. The network is
trained and validated on both training and testing data. It is observed that 96.13% words are
correctly being tagged on training set whereas 74.38% words are tagged correctly on testing
data set using GRNN. The result is compared with the traditional Viterbi algorithm based on
Hidden Markov Model. Viterbi algorithm yields 97.2% and 40% classification accuracies on
training and testing data sets respectively. GRNN based POS Tagger is more consistent than the
traditional Viterbi decoding technique.
Composite Field Multiplier based on Look-Up Table for Elliptic Curve Cryptogr...Marisa Paryasto
This document discusses implementing elliptic curve cryptography using composite fields. It proposes using a 299-bit key represented in the composite field GF((213)23) instead of the conventional GF(2299). This breaks the finite field multiplication into smaller chunks by dividing the field into a ground field and extension field. A lookup table is used for multiplication in the ground field GF(213) while a classic multiplier is used for the extension field GF(23). This composite field approach aims to provide better time and area efficiency for implementation on FPGAs compared to a single large multiplier. The document provides background on elliptic curves, finite fields, and previous work on composite field representations.
A Learning Linguistic Teaching Control for a Multi-Area Electric Power SystemCSCJournals
This paper presents a new methodology for designing a neuro-fuzzy control for complex physical systems. By developing a Neural -Fuzzy system learning with linguistic teaching signals. The advantage of this technique is that, produce a simple and well-performing system because it selects the fuzzy sets and the numerical numbers and process both numerical and linguistic information. This approach is able to process and learn numerical information as well as linguistic information. The proposed control scheme is applied to a multi-area power system with hydraulic and thermal turbines.
This document discusses Bayesian decision theory and classifiers that use discriminant functions. It covers several key topics:
1. Classifiers can be represented by discriminant functions gi(x) that assign vectors x to classes based on their values. The functions divide the space into decision regions.
2. Discriminant functions gi(x) are not unique and can be scaled or shifted without changing decisions.
3. Examples of discriminant functions include posterior probabilities P(ωi | x), likelihood functions P(x | ωi)P(ωi), and risk functions.
4. The two-category case uses a single discriminant function g(x) = g1(x) - g2
Here is my class on the multilayer perceptron where I look at the following:
1.- The entire backproagation algorithm based in the gradient descent
However, I am planning the tanning based in Kalman filters.
2.- The use of matrix computations to simplify the implementations.
I hope you enjoy it.
This document contains lecture notes on sparse autoencoders. It begins with an introduction describing the limitations of supervised learning and the need for algorithms that can automatically learn feature representations from unlabeled data. The notes then state that sparse autoencoders are one approach to learn features from unlabeled data, and describe the organization of the rest of the notes. The notes will cover feedforward neural networks, backpropagation for supervised learning, autoencoders for unsupervised learning, and how sparse autoencoders are derived from these concepts.
This document summarizes a study on pattern recognition and learning in networks of coupled bistable units. The network is composed of N oscillators moving in a double-well potential, with pair-wise interactions between all elements. Two methods are used for training the network: (1) constructing the coupling matrix using Hebb's rule based on stored patterns, and (2) iteratively updating the matrix to minimize error between applied and desired patterns. Graphs show the learning rate converges as mean squared error and coupling strengths decrease over iterations.
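The first training method, constructing the coupling matrix with Hebb's rule from stored patterns, can be sketched as follows; the bipolar patterns and the network size N = 8 are illustrative assumptions:

```python
import numpy as np

# Three mutually orthogonal bipolar patterns of N = 8 units (illustrative).
patterns = np.array([
    [ 1, -1,  1, -1,  1, -1,  1, -1],
    [ 1,  1, -1, -1,  1,  1, -1, -1],
    [ 1,  1,  1,  1, -1, -1, -1, -1],
])

# Hebb's rule: J = (1/N) * sum over stored patterns of outer(p, p),
# with the self-coupling (diagonal) removed.
N = patterns.shape[1]
J = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(J, 0)

# Each stored pattern should be a fixed point of the sign-update dynamics.
for p in patterns:
    recalled = np.sign(J @ p)
    print(np.array_equal(recalled, p))
```

For orthogonal patterns the cross-terms cancel exactly, so every stored pattern is recalled without error; the iterative error-minimisation method (method 2 in the summary) is needed when the patterns correlate.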
The document discusses using clustering models like subtractive fuzzy clustering (SFC) and fuzzy c-means clustering (FCM) to generate an adaptive neuro-fuzzy inference system (ANFIS) for medical diagnoses. Experimental results on medical diagnosis datasets show that ANFIS models using SFC and FCM clustering (ANFIS-SFC and ANFIS-FCM) had better average training and checking errors compared to ANFIS without clustering. Specifically, ANFIS-SFC performed best using backpropagation learning, while ANFIS-FCM performed best using a hybrid learning model. Clustering the datasets without ANFIS was also able to identify different disease clusters.
This document discusses a fusion of soft expert set and matrix models. It begins by introducing soft sets, soft expert sets, fuzzy soft sets, and intuitionistic fuzzy soft sets. It then defines various types of matrices in the context of soft expert sets, including soft expert matrices, soft expert equal matrices, soft expert complement matrices, and operations on soft expert matrices like addition, subtraction, and multiplication. An example is provided to illustrate a soft expert matrix model for a manufacturing firm choosing a location based on expert opinions. The document aims to provide a new dimension to soft expert sets through the use of matrices to solve decision making problems.
The document discusses neural networks based on competition. It describes three fixed-weight competitive neural networks: Maxnet, Mexican Hat, and Hamming Net. Maxnet uses winner-take-all competition where only the neuron with the largest activation remains active. The Mexican Hat network enhances the activation of neurons receiving a stronger external signal by applying positive weights to nearby neurons and negative weights to those further away. An example demonstrates how the Mexican Hat network increases contrast over iterations.
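Maxnet's winner-take-all competition can be sketched as follows; the inhibition constant epsilon and the initial activations are illustrative values, not taken from the document:

```python
import numpy as np

def maxnet(activations, epsilon=0.1, max_iters=100):
    """Winner-take-all competition: every unit inhibits the others by
    epsilon times their total activity until only one stays positive."""
    a = np.array(activations, dtype=float)
    for _ in range(max_iters):
        total = a.sum()
        # Each unit's new activity: own value minus epsilon * sum of others,
        # clipped at zero (units that go negative drop out).
        a = np.maximum(0.0, a - epsilon * (total - a))
        if (a > 0).sum() <= 1:
            break
    return a

a = maxnet([0.2, 0.4, 0.6, 0.8])
print(np.argmax(a))  # the unit with the largest initial activation survives
```

The fixed inhibitory weights (-epsilon off-diagonal, +1 self) are what makes Maxnet a fixed-weight competitive network: no learning occurs, only competition.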
The numerical solution of the Huxley equation using two finite difference methods is presented: the explicit scheme and the Crank-Nicolson scheme. A comparison of the two methods shows that the explicit scheme is simpler and converges faster, while the Crank-Nicolson scheme is more accurate. In addition, the stability of the two schemes is analysed using the Fourier (von Neumann) method. The analysis shows that the first scheme is conditionally stable, if r ≤ 2 − aβ∆t with ∆t ≤ 2(∆x)², while the second scheme is unconditionally stable.
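An explicit scheme of the kind compared above can be sketched as follows, assuming the common Huxley-type form u_t = u_xx + βu(1 − u)(u − a); the grid, the parameter values and the initial front are illustrative choices, not taken from the paper:

```python
import numpy as np

# Explicit (forward-time, centred-space) scheme for a Huxley-type equation
# u_t = u_xx + beta * u * (1 - u) * (u - a); form and parameters assumed
# here for illustration only.
a, beta = 0.3, 1.0
nx, nt = 51, 200
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2            # step chosen so r = dt/dx^2 stays below 1/2
r = dt / dx**2

x = np.linspace(0.0, 1.0, nx)
u = np.where(x < 0.5, 1.0, 0.0)   # illustrative initial front

for _ in range(nt):
    un = u.copy()
    # Interior update: explicit in time, second-order central in space,
    # plus the reaction term evaluated at the old time level.
    u[1:-1] = (un[1:-1]
               + r * (un[2:] - 2 * un[1:-1] + un[:-2])
               + dt * beta * un[1:-1] * (1 - un[1:-1]) * (un[1:-1] - a))
    u[0], u[-1] = 1.0, 0.0        # Dirichlet boundary values

print(float(u.min()), float(u.max()))
```

The explicit update needs only the previous time level, which is why the abstract calls it the easier scheme; Crank-Nicolson instead requires solving a tridiagonal system at every step.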
Improved Parallel Prefix Algorithm on OTIS-Mesh of Trees - IDES Editor
A parallel algorithm for prefix computation was recently reported on the interconnection network called the OTIS-Mesh of Trees [4]. Using n^4 processors, the algorithm was shown to run in 13 log n + O(1) electronic moves and 2 optical moves for n^4 data points. In this paper we present a new and improved parallel algorithm for prefix computation on the OTIS-Mesh of Trees. The algorithm requires 10 log n + O(1) electronic steps and 1 optical step for prefix computation on the same number of processors and data points as considered in [4].
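Prefix computation itself, the operation the algorithm parallelises, can be illustrated with a sequential sketch of the doubling (Hillis-Steele) idea; the OTIS-Mesh of Trees routing and the move counts of [4] are not modelled here:

```python
# Prefix computation: output[i] combines all inputs up to index i under an
# associative operator (addition here). Each doubling round is what a
# parallel machine would do in one step, with every element updated
# simultaneously.
def prefix_sums(data):
    a = list(data)
    step = 1
    while step < len(a):
        a = [a[i] if i < step else a[i] + a[i - step] for i in range(len(a))]
        step *= 2
    return a

print(prefix_sums([1, 2, 3, 4, 5]))  # [1, 3, 6, 10, 15]
```

The doubling structure is why the step counts in the abstract are proportional to log n: each round halves the remaining distance information must travel.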
Fuzzy Logic and Neuro-fuzzy Systems: A Systematic Introduction - Waqas Tariq
Fuzzy logic is a rigorous mathematical field, and it provides an effective vehicle for modeling the uncertainty in human reasoning. In fuzzy logic, the knowledge of experts is modeled by linguistic rules represented in the form of IF-THEN logic. Like neural network models such as the multilayer perceptron (MLP) and the radial basis function network (RBFN), some fuzzy inference systems (FISs) have the capability of universal approximation. Fuzzy logic can be used in most areas where neural networks are applicable. In this paper, we first give an introduction to fuzzy sets and logic. We then make a comparison between FISs and some neural network models. Rule extraction from trained neural networks or numerical data is then described. We finally introduce the synergy of neural and fuzzy systems, and describe some neuro-fuzzy models as well. Some circuit implementations of neuro-fuzzy systems are also introduced. Examples are given to illustrate the concepts of neuro-fuzzy systems.
This document discusses unsupervised learning and clustering algorithms. It begins with an introduction to unsupervised learning, including motivations and differences from supervised learning. It then covers mixture density models, maximum likelihood estimation, and the k-means clustering algorithm. It discusses evaluating clustering using criterion functions and similarity measures. Specific topics covered include normal mixture models, EM algorithm, Euclidean distance, and hierarchical clustering.
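The k-means step mentioned above can be sketched in a few lines; the two-blob data and the deterministic initialisation are illustrative choices, not taken from the document:

```python
import numpy as np

def kmeans(X, k, init_idx, iters=20):
    """Plain k-means: alternate nearest-centre assignment and mean update."""
    centers = X[init_idx].astype(float).copy()
    for _ in range(iters):
        # Assign each point to its nearest centre (Euclidean distance).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centre to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)),   # cluster near (0, 0)
               rng.normal(5.0, 0.1, (20, 2))])  # cluster near (5, 5)
# Seed one centre in each blob (first and last samples) for determinism.
centers, labels = kmeans(X, 2, init_idx=[0, -1])
print(np.round(centers).astype(int).tolist())
```

k-means is the hard-assignment limit of the EM algorithm on a normal mixture model, which is why the document treats the two topics together.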
This document discusses using the Levenberg-Marquardt algorithm for forecasting stock exchange share rates on the Karachi Stock Exchange. It provides an overview of artificial neural networks and how they can be used for financial forecasting applications. The Levenberg-Marquardt algorithm is presented as an efficient method for training neural networks to minimize errors through gradient descent. The document applies this method to train a neural network to predict the direction of change in share prices on the Karachi Stock Exchange. The network is trained on historical stock price data and testing shows it can achieve the performance goal of forecasting next day price changes.
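The Levenberg-Marquardt idea, interpolating between Gauss-Newton and gradient descent through a damping factor, can be sketched on a one-parameter toy fit; the exponential model and the synthetic data are illustrative, not the Karachi Stock Exchange data used in the study:

```python
import numpy as np

# Levenberg-Marquardt sketch: fit y = exp(a*x) to synthetic data generated
# with true parameter a = 0.5 (illustrative one-parameter problem).
x = np.linspace(0, 2, 20)
y = np.exp(0.5 * x)

a, lam = 2.0, 1e-2            # initial guess and damping factor
for _ in range(50):
    r = y - np.exp(a * x)     # residuals
    J = -x * np.exp(a * x)    # Jacobian of the residuals w.r.t. a
    # Damped Gauss-Newton step: small lam behaves like Gauss-Newton,
    # large lam like a short gradient-descent step.
    step = -(J @ r) / (J @ J + lam)
    a_new = a + step
    if np.sum((y - np.exp(a_new * x)) ** 2) < np.sum(r ** 2):
        a, lam = a_new, lam * 0.5   # accept the step, trust the model more
    else:
        lam *= 2.0                  # reject the step, increase damping

print(round(a, 4))
```

The accept/reject rule on the damping factor is what makes the method robust far from the solution while retaining fast convergence near it.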
In this work, the TREPAN algorithm is enhanced and extended for extracting decision trees from neural networks. We empirically evaluated the performance of the algorithm on a set of databases from real world events. This benchmark enhancement was achieved by adapting Single-test TREPAN and C4.5 decision tree induction algorithms to analyze the datasets. The models are then compared with X-TREPAN for
comprehensibility and classification accuracy. Furthermore, we validate the experimentations by applying statistical methods. Finally, the modified algorithm is extended to work with multi-class regression problems and the ability to comprehend generalized feed forward networks is achieved.
Incorporating Kalman Filter in the Optimization of Quantum Neural Network Par... - Waqas Tariq
Kalman filters have been used for the estimation of instantaneous states of linear dynamic systems, and are a good tool for inferring missing information from noisy measurements. The quantum neural network is another approach to merging fuzzy logic with the neural network, by investing quantum mechanics theory in building the structure of the neural network. The gradient descent algorithm has been widely used in training neural networks, but local minima are one of the disadvantages of this algorithm. This paper presents an algorithm to train the quantum neural network using the extended Kalman filter.
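The basic Kalman update that the paper builds on can be sketched in the scalar case; this toy constant-state example is an illustration of the filtering idea only, not the paper's extended-Kalman-filter network training:

```python
import numpy as np

# Scalar Kalman filter estimating a constant state from noisy measurements
# (illustrative data; measurement noise variance R assumed known).
rng = np.random.default_rng(0)
true_state = 3.0
measurements = true_state + rng.normal(0, 0.5, 100)

x, P = 0.0, 1.0       # initial estimate and its variance
R = 0.25              # measurement noise variance (0.5 ** 2)
for z in measurements:
    # Kalman gain balances trust in the estimate vs. the new measurement.
    K = P / (P + R)
    x = x + K * (z - x)     # correct the estimate with the innovation
    P = (1 - K) * P         # estimate variance shrinks with each update
print(round(x, 2), round(P, 4))
```

The extended Kalman filter used in the paper applies the same predict/correct cycle to the network weights, linearising the network output around the current weight estimate at each step.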
X-TREPAN: A MULTI CLASS REGRESSION AND ADAPTED EXTRACTION OF COMPREHENSIBLE D... - cscpconf
X-TREPAN: A Multi Class Regression and Adapted Extraction of Comprehensible ... - csandit
The document describes an algorithm called X-TREPAN that extracts decision trees from trained neural networks. X-TREPAN is an enhancement of the TREPAN algorithm that allows it to handle both multi-class classification and multi-class regression problems. It can also analyze generalized feed forward networks. The algorithm was tested on several real-world datasets and was found to generate decision trees with good classification accuracy while also maintaining comprehensibility.
This document describes a study that developed a neuro-fuzzy system for predicting electricity consumption. The neuro-fuzzy system combines the learning capabilities of neural networks with the linguistic rule interpretation of fuzzy inference systems. It was applied to predict future electricity consumption in Northern Cyprus based on past consumption data. The system was trained using a supervised learning algorithm to determine optimal parameters. Simulation results showed the neuro-fuzzy system achieved more accurate predictions of electricity consumption than a neural network model alone, using fewer training epochs.
Artificial neural networks are computer programs that can recognize patterns in data and produce models to represent that data. They are inspired by the human brain in how knowledge is acquired through learning and stored in the connections between neurons. Neural networks learn by adjusting the strengths of connections between neurons based on examples provided during training. They are able to model and learn both linear and nonlinear relationships in data.
This document discusses neural networks and their applications. It begins with an overview of neurons and the brain, then describes the basic components of neural networks including layers, nodes, weights, and learning algorithms. Examples are given of early neural network designs from the 1940s-1980s and their applications. The document also summarizes backpropagation learning in multi-layer networks and discusses common network architectures like perceptrons, Hopfield networks, and convolutional networks. In closing, it notes the strengths and limitations of neural networks along with domains where they have proven useful, such as recognition, control, prediction, and categorization tasks.
This document discusses neural networks and their learning capabilities. It describes how neural networks are composed of simple interconnected elements that can learn patterns from examples through training. Perceptrons are introduced as single-layer neural networks that can learn linearly separable functions through a simple learning rule. Multi-layer networks are shown to have greater learning capabilities than perceptrons using an algorithm called backpropagation that propagates errors backward through the network to update weights. Applications of neural networks include pattern recognition, control problems, and time series prediction tasks.
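The perceptron learning rule described above can be sketched on the linearly separable AND function; the learning rate and epoch count are illustrative choices:

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Single-layer perceptron rule: nudge the weights toward each
    misclassified example. Converges for linearly separable functions."""
    w = np.zeros(X.shape[1] + 1)                 # weights plus bias
    Xb = np.hstack([X, np.ones((len(X), 1))])    # append bias input of 1
    for _ in range(epochs):
        for xi, target in zip(Xb, y):
            pred = 1 if xi @ w > 0 else 0
            w += lr * (target - pred) * xi       # error-driven update
    return w

# AND is linearly separable, so the perceptron can learn it;
# XOR is not, which is why multi-layer networks are needed.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y_and = np.array([0, 0, 0, 1])
w = train_perceptron(X, y_and)
preds = [1 if np.append(x, 1.0) @ w > 0 else 0 for x in X]
print(preds)  # [0, 0, 0, 1]
```

The update fires only on errors, so once the weights separate the classes they stop changing, which is the content of the perceptron convergence theorem.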
An Artificial Intelligence Approach to Ultra High Frequency Path Loss Modelli... - ijtsrd
This study proposes Artificial Intelligence (AI) based path loss prediction models for the suburban areas of Abuja, Nigeria. The AI based models were created on the basis of two deep learning networks, namely the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Generalized Radial Basis Function Neural Network (RBF-NN). These prediction models were created, trained, validated and tested for path loss prediction using path loss data recorded at 1800 MHz from multiple Base Transceiver Stations (BTSs) distributed across the areas under investigation. Results indicate that the ANFIS and RBF-NN based models, with Root Mean Squared Error (RMSE) values of 5.30 dB and 5.31 dB respectively, offer greater prediction accuracy than the widely used empirical COST 231 Hata model, which has an RMSE of 8.18 dB. Deme C. Abraham, "An Artificial Intelligence Approach to Ultra-High Frequency Path Loss Modelling of the Suburban Areas of Abuja, Nigeria", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4, Issue-2, February 2020.
URL: https://www.ijtsrd.com/papers/ijtsrd30227.pdf
Paper Url : https://www.ijtsrd.com/computer-science/artificial-intelligence/30227/an-artificial-intelligence-approach-to-ultra-high-frequency-path-loss-modelling-of-the-suburban-areas-of-abuja-nigeria/deme-c-abraham
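The RMSE figures used to rank the models above are computed as follows; the measured and predicted path loss values in this sketch are made-up samples, not the paper's measurements:

```python
import numpy as np

def rmse(measured, predicted):
    """Root mean squared error between measured and predicted values (dB)."""
    measured, predicted = np.asarray(measured), np.asarray(predicted)
    return float(np.sqrt(np.mean((measured - predicted) ** 2)))

# Hypothetical path loss samples in dB, for illustration only.
measured  = [120.0, 125.0, 131.0, 128.0]
predicted = [122.0, 124.0, 129.0, 130.0]
print(round(rmse(measured, predicted), 2))  # 1.8
```

A lower RMSE means the model's predictions sit closer to the drive-test measurements on average, which is the sense in which ANFIS and RBF-NN outperform COST 231 Hata here.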
A Mixed Binary-Real NSGA II Algorithm Ensuring Both Accuracy and Interpretabi... - IJECEIAES
In this work, a Neuro-Fuzzy Controller network, called NFC, that implements a Mamdani fuzzy inference system is proposed. This network includes neurons able to perform fundamental fuzzy operations. Connections between neurons are weighted through binary and real weights. A mixed binary-real Non-dominated Sorting Genetic Algorithm II (NSGA II) is then used to achieve both accuracy and interpretability of the NFC by minimizing two objective functions: one relates to the number of rules, for compactness, while the second is the mean square error, for accuracy. In order to preserve the interpretability of the fuzzy rules during the optimization process, some constraints are imposed. The approach is tested on two control examples: a single-input single-output (SISO) system and a multivariable (MIMO) system.
Survey on Artificial Neural Network Learning Technique Algorithms - IRJET Journal
This document discusses different types of learning algorithms used in artificial neural networks. It begins with an introduction to neural networks and their ability to learn from their environment through adjustments to synaptic weights. Four main learning algorithms are then described: error correction learning, which uses algorithms like backpropagation to minimize error; memory based learning, which stores all training examples and analyzes nearby examples to classify new inputs; Hebbian learning, where connection weights are adjusted based on the activity of neurons; and competitive learning, where neurons compete to respond to inputs to become specialized feature detectors through a winner-take-all mechanism. The document provides details on how each type of learning algorithm works.
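The competitive learning rule described last can be sketched as follows; the two input clusters and the sample-based initialisation are illustrative assumptions, not taken from the survey:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two illustrative input clusters (assumed data, not from the document).
clu_a = rng.normal([1.0, 0.0], 0.1, (50, 2))
clu_b = rng.normal([0.0, 1.0], 0.1, (50, 2))

# One weight vector per competing unit, initialised on one sample of each
# cluster (a common trick to avoid dead units).
W = np.vstack([clu_a[0], clu_b[0]]).copy()

inputs = np.vstack([clu_a, clu_b])
rng.shuffle(inputs)

lr = 0.1
for x in inputs:
    # Winner-take-all: the unit whose weight vector is closest wins...
    winner = int(np.argmin(np.linalg.norm(W - x, axis=1)))
    # ...and only the winner moves its weights toward the input.
    W[winner] += lr * (x - W[winner])

print(np.round(W).astype(int).tolist())
```

Because only the winner learns, each unit drifts toward the mean of one cluster, which is how competitive units become the specialised feature detectors the survey describes.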
This document describes research applying artificial neural networks to magnetotelluric data to determine subsurface layer structures. Key points:
- Researchers developed a three-layer neural network model trained with backpropagation to locate subsurface layers from magnetotelluric data. Resilient propagation training was found to be most effective.
- The network was trained on synthetic 1D magnetotelluric data for different layer resistivities and thicknesses, and tested on synthetic and real field data.
- Results showed the neural network approach produced fast, accurate, and objective estimates of subsurface resistivity and depth that correlated well with conventional serial algorithms. This validated neural networks as a useful tool for magnetotelluric inversion and
An artificial neural network (ANN) is the piece of a computing system designed to simulate the way the human brain analyzes and processes information. It is the foundation of artificial intelligence (AI) and solves problems that would prove impossible or difficult by human or statistical standards. ANNs have self-learning capabilities that enable them to produce better results as more data becomes available.
This work proposes a feed-forward neural network with the symmetric table addition method to design the neuron-synapse algorithm for sine function approximation, according to the Taylor series expansion. MATLAB code and LabVIEW are used to build and create the neural network, which has been designed and trained on a database set to improve its performance, and achieves good global convergence with a small MSE and 97.22% accuracy.
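The Taylor series expansion on which the approximation above is based can be sketched directly:

```python
import math

def sine_taylor(x, terms=10):
    """Approximate sin(x) by its Taylor series about 0:
    sin(x) = x - x^3/3! + x^5/5! - ...
    The number of retained terms controls the approximation error."""
    total = 0.0
    for n in range(terms):
        total += (-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
    return total

print(abs(sine_taylor(1.0) - math.sin(1.0)) < 1e-9)  # True
```

Table-based methods of the kind named in the abstract trade some of this term-by-term arithmetic for lookups, which is what makes them attractive for hardware implementation.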
Differential Protection of Generator by Using Neural Network, Fuzzy Neural an... - Waqas Tariq
This document discusses three techniques for implementing differential protection of generators: neural networks, fuzzy neural networks, and fuzzy neural Petri nets. It provides an overview of each technique, including describing the basic structure and learning algorithms. The techniques are evaluated based on their ability to detect faults with higher sensitivity compared to conventional differential relay methods.
NETWORK LEARNING AND TRAINING OF A CASCADED LINK-BASED FEED FORWARD NEURAL NE... - ijaia
Presently, considering the technological advancement of our modern world, we are in dire need of a system that can learn new concepts and make decisions on its own. Hence the Artificial Neural Network is all that is required in the contemporary situation. In this paper, CLBFFNN is presented as a special and intelligent form of artificial neural network that can adapt to training and learning of new ideas and give decisions in a trimodal biometric system involving fingerprint, face and iris biometric data. It also gives an overview of neural networks.
Mobile Network Coverage Determination at 900MHz for Abuja Rural Areas using A... - ijtsrd
This study proposes Artificial Neural Network (ANN) based field strength prediction models for the rural areas of Abuja, the Federal Capital Territory of Nigeria. The ANN based models were created on the basis of the Generalized Regression Neural Network (GRNN) and the Multi-Layer Perceptron Neural Network (MLP-NN). These networks were created, trained and tested for field strength prediction using received power data recorded at 900 MHz from multiple Base Transceiver Stations (BTSs) distributed across the rural areas. Results indicate that the GRNN and MLP-NN based models, with Root Mean Squared Error (RMSE) values of 4.78 dBm and 5.56 dBm respectively, offer significant improvement over their empirical Hata-Okumura counterpart, which overestimates the signal strength with an RMSE value of 20.17 dBm. Deme C. Abraham, "Mobile Network Coverage Determination at 900MHz for Abuja Rural Areas using Artificial Neural Networks", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4, Issue-2, February 2020.
URL: https://www.ijtsrd.com/papers/ijtsrd30228.pdf
Paper Url : https://www.ijtsrd.com/computer-science/artificial-intelligence/30228/mobile-network-coverage-determination-at-900mhz-for-abuja-rural-areas-using-artificial-neural-networks/deme-c-abraham
Optimization of Number of Neurons in the Hidden Layer in Feed Forward Neural ... - IJERA Editor
The architectures of Artificial Neural Networks (ANN) are based on the problem domain; an architecture is applied during the 'training phase' on sample data and used to infer results for the remaining data in the testing phase. Normally, the architecture consists of three layers: an input layer with as many nodes as there are known values on hand, an output layer whose nodes hold the results to be computed from the input and hidden nodes, and a hidden layer in between. The number of nodes in the hidden layer is decided heuristically, so that the optimum value is obtained within a reasonable number of iterations with the other parameters at their default values. This study mainly focuses on Cascade-Correlation Neural Networks (CCNN) using the Back-Propagation (BP) algorithm, which find the number of neurons during the training phase itself by appending one neuron per iteration while the error condition is satisfied, and which give a promising result on the optimum number of neurons in the hidden layer.
This document summarizes a research paper that proposes using a Real-Coded Genetic Algorithm to design Unified Power Flow Controller (UPFC) damping controllers. The goal is to damp low frequency oscillations in power systems. The paper models a single-machine infinite-bus power system installed with a UPFC. It linearizes the system equations and formulates the controller design as an optimization problem to minimize oscillations. Simulation results comparing the proposed RCGA approach to conventional tuning are presented to demonstrate its effectiveness and robustness in damping power system oscillations.
The main-principles-of-text-to-speech-synthesis-system - Cemal Ardil
This document discusses text-to-speech synthesis systems. It provides background on the history and development of such systems over three generations from 1962 to the present. It describes some of the main challenges in developing speech synthesis for different languages. The document then focuses on specifics of the Azerbaijani language and outlines the approach used in the text-to-speech synthesis system developed by the authors, which combines concatenative synthesis and formant synthesis methods.
The feedback-control-for-distributed-systems - Cemal Ardil
The document summarizes a study on feedback control synthesis for distributed systems. The study proposes a zone control approach, where the state space is partitioned into zones defined by observable points. Control actions are piecewise constant functions that only change when the system transitions between zones. An optimization problem is formulated to determine the optimal constant control value for each zone. Gradient formulas are derived to solve this using numerical optimization methods. The zone control approach was tested on heat exchanger process control problems and showed more robust performance than alternative methods.
System overflow blocking-transients-for-queues-with-batch-arrivals-using-a-fa... - Cemal Ardil
This document summarizes a research paper that analyzes the transient behavior of the overflow probability in a queuing system with fixed-size batch arrivals. It introduces a set of polynomials that generalize Chebyshev polynomials and can be used to assess the transient behavior. The key findings are:
- In the special case B = 1 of equation (9), the substitution x → 2x yields exactly the generating function of the Chebyshev polynomials of the second kind (here B denotes the batch size, λ the arrival rate, and μ the service rate).
Sonic localization-cues-for-classrooms-a-structural-model-proposal - Cemal Ardil
The document describes a proposed structural model for sonic localization cues in classrooms. It discusses two primary cues for localization - interaural time difference (ITD) and interaural level difference (ILD) created by sounds reaching each ear. While these cues provide azimuth information, they do not provide elevation information. Elevation information is provided by spectral filtering effects of the head, torso and outer ears (pinnae) known as the head related transfer function (HRTF). The proposed structural model aims to produce well-controlled horizontal and vertical localization cues through a signal processing model of the HRTF that mimics how sounds interact with the body. The effectiveness of the model is tested through synthesized spatial audio experiments with human subjects
This document summarizes a new method for designing robust fuzzy observers for nonlinear systems based on Takagi-Sugeno fuzzy models. The method uses linear matrix inequalities to design observers that minimize the H-norm of the closed loop system, providing a measure of robustness and disturbance attenuation. The observer design method is similar to existing parallel distributed compensation controller design methods, making it possible to adapt controller design techniques for observer design. The observer estimates system states and outputs based on measured outputs and system inputs while attenuating effects of disturbances and uncertainties.
This document discusses evaluating the response quality of heterogeneous question answering systems. It begins by noting the lack of standard evaluation metrics for systems that use natural language understanding and reasoning to answer questions, as opposed to just information retrieval. It proposes a "black-box" approach to evaluate response quality by observing system responses, developing a classification scheme to categorize responses, and assigning scores. As a demonstration, it applies this approach to evaluate three example systems (AnswerBus, START, and NaLURI) on a set of questions about cyberlaw.
The document describes two methods for reducing the order of linear time-invariant systems: Routh approximation and particle swarm optimization (PSO). Routh approximation determines the denominator of the reduced order model using a Routh array, while retaining time moments or Markov parameters to determine the numerator. PSO reduces order by minimizing the integral squared error between responses of the original and reduced models, adjusting numerator and denominator coefficients. The methods are illustrated on examples, with Routh approximation providing stability guarantees when applied to stable systems.
Real coded-genetic-algorithm-for-robust-power-system-stabilizer-design - Cemal Ardil
This document summarizes a research paper that uses a real-coded genetic algorithm to optimize the design of power system stabilizers. The algorithm is applied to both single-machine and multi-machine power systems. The goal is to minimize rotor speed deviations and improve stability under disturbances. Simulation results show the proposed controller provides effective damping of low frequency oscillations across different operating conditions.
The document presents a method for obtaining the exact probability of error for block codes using soft-decision decoding and the eigenstructure of the code correlation matrix. It shows that under a unitary transformation, the performance evaluation of a block code becomes a one-dimensional problem involving only the dominant eigenvalue and its corresponding eigenvector. Simulation results demonstrate good agreement with the analysis, validating the method for computing the bit error rate of block codes based on the properties of the code correlation matrix.
This document presents an optimal supplementary damping controller design for Thyristor Controlled Series Compensator (TCSC) using Real-Coded Genetic Algorithm (RCGA). TCSC is capable of improving power system stability by modulating reactance during disturbances. The document proposes using a multi-objective fitness function consisting of damping factors and real parts of eigenvalues to optimize the parameters of a TCSC-based supplementary damping controller using RCGA. Simulation results presented show the effectiveness of the proposed controller over a wide range of operating conditions and disturbances.
This document presents a method for generating optimal straight line trajectories in 3D space using an algorithm called the Bounded Deviation Algorithm (BDA). BDA approximates a straight line trajectory between two points by iteratively inserting knot points to minimize the deviation between the actual trajectory and the joint space trajectory. The document provides the mathematical formulation and simulation results of applying BDA to generate a straight line trajectory for a 5-axis articulated robot between two specified points.
On the-optimal-number-of-smart-dust-particles - Cemal Ardil
This document discusses optimizing the number of smart dust particles used to generate weather maps. It addresses two main challenges: 1) how to match signals from smart dust particles to receivers given atmospheric constraints, and 2) what is the optimal number of particles needed to generate precise and cost-effective 3D maps. The document presents an algorithm to optimally match particles to receivers in O(n*m) time by framing it as a maximal bipartite graph matching problem. It also develops mathematics to prove a conjecture that the optimal number of particles is approximately 1/ε, where ε is the drift error.
On the-joint-optimization-of-performance-and-power-consumption-in-data-centers - Cemal Ardil
The document summarizes research on jointly optimizing performance and power consumption in data centers. It models the process of mapping tasks in a data center onto machines as a multi-objective problem to minimize both energy consumption and response time (makespan), subject to deadline and architectural constraints. It proposes using a simple goal programming technique that guarantees Pareto optimal solutions with good convergence. Simulation results show the technique achieves superior performance compared to other approaches and is competitive with optimal solutions for small-scale problems.
On the-approximate-solution-of-a-nonlinear-singular-integral-equation - Cemal Ardil
This document summarizes a study on finding approximate solutions to nonlinear singular integral equations. The study proves the existence and uniqueness of solutions to such equations defined on bounded regions of the complex plane. It then presents a method for finding approximate solutions using an iterative fixed-point principle approach. Nonlinear singular integral equations have many applications in fields like elasticity, fluid mechanics, and mathematical physics. The study contributes to improving methods for solving these important types of equations.
On problem-of-parameters-identification-of-dynamic-object - Cemal Ardil
This document discusses methods for identifying parameters of dynamic objects described by systems of ordinary differential equations. Specifically, it addresses problems with multiple initial boundary conditions that are not shared across points.
The paper proposes a new "conditions shift" method to transfer the initial boundary conditions in a way that eliminates differential links and multipoint conditions. This reduces the parameter identification problem to solving either an algebraic system or a quadratic programming problem.
Two cases are considered: case A where the number of conditions equals the number of conditionally free parameters, resulting in a single parameter vector solution. Case B where additional conditions on the parameters are needed in the form of equalities or inequalities, resulting in an optimization problem to select optimal parameter values.
This document summarizes a research article about numerical modeling of gas turbine engines. The researchers developed mathematical models and numerical methods to calculate the stationary and quasi-stationary temperature fields of gas turbine blades with convective cooling. They combined the boundary integral equation method and finite difference method to solve this problem. The researchers proved the validity of these methods through theorems and estimates. They were able to visualize the temperature profiles using methods like least squares fitting with automatic interpolation, spline smoothing, and neural networks. The reliability of the numerical methods was confirmed through calculations and experimental tests of heat transfer characteristics on gas turbine nozzle blades.
New technologies-for-modeling-of-gas-turbine-cooled-blades - Cemal Ardil
The document describes new technologies for modeling gas turbine cooled blades, including:
1) Developing mathematical models and numerical methods using the boundary integral equation method (BIEM) and finite difference method (FDM) to calculate the stationary and quasi-stationary temperature field of a blade profile with convective cooling.
2) Using splines, smooth interpolation, and neural networks for visualization of blade profiles.
3) Validating the designed methods through computational and experimental investigations of heat and hydraulic characteristics of a gas turbine nozzle blade.
This document discusses using neuro-fuzzy networks to identify parameters for mathematical models of geofields. It proposes a new technique using fuzzy neural networks that can be applied even when data is limited and uncertain in the early stages of modeling. A numerical example is provided to demonstrate the identification of parameters for a regression equation model of a geofield using a fuzzy neural network structure. The network is trained on experimental fuzzy statistical data to determine values for the regression coefficients that satisfy the data. The technique is concluded to have advantages over traditional statistical methods as it can be applied regardless of the parameter distribution and is well-suited for cases with insufficient data in early modeling stages.
This document presents a new multivariate fuzzy time series forecasting method to predict car road accidents. The method uses four secondary factors (number killed, mortally wounded or died within 30 days of the accident, severely wounded, and lightly wounded) along with the main factor of total annual car accidents in Belgium from 1974 to 2004. The new method establishes fuzzy logical relationships between the factors to generate forecasts. Experimental results show the proposed method performs better than existing fuzzy time series forecasting approaches at predicting car accidents. Actuaries can use this kind of multivariate fuzzy time series analysis to help define insurance premiums and underwriting.
World Academy of Science, Engineering and Technology
International Journal of Computer, Information Science and Engineering Vol:1 No:7, 2007
Approximate Bounded Knowledge Extraction
using Type-I Fuzzy Logic
Syed Muhammad Aqil Burney, Senior Member IEEE, Tahseen Ahmed Jilani, Member IEEE,
Cemal Ardil
International Science Index 7, 2007 waset.org/publications/4845
Abstract—Using a neural network, we try to model an unknown function f for given input-output data pairs. The connection strength of each neuron is updated through learning. Repeated simulations of a crisp neural network produce different values of the weight factors, which are directly affected by the change of different parameters. We propose the idea that, for each neuron in the network, we can obtain quasi-fuzzy weight sets (QFWS) using repeated simulation of the crisp neural network. Such fuzzy weight functions may be applied where we have multivariate crisp input that needs to be adjusted after iterative learning, as in claim amount distribution analysis. As real data is subject to noise and uncertainty, QFWS may be helpful in the simplification of such complex problems. Secondly, these QFWS provide a good initial solution for the training of fuzzy neural networks, with reduced computational complexity.
Keywords—Crisp neural networks, fuzzy systems, extraction of logical rules, quasi-fuzzy numbers.
I. INTRODUCTION
Fusion of artificial neural networks and fuzzy inference systems has attracted the growing interest of researchers
systems have attracted the growing interest of researchers
in various scientific and engineering areas due to the growing
need of adaptive intelligent systems to solve the real world
problems. A crisp or fuzzified neural network can be viewed
as a mathematical model for brain-like systems. The learning
process increases the sum of knowledge of the neural network
by improving the configuration of weight factors. Fuzzy
neural networks are a generalization of crisp neural networks to
process both numerical information from measuring
instruments and linguistic information from human experts,
see [2], [14], and [15]. Thus, fuzzy inference systems can be
used to emulate human expert knowledge and experience. An
overview of different fuzzy neural network architectures is
discussed by [5] and [7], who classified them as follows:
1) a fuzzy neural network may take crisp or fuzzy values as inputs and can return crisp or fuzzy outputs;
Manuscript received July 13, 2005.
S. M. Aqil Burney is Professor in the Department of Computer Science, University of Karachi, Pakistan (phone: 0092-21-9243131 ext. 2447, fax: 0092-21-9243203, e-mail: Burney@computer.org, aqil_burney@yahoo.com).
Tahseen Ahmed Jilani is lecturer and research fellow in the Department of Computer Science, University of Karachi, Pakistan (e-mail: tahseenjilani@ieee.org).
Cemal Ardil is with the National Academy of Aviation, Baku, Azerbaijan (e-mail: cemalardil@gmail.com).
2) another class of fuzzy neural networks is feedforward
neural networks which are defined from conventional
feedforward neural networks by substituting fuzzified
neurons for crisp ones. These are named as regular fuzzy
neural networks.
It is much more difficult to develop the learning algorithms
for the fuzzy neural networks than for the crisp neural
networks; this is because the inputs, connection weights and
bias terms related to a regular fuzzy neural network are fuzzy
sets, see [17], [22] and [24].
The paper is organized as follows. In Section II, we make a short study of learning procedures in crisp neural networks. In Section III, we present concepts of fuzzy logic and quasi-fuzzy sets. In Section IV, simulation experiments using a crisp neural network are performed repeatedly to obtain quasi-fuzzy sets.
These sets provide the initial solution for type-I neuro-fuzzy
networks as discussed by [9], [28] and [29]. To our
knowledge, the concept of obtaining fuzzy weights through
a crisp neural network has not been investigated in the
literature.
II. NEURAL NETWORKS
Using a neural network, we try to model the unknown function f for given input-output data pairs. Existing approaches to this problem include regression modeling, neural networks, and wavelet theory. A neural network can be regarded as a representation of a function determined by its weight factors and network architecture [15]. The overall
mapping is thus characterized by a composite function relating
feedforward network inputs to output. That is
O = f_{composite}(x).

Using p mapping layers in a (p+1)-layer feedforward net yields

O = f_{L_p}( f_{L_{p-1}}( \ldots f_{L_1}(x) \ldots ) ).
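The layered composite mapping above can be sketched as nested function application. This is an illustrative sketch only; the layer maps below are hypothetical stand-ins, not the paper's network.

```python
# Illustrative sketch of the composite mapping O = f_Lp(... f_L1(x) ...):
# each layer is a function, and the network output is their composition.
from functools import reduce

def compose_layers(layers):
    """Return x -> f_Lp(...(f_L1(x))...) for layers listed input-first."""
    return lambda x: reduce(lambda acc, f: f(acc), layers, x)

# Hypothetical layer maps standing in for f_L1, f_L2, f_L3:
layers = [lambda x: 2 * x, lambda x: x + 1, lambda x: x ** 2]
net = compose_layers(layers)
print(net(3))   # ((2*3) + 1)^2 = 49
```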
Usually, we train a neural network with a training set, presenting inputs to the network and interpreting the outputs according to the logical rules in the training set, see [1], [3], [4] and [21]. The most commonly used technique to adjust the weight parameters of a neural network is the backpropagation method based on LMS learning, with the cost function defined as
J = E\left[ \sum_k e_k^2(n) \right],
where k is the number of output neurons. The weights are updated as

w_{ji}^l(n+1) = w_{ji}^l(n) + \eta \, \delta_j^l(n) \, y_i^{l-1}(n),

where \eta is the learning rate and \delta_j^l(n) is the local gradient at each neuron in the learning, see [15]:

\delta_j^l(n) = e_j^L(n) \, \Phi_j'(\upsilon_j^L(n))   (for neuron j in output layer L),
\delta_j^l(n) = \Phi_j'(\upsilon_j^l(n)) \sum_k \delta_k^{l+1}(n) \, w_{kj}^{l+1}(n)   (for neuron j in hidden layer l).
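The update rule above can be sketched in code. This is a minimal illustration, not the authors' implementation: network size, data (the standard XOR mapping) and learning rate are assumptions chosen for brevity.

```python
import numpy as np

# Minimal sketch of LMS backpropagation: w <- w + eta * delta * y, with
# the local gradient delta taken from the output error at the output
# layer and back-propagated through the weights for the hidden layer.

rng = np.random.default_rng(0)

def phi(v):                      # logistic activation Phi
    return 1.0 / (1.0 + np.exp(-v))

def phi_prime(v):                # its derivative Phi'
    s = phi(v)
    return s * (1.0 - s)

# Toy input-output pairs: the XOR mapping, a standard nonlinear target.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
t = np.array([0.0, 1.0, 1.0, 0.0])

W1 = rng.normal(scale=0.5, size=(2, 3)); b1 = np.zeros(3)   # input -> hidden
W2 = rng.normal(scale=0.5, size=3);      b2 = 0.0           # hidden -> output
eta = 0.5                                                    # learning rate

def forward(x):
    v1 = x @ W1 + b1
    y1 = phi(v1)
    v2 = y1 @ W2 + b2
    return v1, y1, v2, phi(v2)

init_err = sum((target - forward(x)[3]) ** 2 for x, target in zip(X, t))

for epoch in range(5000):
    for x, target in zip(X, t):
        v1, y1, v2, y2 = forward(x)
        delta_out = (target - y2) * phi_prime(v2)        # output-layer delta
        delta_hid = phi_prime(v1) * (delta_out * W2)     # hidden-layer delta
        W2 += eta * delta_out * y1;  b2 += eta * delta_out
        W1 += eta * np.outer(x, delta_hid); b1 += eta * delta_hid

final_err = sum((target - forward(x)[3]) ** 2 for x, target in zip(X, t))
```

After training, the squared error over the training pairs should be well below its initial value, illustrating the iterative weight adjustment described above.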
But to deal with noisy and uncertain information, a crisp neural network has to use concepts of fuzzy inference systems [27].

Fig. 1 Structure of a crisp artificial neural network

III. FUZZY LOGIC
Fuzzy logic was originally proposed by Prof. Lotfi A. Zadeh to quantitatively and effectively handle problems involving uncertainty, ambiguity and vagueness, see [12] and [13]. The theory, which is now well established, was specifically designed to mathematically represent uncertainty and vagueness and to provide formalized tools for dealing with the imprecision intrinsic to many real-world problems. Fuzzy logic is inherently robust since it does not require precise, noise-free inputs. Fuzzy inference systems are the most reliable alternative if a mathematical model of the system to be controlled is unavailable [11], [18] and [26]. Fuzzy sets and fuzzy rules can be formulated in terms of linguistic variables. Methods of fuzzy logic are commonly used to model a complex system by a set of rules provided by experts, but fuzzy rules can also be applied to the reverse problem: given the input-output behavior of a system, what are the rules governing that behavior?
We cite the definitions of fuzzy set, membership function, crossover points, alpha-cut sets and convexity of a fuzzy set, see [10].

Definition 1: If X is a collection of objects denoted generically by x, then a fuzzy set A is defined as a set of ordered pairs

A = \{ (x, \mu_A(x)) \mid x \in X \},

where \mu_A(x) is called the membership function of the fuzzy set A. The membership function maps each element of X to a membership grade between 0 and 1.

Definition 2: The \alpha-cut or \alpha-level set of a fuzzy set A is the non-fuzzy set A^\alpha defined as

A^\alpha = \{ x \mid \mu_A(x) \ge \alpha \}.    (1)

Thus every fuzzy set can be represented as the set of its \alpha-cuts,

A = \{ A^{\alpha_1}, A^{\alpha_2}, \ldots, A^{\alpha_m} \}.    (2)

Definition 3: A fuzzy set A is convex if and only if, for any x_1, x_2 \in X and any \lambda \in [0, 1],

\mu_A(\lambda x_1 + (1 - \lambda) x_2) \ge \min(\mu_A(x_1), \mu_A(x_2)).

Alternatively, A is convex if all of its \alpha-cut sets are convex.

Definition 4: A quasi-fuzzy number A is a fuzzy set of the real line with a normal, fuzzy convex and continuous membership function satisfying

\lim_{t \to -\infty} A(t) = 0, \quad \lim_{t \to \infty} A(t) = 0.    (3)

Let A be a fuzzy number. Then A^\gamma is a closed convex subset of R for all \gamma \in [0, 1], with

a_l(\gamma) = \min A^\gamma, \quad a_r(\gamma) = \max A^\gamma,
a_l : [0, 1] \to R, \quad a_r : [0, 1] \to R.    (4)

Then A^\gamma = [a_l(\gamma), a_r(\gamma)]. The support of A is the open interval (a_l(\gamma), a_r(\gamma)).

Definition 5: A triangular membership function is specified by three parameters (a_l, a_m, a_r) as follows:
trian(x; a_l, a_m, a_r) = \max\left( \min\left( \frac{x - a_l}{a_m - a_l}, \frac{a_r - x}{a_r - a_m} \right), 0 \right).    (5)

Fig. 2 Quasi triangular fuzzy set
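The triangular membership function of eq. (5) and the alpha-cut of Definition 2 can be sketched for a triangular fuzzy number; for such a number the alpha-cut is a closed interval whose endpoints move up both sides of the triangle. The numeric parameters below are hypothetical illustration values.

```python
# Sketch of eq. (5) and the alpha-cut (Definition 2) of a triangular
# fuzzy number; parameter values are hypothetical.

def trian(x, a_l, a_m, a_r):
    """Eq. (5): max(min((x - a_l)/(a_m - a_l), (a_r - x)/(a_r - a_m)), 0)."""
    return max(min((x - a_l) / (a_m - a_l), (a_r - x) / (a_r - a_m)), 0.0)

def alpha_cut(alpha, a_l, a_m, a_r):
    """Closed interval {x : trian(x) >= alpha}; both sides rise linearly."""
    return (a_l + alpha * (a_m - a_l), a_r - alpha * (a_r - a_m))

# Hypothetical fuzzy weight with support (0.2, 0.9) and peak at 0.5:
print(trian(0.5, 0.2, 0.5, 0.9))       # membership at the peak
print(alpha_cut(0.5, 0.2, 0.5, 0.9))   # alpha-cut at alpha = 0.5
```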
IV. EXPERIMENT
In this paper we demonstrate learning and obtaining fuzzy membership functions of the weight vectors, so as to obtain quasi-fuzzy weight sets. The input/target pair presented to the network is {X, t}, where X = [x_1, x_2, x_3, x_4, x_5]. A crisp neural network with three hidden neurons and one output neuron is trained with a performance goal of 1e-06, and the simulation is repeated for the first 100 successful runs.
Daily closing share prices from the Karachi Stock Exchange over 200 trading days are considered and preprocessed. For each of the hidden neurons and the output neuron, the simulated weight values may be plotted.
The QFWS of the first input connected to all three neurons are shown in Fig. 4. The triangular membership function is used due to its reduced complexity, see [8], [19] and [20]. For w_{1,1}^1, as shown in Fig. 3, using (5) the parameters of the triangular membership function are

a_l = \min(w_{1,1}^1), \quad a_r = \max(w_{1,1}^1), \quad a_m = \frac{a_l + a_r}{2}.    (6)

In order to reduce computational expense, we use triangular fuzzy numbers \tilde{a} = (a_m, a_l, a_r)_{trian} to define the fuzzy weights. These quasi-fuzzy weight sets follow fuzzy arithmetic, and thus can be used for fuzzy neural networks.

Fig. 3 (a) Input weight matrix for the first input vector; (b) triangular membership functions for the first weight matrix w_{i,1}, i = 1, 2, 3 (number of neurons)

Secondly, [15] notes that each hidden weight connection of a neuron lies approximately in the interval

-\frac{1}{n_h} < w_{ij} < \frac{1}{n_h}.    (7)

Our proposed interval-based weight set in eq. (6) provides a somewhat larger interval in which to search for the weights of the hidden part of a fuzzy neural network.

Fig. 4 Proposed quasi-fuzzy weight neural network
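The construction of eqs. (6) and (7) can be sketched as follows. The "training runs" are simulated here by random draws, which are hypothetical stand-ins for the weight values that 100 successful crisp trainings would record; no claim is made about the paper's actual data.

```python
import numpy as np

# Sketch of building a quasi-fuzzy weight set (QFWS) per eq. (6) and
# comparing its support with the heuristic interval of eq. (7).

rng = np.random.default_rng(1)
n_runs = 100                 # first 100 successful simulations, as in the paper
n_h = 3                      # hidden neurons, as in the experiment

# Hypothetical recorded values of one hidden weight w^1_{1,1}, one per run:
w11 = rng.normal(loc=0.4, scale=0.25, size=n_runs)

# Eq. (6): triangular fuzzy weight (a_l, a_m, a_r)
a_l, a_r = w11.min(), w11.max()
a_m = 0.5 * (a_l + a_r)

# Eq. (7): heuristic initialization interval (-1/n_h, 1/n_h)
lo, hi = -1.0 / n_h, 1.0 / n_h

print("QFWS:", (a_l, a_m, a_r))
print("QFWS support width:", a_r - a_l, "heuristic width:", hi - lo)
```

With these illustrative draws the QFWS support comes out wider than the heuristic interval, mirroring the observation above that eq. (6) yields a somewhat larger search interval than eq. (7).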
V. CONCLUSION
We described the architecture of QFWS-based fuzzified neural networks and presented a general framework for learning algorithms of fuzzified neural networks. Neuro-fuzzy learning with fuzzy weights requires the initialization of interval-based fuzzy sets, which requires more computation than crisp learning in order to deal with the uncertainty, vagueness and linguistic behavior of some real-life situations, see [6], [16], [23] and [25].
Further improved identification of suitable membership functions is possible by determining the underlying probability structure of the synaptic connections of a crisp neural network. Thus, based on this idea, we can form fuzzy inference systems with varying rules. This may provide new research directions for comparing different QFWS-based fuzzy neural networks.
ACKNOWLEDGEMENT
The authors are very thankful to Mr. M. Najam-ul-Hasnain of the Department of Computer Science, University of Karachi, for his computing support.
The authors would also like to thank the referees for their helpful suggestions and comments.
REFERENCES
[1] Aqil Burney S. M., Jilani T. A. and Cemal Ardil, “A comparative study of first and second order training algorithms for artificial neural networks”, Int. Journal of Computational Intelligence, vol. 1, no. 3, 2004, pp. 218-224.
[2] Aqil Burney S. M., Jilani T. A. and Cemal Ardil, “Levenberg-Marquardt algorithm for Karachi Stock Exchange share rates forecasting”, Int. Journal of Computational Intelligence, vol. 1, no. 2, 2004, pp. 168-173.
[3] Aqil Burney S. M. and Jilani T. A., “Time series forecasting using artificial neural network methods for Karachi Stock Exchange”, a project in the Dept. of Computer Science, University of Karachi, 2002.
[4] F. Scarselli and A. C. Tsoi, “Universal approximation using feedforward neural networks: A survey of some existing methods, and some new results”, Neural Networks, vol. 11, no. 1, 1998, pp. 15-37.
[5] G. Castellano and A. M. Fanelli, “Fuzzy inference and rule extraction using a neural network”, Neural Network World Journal, vol. 3, 2000, pp. 361-371.
[6] H. Ishibuchi and M. Nii, “Numerical analysis of the learning of fuzzified neural networks from if-then rules”, Fuzzy Sets and Systems, vol. 120, no. 2, 2001, pp. 281-307.
[7] H. Ishibuchi, Fujioka and Tanaka, “Neural networks that learn from fuzzy if-then rules”, IEEE Transactions on Fuzzy Systems, vol. 1, no. 2, 1993.
[8] J. Dunyak and D. Wunsch, “Training fuzzy number neural networks with alpha-cut refinement”, in Proc. IEEE Int. Conf. Systems, Man, and Cybernetics, vol. 1, 1997, pp. 189-194.
[9] J. J. Buckley and Y. Hayashi, “Neural networks for fuzzy systems”, Fuzzy Sets and Systems, 1995, pp. 265-276.
[10] Jang, Sun and Mizutani, Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence. New York: Prentice-Hall, 2003, chap. 2-4.
[11] Jerry M. Mendel, Uncertain Rule-Based Fuzzy Logic Systems: Introduction and New Directions. New York: Prentice Hall PTR, NJ, 2001, chap. 1-7.
[12] L. A. Zadeh, “The concept of a linguistic variable and its application to approximate reasoning”, Parts I, II, III, Information Sciences, 8 (1975) 199-251; 8 (1975) 301-357; 9 (1975) 43-80.
[13] L. A. Zadeh, “Outline of a new approach to the analysis of complex systems and decision processes”, IEEE Trans. Systems, Man and Cybernetics, vol. 3, 1973, pp. 28-44.
[14] Amir F. Atiya, Suzan M. El-Shoura, Samir I. Shaheen, “A comparison between neural network forecasting techniques - case study: river flow forecasting”, IEEE Trans. on Neural Networks, vol. 10, no. 2, 1999.
[15] C. M. Bishop, Neural Networks for Pattern Recognition. United Kingdom: Clarendon Press, 1995, chap. 5-7.
[16] D. Nauck and R. Kruse, “Designing neuro-fuzzy systems through backpropagation”, in Fuzzy Modeling: Paradigms and Practice, Kluwer, Boston, 1996, pp. 203-228.
[17] Nauck, Detlef and Kruse, Rudolf, “Designing neuro-fuzzy systems through backpropagation”, in Witold Pedrycz, editor, Fuzzy Modeling: Paradigms and Practice, Kluwer, Boston, 1996, pp. 203-228.
[18] Nilesh N. Karnik, Jerry M. Mendel and Qilian Liang, “Type-2 fuzzy logic systems”, IEEE Trans. Fuzzy Syst., vol. 7, no. 6, 1999, pp. 643-658.
[19] P. Eklund, J. Forsstrom, A. Holm, M. Nystrom, and G. Selen, “Rule generation as an alternative to knowledge acquisition: A systems architecture for medical informatics”, Fuzzy Sets and Systems, vol. 66, 1994, pp. 195-205.
[20] P. Eklund, “Network size versus preprocessing”, in Fuzzy Sets, Neural Networks and Soft Computing, Van Nostrand, New York, 1994, pp. 250-264.
[21] P. K. H. Phua and Daohua Ming, “Parallel nonlinear optimization techniques for training neural networks”, IEEE Trans. on Neural Networks, vol. 14, no. 6, 2003, pp. 1460-1468.
[22] Puyin Liu and Hongxing Li, “Efficient learning algorithms for three-layer regular feedforward fuzzy neural networks”, IEEE Trans. Fuzzy Syst., vol. 15, no. 3, 2004, pp. 545-558.
[23] S. Mitra and Y. Hayashi, “Neuro-fuzzy rule generation: Survey in soft computing framework”, IEEE Trans. Neural Networks, vol. 11, no. 3, 2000, pp. 748-768.
[24] S. M. Chen, “A weighted fuzzy reasoning algorithm for medical diagnosis”, Decision Support Systems, vol. 11, 1994, pp. 37-43.
[25] Sungwoo Park and Taisook Han, “Iterative inversion of fuzzified neural networks”, IEEE Trans. Fuzzy Syst., vol. 8, no. 3, 2000, pp. 266-280.
[26] T. Takagi and M. Sugeno, “Fuzzy identification of systems and its applications to modeling and control”, IEEE Trans. Syst. Man Cybernet., 1985, pp. 116-132.
[27] Włodzisław Duch, “Uncertainty of data, fuzzy membership functions, and multilayer perceptrons”, IEEE Trans. on Neural Networks, vol. 16, no. 1, 2005.
[28] Xinghu Zhang, Chang-Chieh Hang, Shaohua Tan and Pei-Zhuang Wang, “The min-max function differentiation and training of fuzzy neural networks”, IEEE Trans. Neural Networks, vol. 7, no. 5, 1996, pp. 1139-1149.
[29] Y. Hayashi, J. J. Buckley, and E. Czogala, “Fuzzy neural network with fuzzy signals and weights”, in Proc. Int. Joint Conf. Neural Networks, vol. 2, Baltimore, MD, 1992, pp. 696-701.
S. M. Aqil Burney received the B.Sc., first class first M.Sc. and M.Phil. from Karachi University in 1970, 1972 and 1983 respectively. He received the Ph.D. degree in Mathematics from Strathclyde University, Glasgow, with specialization in estimation, modeling and simulation of multivariate time series models using an algorithmic approach with software development.
He is currently professor and approved supervisor in Computer Science and Statistics by the Higher Education Commission, Government of Pakistan. He is also a member of various higher academic boards of different universities of Pakistan. His research interests include AI, soft computing, neural networks, fuzzy logic, data mining, statistics, simulation and stochastic modeling of mobile communication systems and networks, and network security. He is the author of three books and various technical reports, has supervised more than 100 software/information technology projects at the Masters level, and has been project director of various projects funded by the Government of Pakistan. He is a member of IEEE (USA) and ACM (USA), a fellow of the Royal Statistical Society, United Kingdom, and also a member of the Islamic Society of Statistical Sciences. He has been teaching since 1973 in various universities in the fields of econometrics, bio-statistics, statistics, mathematics and computer science, and has vast education-management experience at the university level. Dr. Burney has received appreciations and awards for his research and as an educationist, including the NCR-2002 award for Best National Educationist.
Tahseen A. Jilani received the B.Sc., first class second M.Sc. (Statistics) and M.A. (Economics) from Karachi University in 1998, 2001 and 2003 respectively. Since 2003, he has been a Ph.D. research fellow in the Department of Computer Science, University of Karachi.
He is a member of the IEEE Computational Intelligence Society. His research interests include AI, neural networks, soft computing, fuzzy logic, statistical data mining and simulation. He has been teaching since 2002 in the fields of Statistics, Mathematics and Computer Science.