Paper presented at the IEEE Symposium on Computational Intelligence for Creativity and Affective Computing (CICAC 2013), held as a part of the IEEE Symposium Series on Computational Intelligence (SSCI 2013), Singapore, 15-19 April 2013
CupCarbon simulator: Simulating the D-LPCN algorithm to find the boundary nodes of a WSN by Ahcene Bounceur, University of Bretagne Occidentale, Brest, France
Student intervention detection using deep learning technique by Venkat Projects
The document discusses using an artificial neural network (ANN) and deep learning techniques to detect issues in student performance data. It provides background on ANNs, describing their three-layer structure and learning abilities. It then details using Python packages like Keras, TensorFlow and scikit-learn to build and train an ANN model on a student dataset to perform student intervention detection. Screenshots are included showing the uploading of data and exploration, preprocessing, model generation and results.
Application of support vector machines for prediction of anti-HIV activity of... by Alexander Decker
This document describes a study that used support vector machines (SVM) to develop a quantitative structure-activity relationship (QSAR) model to predict the anti-HIV activity of TIBO derivatives. The SVM model achieved high correlation (q2=0.96) and low error (RMSE=0.212), outperforming artificial neural networks and multiple linear regression models developed on the same data set. The results indicate that SVM is a valuable tool for QSAR modeling and predicting anti-HIV activity of chemical compounds.
Resume of Masamichi Takagi on Jul 19, 2010 by masataka2
Masamichi Takagi is seeking a position in software research, hardware research, or product management. He has 6 years of experience in processor architecture, parallel processing, compilers, and communication middleware. His education includes a Ph.D. in Computer Science from the University of Tokyo and he has published papers on topics including parallelization, speculative multi-threading, and cache algorithms.
Coding the Matrix: Linear Algebra through Computer Science Applications by Vassilios Rendoumis
In this class, you have learned key concepts and methods of linear algebra, using them to think about problems in computer science. You have implemented basic matrix and vector functionality and algorithms, and used them to process real-world data.
One page summary of master thesis "Mathematical Analysis of Neural Networks" by Alina Leidinger
This is a one page summary of my master thesis which I handed in on June 15, 2019 at TUM. The thesis takes the form of a literature review on the existing rigorous analysis on neural networks. It focuses on 3 key aspects: modern and classical results in approximation theory, robustness of neural networks and unique identification of neural network weights. The thesis was supervised by Prof. Dr. Massimo Fornasier at the Chair of Applied Numerical Analysis of the Mathematics Department at TUM.
Pattern recognition system based on support vector machines by Alexander Decker
This document describes a study that uses support vector machines (SVM) to develop quantitative structure-activity relationship (QSAR) models for predicting the anti-HIV activity of 1,3,4-oxadiazole substituted naphthyridine derivatives based on their molecular descriptors. The SVM model achieved a cross-validation R2 value of 0.90 and RMSE of 0.145, outperforming artificial neural network and multiple linear regression models. An external validation on an independent test set found the SVM model had an R value of 0.96 and RMSE of 0.166, demonstrating good predictive ability.
Jeremy Hadidjojo is a PhD candidate in physics at the University of Michigan with expertise in computational physics, mathematical modeling, simulation, and data analysis. His research focuses on developing physical models of biological pattern formation and applying machine learning techniques to analyze complex systems. He has extensive programming skills in MATLAB, Python, C++ and experience with parallel and GPU computing. His published works include modeling mechanisms of planar cell chirality and retinal cone patterning in zebrafish.
Using Neural Networks and 3D sensors data to model LIBRAS gestures recognitio... by Gabriel Moreira
Paper entitled "Using Neural Networks and 3D sensors data to model LIBRAS gestures recognition", presented at II Symposium on Knowledge Discovery, Mining and Learning – KDMILE, USP, São Carlos, SP, Brazil.
20090219 The case for another systems biology modelling environment by Jonathan Blakes
The document discusses the need for a new systems biology modelling environment. It provides context on systems biology and existing modelling approaches and software. It then makes the case that a new modelling environment could improve the user experience for biologists by making models easier to build and refine while allowing for more complex models at larger scales. Key details on existing challenges and the proposed new environment are outlined.
This document discusses computational methods for identifying metabolites from tandem mass spectrometry data. It begins with background on metabolites and challenges in identification. Common approaches are described, including mass spectra libraries, in silico fragmentation using rules or machine learning, and machine learning methods. Recent machine learning works are summarized, such as using kernels to model peak interactions, unsupervised methods to group metabolites by shared substructures, and automatically recommending substructures from mass spectra. The document concludes that metabolite identification is important for metabolomics and machine learning is key to recent advances.
The document discusses improving neural network classification of astronomical objects into stars and galaxies. It analyzes the classifier used in the SExtractor software, which uses a multi-layer perceptron neural network trained on simulated data. The authors build their own classifier using WEKA to automatically select features and the neural network topology from real data classified by an expert. Their classifier achieved slightly better results than SExtractor and used fewer computational resources. However, more domain specific information is still needed to build a better star/galaxy separator.
This document summarizes a paper presentation on unsupervised image-to-image translation networks. The presentation discusses how unsupervised translation can be achieved between domains X1 and X2 using a shared latent space assumption, where corresponding images from different domains are assumed to have similar latent representations. It also references previous work on coupled generative adversarial networks and variational autoencoders that utilize a shared latent space for domain adaptation tasks.
K-means clustering is an unsupervised learning algorithm that groups unlabeled data points into a specified number of clusters based on their distances from initial random cluster centroids. It works by first randomly selecting cluster centroids, then assigning each data point to the closest centroid and adjusting the centroid positions iteratively until the clusters are stable or the maximum number of iterations is reached. Choosing the optimal number of clusters is important for accurate clustering results.
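The iterative assign-and-update procedure described in that summary can be sketched in a few lines of plain Python; the toy 2-D data, the choice of k, and the random seed below are illustrative assumptions, not details from the summarized document:

```python
import random

def kmeans(points, k, max_iters=100, seed=0):
    """Basic k-means on 2-D points: random initial centroids, then
    iterate assignment and centroid-update until clusters are stable."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # k distinct initial centroids
    labels = []
    for _ in range(max_iters):
        # Assignment step: each point goes to its nearest centroid.
        clusters = [[] for _ in range(k)]
        labels = []
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2
                                + (p[1] - centroids[c][1]) ** 2)
            clusters[j].append(p)
            labels.append(j)
        # Update step: move each centroid to the mean of its cluster.
        new_centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[j]
            for j, c in enumerate(clusters)
        ]
        if new_centroids == centroids:  # clusters are stable: stop early
            break
        centroids = new_centroids
    return labels, centroids

# Two well-separated blobs should land in different clusters.
data = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.1, 4.9)]
labels, cents = kmeans(data, k=2)
```

As the summary notes, the result depends on k and on the random initial centroids; production libraries typically rerun with several initializations and keep the best.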
Nirmal K. Bose was a world-renowned expert in signals and systems theory who made important contributions in unifying a broad range of engineering and applied mathematics problems under the framework of multidimensional signals and systems. He served as a professor at Pennsylvania State University from 1986 until his death in 2009, where he authored or edited fifteen books and founded the journal Multidimensional Systems and Signal Processing, which he edited for nineteen years. Bose received his Ph.D. in Electrical Engineering from Syracuse University and was renowned for his work on multidimensional systems theory, artificial neural networks, signal processing, and other areas.
Analyzing Resilience to Computational Glitches in Island-based Evolutionary A... by Rafael Nogueras
The document analyzes the resilience of island-based evolutionary algorithms to computational glitches in irregular computing environments like peer-to-peer networks and volunteer computing. It finds that island-based evolutionary algorithms can tolerate computational glitches like communication delays and temporary process deactivations with only moderate performance degradation at moderately high latency and deactivation rates. Future work could study harder failure scenarios, additional algorithm variants and network topologies, and varying migration parameters.
Computers excel at identifying patterns in large datasets, while humans can infer patterns from just a few examples. Researchers at MIT have created a new system that bridges this gap by allowing computers to teach and learn from only a few examples, as humans do. The system was presented at the Neural Information Processing Systems (NIPS) conference.
This document summarizes and compares different clustering algorithms that can be used for network anomaly detection. It proposes a method that first applies clustering algorithms like k-means, hierarchical, and expectation maximization clustering to partition network traffic data into clusters. It then applies the ID3 decision tree algorithm on each cluster to classify instances as normal or anomalous. The performance of this combined method is compared to using just the clustering or ID3 algorithms individually. Real network data sets are used to evaluate performance based on various metrics. The combined method is found to outperform the individual algorithms. The document also reviews several other related works applying clustering and decision trees for network anomaly detection and privacy-preserving data mining.
This document summarizes the education and research experience of Li Zhijie. It details that he received a PhD in Wireless Communication from Southwest Jiaotong University from 2004-2012, and received a Master's degree in Simulation from SAE College from 1993-1996. His research focused on wireless resource management and cross-layer design, resulting in 9 published papers and 1 patent. He worked on projects supported by the National Natural Science Foundation of China regarding wireless mesh networks and relay networks. His dissertation was on wireless resource management and cross-layer design for WLAN.
This document discusses biological neurons, artificial neurons, and cellular neural networks (CNNs). It provides an overview of CNNs, including their history, architecture, applications, advantages, and future scope. CNNs were proposed to reduce the number of interconnections between neurons in neural networks by only connecting neurons within a local neighborhood. A CNN is an array of dynamical systems with local connections only. Each cell in the CNN interacts with neighboring cells.
Invited lecture on Machine Learning in Medicine at the joint "Integrated Omics" course of Hanze University and University Hospital UMCG, Groningen, The Netherlands
ABSTRACT: An artificial neural network (ANN) is an information processing construct inspired by the way the brain processes information; ANNs were originally developed to mimic the learning process of the human brain. They have been increasingly used in the chemical industry for data analysis, process control, pattern identification, identification of drug targets, and the prediction of several physicochemical properties. This paper provides a brief introduction to neural networks and their applications in the chemical industry.
The document discusses an algorithm called Adaptive Multichannel Component Analysis (AMMCA) for separating image sources from mixtures using adaptively learned dictionaries. It begins by reviewing image denoising using learned dictionaries, then extends this to image separation from single mixtures. The key contribution is applying this approach to separating sources from multichannel mixtures by learning local dictionaries for each source during the separation process. The algorithm is described and simulated results are shown separating two images from a noisy mixture using the learned dictionaries. In conclusion, AMMCA is able to separate sources without prior knowledge of their sparsity domains by fusing dictionary learning into the separation process.
This document discusses artificial neural networks. It begins with an introduction and overview of the history and biological neuron model. It then explains the artificial neuron model and different learning methods like backpropagation. Applications of neural networks in areas like character recognition and stock market prediction are provided. The document concludes by discussing the advantages of neural networks like their ability to handle large amounts of data and learn continuously.
This document describes a study that developed a neuro-fuzzy system for predicting electricity consumption. The neuro-fuzzy system combines the learning capabilities of neural networks with the linguistic rule interpretation of fuzzy inference systems. It was applied to predict future electricity consumption in Northern Cyprus based on past consumption data. The system was trained using a supervised learning algorithm to determine optimal parameters. Simulation results showed the neuro-fuzzy system achieved more accurate predictions of electricity consumption than a neural network model alone, using fewer training epochs.
This document outlines the course EEL-5840 Elements of Machine Intelligence offered in the fall semester of 1999. The course introduces engineering concepts related to designing intelligent computer systems. Topics covered include reactive machines, search algorithms, problem representation and reasoning, rule-based deduction, AI programming in LISP and Prolog, and robot learning. Students complete bi-weekly programming assignments in LISP and projects implementing machine learning algorithms on robots. The course aims to provide both a classical and modern perspective on machine intelligence grounded in practical applications.
A Comparative Analysis of Feature Selection Methods for Clustering DNA Sequences by CSCJournals
Large-scale analysis of genome sequences is in progress around the world; a major application is to establish the evolutionary relationships among species using phylogenetic trees. Hierarchical agglomerative algorithms can generate such phylogenetic trees given a distance matrix representing the dissimilarity among the species. ClustalW and Muscle are two general-purpose programs that generate a distance matrix from the input DNA or protein sequences. The limitation of these programs is that they are based on the Smith-Waterman algorithm, which uses dynamic programming for pair-wise alignment. This is an extremely time-consuming process, and the existing systems may even fail to work for larger input data sets. To overcome this limitation, we have used the frequency of codon usage as an approximation of the dissimilarity among species. The proposed technique further reduces the complexity by extracting only the significant features of the species from the mtDNA sequences, using techniques such as frequent codons, codons with maximum range value, or PCA. We have observed that the proposed system produces nearly accurate results in a significantly reduced running time.
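The codon-usage approximation above reduces each sequence to a fixed-length feature vector from which pairwise dissimilarities can be computed cheaply. A minimal sketch of the idea, assuming Euclidean distance over codon-frequency vectors (the paper's exact dissimilarity measure is not given here, and the toy sequences are illustrative):

```python
from itertools import product

# All 64 codons in a fixed order, so every sequence maps to the same
# 64-dimensional frequency vector.
CODONS = ["".join(c) for c in product("ACGT", repeat=3)]

def codon_frequencies(seq):
    """Relative frequency of each codon in a DNA sequence, read in frame."""
    codons = [seq[i:i + 3] for i in range(0, len(seq) - len(seq) % 3, 3)]
    total = len(codons) or 1
    return [codons.count(c) / total for c in CODONS]

def distance(seq_a, seq_b):
    """Dissimilarity between two species as the Euclidean distance
    between their codon-usage vectors."""
    fa, fb = codon_frequencies(seq_a), codon_frequencies(seq_b)
    return sum((x - y) ** 2 for x, y in zip(fa, fb)) ** 0.5
```

Filling a distance matrix this way takes time linear in sequence length per pair, versus the quadratic dynamic-programming cost of a Smith-Waterman alignment; the matrix can then be fed to a hierarchical agglomerative algorithm to build the tree.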
The document describes a study that uses artificial neural networks (ANN), fuzzy inference systems (FIS), and adaptive neuro-fuzzy inference systems (ANFIS) to model and predict groundwater levels in the Thurinjapuram watershed in Tamil Nadu, India. Monthly rainfall and water level data from 1985 to 2008 were used as inputs, with one month ahead water level as the output. ANFIS performed best with lower error rates and higher correlation than ANN and FIS models according to statistical evaluations. Validation with unused 2009-2010 data showed ANFIS predictions were 80% accurate.
Reflectivity Parameter Extraction from RADAR Images Using Back Propagation Al...IJECEIAES
This document discusses using backpropagation algorithms to extract reflectivity parameters from Doppler weather radar images. It begins with an introduction to pattern recognition using neural networks and an overview of artificial neural networks. It then describes different backpropagation algorithms that can be used for training multilayer perceptrons, including Levenberg-Marquardt, conjugate gradient, and resilient backpropagation. The document presents a method to preprocess Doppler radar images and use a neural network trained with backpropagation to identify colors in the image and estimate the corresponding reflectivity values based on a provided color scale. It analyzes using various backpropagation algorithms to identify colors in Doppler radar images and extract reflectivity information without human intervention.
International Journal of Computer Science and Security Volume (2) Issue (5)CSCJournals
The document proposes a new method to calculate the distance matrix for phylogenetic tree construction in less computational time compared to traditional multiple sequence alignment methods. The method estimates the score of aligning two sequences in O(m+n) time using a 4-step scanning algorithm, where m and n are the lengths of the sequences. It then generates the distance matrix from the score matrix using the Feng-Doolittle formula. A phylogenetic tree is constructed from the distance matrix using the neighbor-joining algorithm. The method is tested on datasets of BChE sequences from mammals and bacteria, and the trees are compared to those generated by ClustalX.
AN APPROACH FOR IRIS PLANT CLASSIFICATION USING NEURAL NETWORKijsc
Classification is a machine learning technique used to predict group membership for data instances. To simplify the classification problem, neural networks are introduced. This paper focuses on IRIS plant classification using a neural network. The problem concerns the identification of IRIS plant species on the basis of plant attribute measurements. Classification of the IRIS data set involves discovering patterns by examining the petal and sepal sizes of the IRIS plant, and predicting the class of the plant from those patterns. Using this pattern-based classification, unknown data can be predicted more precisely in the future. Artificial neural networks have been successfully applied to problems in pattern classification, function approximation, optimization, and associative memories. In this work, multilayer feed-forward networks are trained using the back-propagation learning algorithm.
Model of Differential Equation for Genetic Algorithm with Neural Network (GAN...Sarvesh Kumar
This work applies differential equations (DE) and the computational techniques of genetic algorithms and neural networks (GANN) in C#. Diagrams and flow charts are presented for easier understanding of these two concepts, together with an indication of their present and future applications, which is the new initiative taken in this paper along with the computational approaches in C#. Observations made during the implementation and operation of the above algorithms in C#, under given boundary-value conditions of the DE, are also reported. Fitness-function evaluation and genetic operations were completed for the behavioural transmission of chromosomes.
Wavelet-based EEG processing for computer-aided seizure detection and epileps...IJERA Editor
Many neurological disorders are very difficult to detect. One such disorder, which we discuss in this paper, is epilepsy. Epilepsy is a sudden change in the behavior of a human being for a short period of time, caused by seizures in the brain. Much research is under way on detecting epilepsy by analyzing the EEG. One such method of epilepsy detection is proposed in this paper. The technique employs the Discrete Wavelet Transform (DWT) for pre-processing, Approximate Entropy (ApEn) to extract features, and an Artificial Neural Network (ANN) for classification. This paper presents a detailed survey of various methods that are being used for epilepsy detection and also proposes a wavelet-based epilepsy detection method.
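Approximate Entropy, the feature used above, measures the irregularity of a signal: regular signals score near zero, irregular ones higher. A minimal sketch of the standard ApEn definition (embedding dimension m, tolerance r; the parameter values below are illustrative, not the paper's):

```python
import math
import random

def apen(x, m, r):
    """Approximate entropy of sequence x with embedding dimension m, tolerance r."""
    def phi(mm):
        n = len(x) - mm + 1
        patterns = [x[i:i + mm] for i in range(n)]
        total = 0.0
        for p in patterns:
            # count patterns within Chebyshev distance r (includes the pattern itself)
            c = sum(1 for q in patterns
                    if max(abs(a - b) for a, b in zip(p, q)) <= r)
            total += math.log(c / n)
        return total / n
    return phi(m) - phi(m + 1)

random.seed(1)
flat_score = apen([0.5] * 40, 2, 0.2)                         # perfectly regular
noisy_score = apen([random.random() for _ in range(120)], 2, 0.2)  # irregular
```

A constant signal yields ApEn of essentially zero, while white noise scores well above it, which is what makes ApEn useful for separating seizure from non-seizure EEG segments.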
This document presents research using artificial neural networks to identify toxic gases in real time. A multi-layer perceptron neural network was trained using data from a multi-sensor system that detected hydrogen sulfide, nitrogen dioxide, and their mixture. Features extracted from the sensor responses were used as inputs to the neural network. The network was trained online using backpropagation and achieved 100% accuracy classifying gases during training and 96.6% accuracy during testing, with low error rates. This model achieved better performance than previous methods and can identify low concentrations of toxic gases in real time, which has applications for air quality monitoring and safety.
NEURAL NETWORK FOR THE RELIABILITY ANALYSIS OF A SERIES - PARALLEL SYSTEM SUB...IAEME Publication
Artificial neural networks can achieve high computation rates by employing a massive number of simple processing elements with a high degree of connectivity between the elements. Neural networks with feedback connections provide a computing model capable of exploiting fine-grained parallelism to solve a rich class of complex problems. In this paper we discuss a complex series-parallel system subjected to finite common-cause and finite human-error failures, and analyze its reliability using a neural network method.
International Journal of Engineering Research and DevelopmentIJERD Editor
The document provides a survey of research on sensor association rules for mining behavioral patterns from wireless sensor network data. Sensor association rules aim to discover temporal relationships between sensor nodes by detecting correlated events. Various approaches are discussed, including techniques for distributed in-network mining, handling data streams, reducing redundancy, and applying association rules to applications like missing data estimation. Overall, the survey finds that sensor association rules are an effective knowledge discovery technique for wireless sensor networks.
This document discusses using artificial neural networks for network intrusion detection. Specifically, it proposes a hybrid classification model that uses entropy-based feature selection to reduce the dataset, followed by four neural network techniques (RBFN, SOM, SMO, PART) for classification. It provides details on each neural network technique and the overall methodology, which uses 10-fold cross validation to evaluate performance based on standard criteria. The goal is to build an efficient intrusion detection system with low false alarms and high detection rates.
IJERA (International journal of Engineering Research and Applications) is International online, ... peer reviewed journal. For more detail or submit your article, please visit www.ijera.com
This document discusses using a genetic algorithm approach to find the shortest driving time on mobile navigation devices. It proposes using constant length chromosomes to encode the route finding problem. The genetic algorithm is tested on networks of different sizes. Mutation is found to contribute greatly to achieving optimal solutions by maintaining genetic diversity. The genetic algorithm approach can find good solutions in reasonable time for large, complex networks that deterministic methods cannot solve efficiently.
Abstract— This paper presents a comprehensible neural network tree (CNNTREE), a proposed general modular neural network structure in which each node is a comprehensible expert neural network (CENN). One advantage of the CNNTREE is that it is a “gray box”: it can easily be interpreted by symbolic systems, since each node in the CNNTREE is equivalent to an operator in a symbolic system. Another advantage is that it can be trained like any normal multilayer feed-forward neural network. An evolutionary algorithm is given for designing the CNNTREE. Back-propagation is also evaluated as a local learning algorithm that fits real-time learning constraints. The generalization and training performance of the tree are examined in experiments with a digit recognition problem.
This document summarizes a research paper that uses an artificial neural network approach to forecast stock market prices in India. The paper trains a feedforward neural network using a backpropagation algorithm on data from 5 Indian companies between 2004 and 2013. The network is tested in MATLAB to predict stock prices and calculate an error rate for accuracy. The neural network model is found to provide a computational method for predicting stock market movements based on historical price and volume data.
AN ANN APPROACH FOR NETWORK INTRUSION DETECTION USING ENTROPY BASED FEATURE S...IJNSA Journal
With the increase in Internet users, the number of malicious users is also growing day by day, posing a serious problem in distinguishing between normal and abnormal behavior of users in the network. This has led to the research area of intrusion detection, which essentially analyzes the network traffic and tries to determine normal and abnormal patterns of behavior. In this paper, we have analyzed the standard NSL-KDD intrusion dataset using some neural network based techniques for predicting possible intrusions. Four most effective classification methods, namely Radial Basis Function Network, Self-Organizing Map, Sequential Minimal Optimization, and Projective Adaptive Resonance Theory, have been applied. In order to enhance the performance of the classifiers, three entropy based feature selection methods have been applied as preprocessing of the data. Performances of different combinations of classifiers and attribute reduction methods have also been compared.
Complexity and Quantum Information ScienceMelanie Swan
This document discusses using quantum information science and quantum computing to model complex systems like the human brain. It proposes the "AdS/Brain Theory of Neural Signaling" which uses wavefunctions, tensor networks, and neural field theories at different scales from brain networks to molecules. Quantum computing could provide a new platform to model the brain across its nine orders of magnitude of complexity and help complete the human connectome by handling the large data and processing requirements. The AdS/Brain theory represents the first application of the AdS/CFT correspondence across multiple scales of the brain.
Behavior study of entropy in a digital image through an iterative algorithmijscmcj
Image segmentation is a critical step in computer vision tasks, constituting an essential issue for pattern recognition and visual interpretation. In this paper, we study the behavior of entropy in digital images through an iterative mean shift filtering algorithm. The order of a digital image in gray levels is defined. The behavior of the Shannon entropy is analyzed and then compared, taking into account the number of iterations of our algorithm, with the maximum entropy that could be achieved under the same order. The use of equivalence classes is introduced, which allows us to interpret entropy as a hyper-surface in real m-dimensional space. The difference between the maximum entropy of order n and the entropy of the image is used to group the iterations, in order to characterize the performance of the algorithm.
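The Shannon entropy of a gray-level image, as used above, comes directly from the normalized gray-level histogram; the maximum entropy for an image of order n (n distinct gray levels) is log2(n). A minimal sketch for 8-bit images:

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy (bits) of the gray-level histogram of an 8-bit image."""
    hist = np.bincount(np.asarray(img, dtype=np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]                      # 0 * log2(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

# a flat image carries no information; an image split evenly between two
# gray levels reaches the maximum entropy for order n = 2: log2(2) = 1 bit
flat = image_entropy(np.zeros((8, 8)))
two_level = image_entropy(np.concatenate([np.zeros(32), np.full(32, 255)]))
```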
Similar to Photo Rendering with Swarms: From Figurative to Abstract Pherogenic Imaging
Removing Uninteresting Bytes in Software FuzzingAftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux tools -- Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI's advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able to lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing is discussed in the talk. ICT and testing must carry their part of global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability, and then measured continuously. Test environments can be used less, at smaller scale, and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on.
Photo Rendering with Swarms: From Figurative to Abstract Pherogenic Imaging
1. Carlos M. Fernandes, Antonio M. Mora, J.J. Merelo
(University of Granada)
C. Cotta
(University of Malaga)
A.C. Rosa
(Technical University of Lisbon)
2013 IEEE Symposium Series on Computational Intelligence
2. - KANTS is a swarm intelligence clustering algorithm.
- It uses stigmergy, communication with and via the environment, as its basic rule.
- Data samples are the swarm. They communicate and self-organize into clusters of similar samples.
3. - Swarm: the data samples.
- Habitat: a grid of cells. Each cell has a vector, randomly initialized, with the same cardinality as the data sample vectors.
- Three rules:
  - R1: move towards regions with more similar vectors.
  - R2: update the cell vector.
  - R3: evaporation. In each time-step, every vector of the grid is again “pulled” towards its initial value.
4. Result: ants/data samples tend to cluster.
Example: Iris data set (quantifies the morphologic variation of Iris flowers of three related species).
Three classes: red: Setosa, green: Versicolor, blue: Virginica.
(Snapshots of the grid at t = 0, 50, 100 and 150.)
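The three rules can be sketched in a toy implementation. Everything below (grid size, the 8-cell neighbourhood, Euclidean similarity, and the parameter values alpha and evap) is an illustrative assumption, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
SIZE, DIM, N_ANTS = 20, 4, 30

grid = rng.random((SIZE, SIZE, DIM))       # habitat: a random vector per cell
init = grid.copy()                         # remembered for evaporation (R3)
ants = rng.random((N_ANTS, DIM))           # the swarm = the data samples
pos = rng.integers(0, SIZE, size=(N_ANTS, 2))

def step(alpha=0.5, evap=0.1):
    for a in range(N_ANTS):
        i, j = pos[a]
        # R1: move to the neighbouring cell whose vector is most similar to the ant
        neigh = [((i + di) % SIZE, (j + dj) % SIZE)
                 for di in (-1, 0, 1) for dj in (-1, 0, 1) if (di, dj) != (0, 0)]
        best = min(neigh, key=lambda c: np.linalg.norm(grid[c] - ants[a]))
        pos[a] = best
        # R2: pull the cell vector towards the ant's data vector
        grid[best] += alpha * (ants[a] - grid[best])
    # R3: evaporation - every cell vector relaxes back towards its initial value
    grid[:] = grid + evap * (init - grid)

for _ in range(100):
    step()
```

After enough iterations, cells repeatedly visited by similar ants hold similar vectors, so similar data samples end up in neighbouring regions of the grid.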
5. Rule 1: move. Rule 2: update vectors. Rule 3: evaporation.
Please note parameters β and δ: they define how the ants move.
The probability of an ant moving from cell i to cell j is

P_{i→j} = w(j) r(j) / Σ_{k∈M} w(k) r(k)

where M is the set of candidate cells.
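The move probability can be evaluated directly once the weight and response terms are known; w and r below are placeholder arrays standing in for those terms (whose exact definitions, involving β and δ, are given in the paper):

```python
import numpy as np

def move_probabilities(w, r):
    """P(i -> j) = w(j) r(j) / sum over k in M of w(k) r(k)."""
    scores = np.asarray(w, dtype=float) * np.asarray(r, dtype=float)
    return scores / scores.sum()

# three candidate cells in the neighbourhood M
p = move_probabilities([1.0, 2.0, 1.0], [0.5, 0.5, 1.0])
# scores are [0.5, 1.0, 1.0], so p = [0.2, 0.4, 0.4]
```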
6. The swarm after 1000 iterations with different values of β and δ.
There is a region of the parameter space in which the system self-organizes.
7. Remember Rule 2: the ants change the values of the habitat vectors.
Visualize the habitat grid:
- One variable: a grey-level image.
- Three variables: RGB or Lab.
- Four variables: a 3-dimensional coloured image?
- More than four variables...
(Diagram: data samples → KANTS grid → translate to RGB.)
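When each cell holds a three-variable vector, turning the habitat into an image is a direct mapping; a minimal sketch, assuming the cell vectors already lie in [0, 1]:

```python
import numpy as np

def grid_to_rgb(grid):
    """Map an (H, W, 3) grid of cell vectors in [0, 1] to an 8-bit RGB image."""
    g = np.clip(np.asarray(grid, dtype=float), 0.0, 1.0)
    return np.round(g * 255).astype(np.uint8)

# a uniform grid maps to a uniform image: 0.25 * 255 = 63.75 -> 64
img = grid_to_rgb(np.full((4, 4, 3), 0.25))
```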
8. Pherographia (drawing with pheromones) is based on an algorithm with the same basic principles as KANTS. The algorithm detects the edges of grayscale images.
9. Carlos M. Fernandes, The Horse and the Ants, 2008.
C.M. Fernandes, "Pherographia: Drawing by Ants", Leonardo 43(2), pp. 107-112, April 2010.
10. - Sleep data is used as input to the system.
- Hjorth parameters describe the sleep electroencephalogram (EEG) with three-variable vectors.
- Translation to RGB is trivial and direct.
- In a way, the images are representations of a person's sleep.
- Each person, and each night of a person's sleep, generates a different image: fingerprints of sleep.
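The three Hjorth parameters (activity, mobility, complexity) are simple variance ratios of a signal and its successive derivatives, which is why each EEG epoch reduces to a three-variable vector. A sketch, assuming a uniformly sampled 1-D signal:

```python
import numpy as np

def hjorth(x):
    """Return the Hjorth activity, mobility and complexity of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    dx = np.diff(x)                    # first derivative (finite differences)
    ddx = np.diff(dx)                  # second derivative
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

# a pure sine is maximally "simple": its complexity is close to 1
t = np.linspace(0.0, 1.0, 1000)
act, mob, comp = hjorth(np.sin(2 * np.pi * 5 * t))
```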
12. Extract the data samples (three-variable RGB vectors) directly from a coloured image and then use these samples as KANTS input.
(Diagram: data samples = list of RGB vectors → KANTS grid.)
13. Winner of the Evolutionary Art and Design Competition (GECCO'12).
Carlos M. Fernandes, Abstracting the Abstract #4 (after Kandinsky), 2012.
14. Carlos M. Fernandes, Abstracting the Abstract #5 (after Miró), 2012.
18. (The same photo rendered with β = 8, β = 16 and β = 32.)
19. (Renderings with r = 10, r = 25, r = 50 and r = 100.)
20. - Devise other forms of representation when the cardinality of the vectors is greater than 3.
- Four variables: maybe 3-dimensional representations.
- Many variables: how to represent them?