This document outlines topics on error backpropagation training algorithms, Kohonen self-organizing maps, and Hopfield neural networks. It then lists several applications of artificial neural networks, including statistical pattern recognition, control of robotics and industrial processes, automatic synthesis of digital systems, adaptive telecommunications, image compression, radar classification, optimization problems, sentence understanding, and applying expertise to conceptual domains.
The document describes self-organized maps and includes two case studies on their applications. It outlines topics on self-organized maps including applications, architectures, and algorithms. It then describes two case studies, one on land use classification using ASTER satellite data and another on classification of Antarctic satellite imagery. The document concludes by providing references for more information on self-organized maps and neural networks.
The document discusses key concepts in Internet of Things (IoT) design including:
1) Defining IoT as physical objects connected to the internet via sensors and controllers.
2) The importance of usability (UI/UX design) and designing for both physical appearance and logical functionality.
3) Approaches like "calm technology" that engage users' peripheral attention in a subtle rather than an obtrusive way.
The document discusses object-oriented analysis and design concepts. It introduces key concepts like objects, classes, encapsulation, inheritance etc. It then describes Object Modeling Technique (OMT), which is an object-oriented modeling methodology developed in 1991. OMT consists of three models - object model, dynamic model and functional model. It also discusses Unified Modeling Language (UML) conceptual model, including building blocks like things, relationships and diagrams. It describes different structural things, behavioral things and grouping things in UML. Finally, it covers various relationship types in UML like dependency, association, generalization etc.
A Reference Architecture for the Internet of Things, by WSO2
This document discusses WSO2 products and capabilities. It mentions several WSO2 products and services for API management, integration, and identity and access management. It concludes by inviting the reader to contact them for more information on how WSO2 products can address their needs.
As new technologies emerge, they give rise to immersive and seamless interactions between devices and systems. These interactions in turn enable new use cases, which have brought about many disruptions and innovations in the last couple of years. The Internet of Things (IoT) has given a new outlook on how systems are developed, integrated, and delivered.
4. Internet of Things - Reference Model and Architecture, by Jitendra Tomar
Architecture Reference Model - Introduction, Reference Model and Architecture, IoT Reference Model, Functional View, Information View, Deployment and Operational View, Real-World Design Constraints - Introduction, Technical Design Constraints, Data Representation and Visualization
In machine learning, a convolutional neural network is a class of deep, feed-forward artificial neural networks that have successfully been applied for analyzing visual imagery.
Here is the table with the characteristics of the given access technologies:

| Access Technology | Wired/Wireless | Frequency Band | Topology | Range | Data Rate |
|---|---|---|---|---|---|
| IEEE 802.15.4 | Wireless | 2.4 GHz ISM band | Star, Mesh | 10-100 m | 20-250 kbps |
| IEEE 802.15.4g | Wireless | Sub-1 GHz ISM bands | Star, Mesh | 100-1000 m | 20-250 kbps |
| IEEE 1901.2a | Wired | Narrowband powerline (below 500 kHz) | Star | Within building | Up to 500 kbps |
| IEEE 802.11ah | Wireless | Sub-1 GHz ISM bands | Star | Up to 1 km | 150 kbps-347 Mbps |
This presentation covers:
What is IoT (Internet Of Things) ?
Brief History of IoT
IoT Architecture & Perspective
IoT Applications
IoT Challenges and Solutions
IoT future
Artificial Intelligence Game Search by Examples, by Ahmed Gad
An example-driven description of how two game search strategies in artificial intelligence work.
The two strategies presented are:
Minimax.
Alpha-Beta Pruning.
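The two strategies above can be illustrated with a short sketch; the nested-list game tree below is a hypothetical example, not one taken from the slides:

```python
import math

def minimax(node, maximizing, alpha=-math.inf, beta=math.inf):
    """Minimax with alpha-beta pruning over a nested-list game tree.
    Leaves are utilities; internal nodes are lists of children."""
    if not isinstance(node, list):          # leaf node
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, minimax(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:               # beta cutoff: MIN will avoid this branch
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, minimax(child, True, alpha, beta))
        beta = min(beta, value)
        if beta <= alpha:                   # alpha cutoff: MAX will avoid this branch
            break
    return value

# A 3-ply example tree; the root is a MAX node.
tree = [[[3, 5], [6, 9]], [[1, 2], [0, -1]]]
print(minimax(tree, True))   # 5
```

Without the two cutoff checks this is plain minimax; pruning returns the same value while skipping subtrees that cannot affect the result.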
Find me on:
AFCIT
http://www.afcit.xyz
YouTube
https://www.youtube.com/channel/UCuewOYbBXH5gwhfOrQOZOdw
Google Plus
https://plus.google.com/u/0/+AhmedGadIT
SlideShare
https://www.slideshare.net/AhmedGadFCIT
LinkedIn
https://www.linkedin.com/in/ahmedfgad/
ResearchGate
https://www.researchgate.net/profile/Ahmed_Gad13
Academia
https://www.academia.edu/
Google Scholar
https://scholar.google.com.eg/citations?user=r07tjocAAAAJ&hl=en
Mendeley
https://www.mendeley.com/profiles/ahmed-gad12/
ORCID
https://orcid.org/0000-0003-1978-8574
Stack Overflow
http://stackoverflow.com/users/5426539/ahmed-gad
Twitter
https://twitter.com/ahmedfgad
Facebook
https://www.facebook.com/ahmed.f.gadd
Pinterest
https://www.pinterest.com/ahmedfgad/
The fifth lecture in the Machine Learning course lecture series. It covers a short history, the basic types, and the most important principles of neural networks. Practicals that I designed for this course, in both R and Python, are on my GitHub (https://github.com/skyfallen/MachineLearningPracticals). I can share the Keynote files; contact me via e-mail: dmytro.fishman@ut.ee.
Image classification using convolutional neural network, by KIRAN R
This classifier can be used to separate images from a large collection or a large dataset. A deep neural network is used for training and classifying the images; the convolutional neural network is the most suitable algorithm for classifying images. The classifier is a machine learning model, so the more you train it, the higher its accuracy.
IRJET - Air Pollution Prediction using Machine Learning, by IRJET Journal
This document describes a study that uses machine learning algorithms to predict air pollution levels in Pune, India. Specifically, it uses a multilayer perceptron neural network model to more accurately predict pollution levels compared to traditional linear regression. The study collects pollution data from 2000-2018 on pollutants like SO2, NO2, CO, PM10 and Ozone from the Central Pollution Control Board. It then preprocesses the data to handle missing values before training the multilayer perceptron model. The trained model is presented through a mobile app to provide accurate short-term air pollution predictions and help address Pune's significant air quality issues.
This document summarizes a dissertation submitted for the degree of Bachelor of Technology in Computer Science and Engineering. The dissertation analyzes sentiment of mobile reviews using supervised learning methods like Naive Bayes, Bag of Words, and Support Vector Machine. Five students conducted the research under the guidance of an internal guide. The document includes sections on introduction, literature survey of models used, system analysis and design including software and hardware requirements, implementation details, testing strategies and results. Screenshots of the three supervised learning methods are also provided.
The document discusses image classification using deep learning techniques. It introduces image classification and its goal to assign labels to images based on their content. It then discusses using the Anaconda platform and TensorFlow library for building neural networks to perform image classification in Python. Convolutional neural networks are proposed as an effective method, involving steps like convolution, pooling and fully connected layers to classify images. A demonstration of the technique and future applications like computer vision are also mentioned.
A fast-paced introduction to Deep Learning concepts, such as activation functions, cost functions, back propagation, and then a quick dive into CNNs. Basic knowledge of vectors, matrices, and derivatives is helpful in order to derive the maximum benefit from this session.
The Internet of Things (IoT) has grown rapidly in recent decades, with various applications emerging from academia and industry. IoT is an exciting future for the Internet, but challenges remain, since humans have never dealt with so many devices and such large amounts of data. Machine Learning (ML) is the technique that allows computers to learn from data without being explicitly programmed. Generally, the aim is to make predictions after learning: the process builds a model from the given (training) data and then makes predictions based on that model. Machine learning is closely related to artificial intelligence, pattern recognition, and computational statistics, and has a strong relationship with mathematical optimization. In this talk, we focus on ML applications to IoT. Specifically, we focus on existing ML techniques that are suitable for IoT, and we also consider the issues and challenges in solving IoT problems using ML techniques.
This document provides an introduction to computer vision and discusses several key concepts. It describes common computer vision applications such as image recognition, object detection, image segmentation, video analysis, style transfer, and generating new images. It then explains how deep learning and neural networks are used for image classification. The document outlines the process of feature extraction using convolutional neural networks, which involve filtering images with convolution kernels to extract visual features like lines, colors and textures, detecting those features with ReLU, and condensing the images with maximum pooling. It discusses concepts like convolution and pooling windows, strides, and padding used in this process.
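The convolution, ReLU, and max-pooling steps described above can be sketched in plain NumPy (a minimal single-channel example; the image and kernel values are assumptions for illustration):

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D convolution (cross-correlation, as in most deep learning libraries)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0)                  # keep only positive responses

def max_pool(x, size=2):
    """Non-overlapping max pooling with a size x size window."""
    h, w = x.shape
    h, w = h - h % size, w - w % size        # trim so dimensions divide evenly
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[-1.0, 1.0]])             # crude horizontal-gradient detector
features = max_pool(relu(conv2d(image, kernel)))
print(features.shape)                        # (3, 2)
```

A real CNN stacks many such filter/activation/pooling stages and learns the kernel values; here the single kernel is fixed by hand.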
Node embedding techniques learn vector representations of nodes in a graph that can be used for downstream machine learning tasks like classification, clustering, and link prediction. DeepWalk uses random walks to generate sequences of nodes that are treated similarly to sentences, and learns embeddings by predicting nodes using their neighbors, like word2vec. It does not incorporate node features or labels. Node2vec extends DeepWalk by introducing a biased random walk to learn embeddings, addressing some limitations of DeepWalk while maintaining scalability.
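The random-walk generation at the heart of DeepWalk can be sketched as follows (the toy graph is a made-up example; the word2vec-style training step that consumes the walks is omitted):

```python
import random

def random_walks(adj, walk_length=5, walks_per_node=2, seed=0):
    """Generate truncated random walks, the 'sentences' DeepWalk feeds to word2vec.
    `adj` maps each node to a list of its neighbours."""
    rng = random.Random(seed)
    walks = []
    for start in adj:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_length:
                neighbours = adj[walk[-1]]
                if not neighbours:
                    break                    # dead end: stop the walk early
                walk.append(rng.choice(neighbours))
            walks.append(walk)
    return walks

# Tiny toy graph: a path 0 - 1 - 2 - 3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
walks = random_walks(adj)
# Each walk is a node sequence treated like a sentence by the embedding trainer.
```

Node2vec's change would be inside `rng.choice`: the next node is sampled with probabilities biased by the return and in-out parameters rather than uniformly.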
This document provides an overview of an Internet of Things workshop that teaches participants how to connect sensors and actuators to microcontrollers and the internet. The workshop covers getting started with hardware like Arduino boards, measuring sensor values and controlling actuators, connecting devices to the internet using WiFi and Ethernet, and using cloud services like Xively to monitor sensors and control devices remotely. Hands-on activities include blinking an LED, reading a pushbutton switch, and sending sensor data to Xively to be displayed on a data dashboard.
This document discusses machine learning and provides examples of common machine learning algorithms. It begins with definitions of machine learning and the machine learning process. It then describes four main types of machine learning: supervised learning, unsupervised learning, reinforcement learning, and discusses five common algorithms - K-nearest neighbors, linear regression, decision trees, naive Bayes, and support vector machines. It concludes with an overview of a heart disease prediction mini-project using Python.
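Of the five algorithms listed, k-nearest neighbors is the simplest to sketch; the 2-D training data below is hypothetical:

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points.
    `train` is a list of (features, label) pairs."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical 2-D data: two well-separated clusters.
train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
print(knn_predict(train, (0.5, 0.5)))   # "a"
print(knn_predict(train, (5.5, 5.5)))   # "b"
```

There is no training phase at all: the "model" is the stored data, which is why KNN is usually the first algorithm taught.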
Data Science - Part XVII - Deep Learning & Image Processing, by Derek Kane
This lecture provides an overview of Image Processing and Deep Learning for the applications of data science and machine learning. We will go through examples of image processing techniques using a couple of different R packages. Afterwards, we will shift our focus and dive into the topics of Deep Neural Networks and Deep Learning. We will discuss topics including Deep Boltzmann Machines, Deep Belief Networks, & Convolutional Neural Networks and finish the presentation with a practical exercise in hand writing recognition technique.
https://imatge.upc.edu/web/publications/visual-question-answering-20
This bachelor's thesis explores different deep learning techniques to solve the Visual Question-Answering (VQA) task, whose aim is to answer questions about images. We study different Convolutional Neural Networks (CNNs) to extract the visual representation from images: Kernelized-CNN (KCNN), VGG-16 and Residual Networks (ResNet). We also analyze the impact of using pre-computed word embeddings trained on large datasets (GloVe embeddings). Moreover, we examine different techniques for joining representations from different modalities. This work was submitted to the second edition of the Visual Question Answering Challenge and obtained 43.48% accuracy.
Unit 2 (Advanced Class Modeling & State Diagram), by Manoj Reddy
This document discusses state modeling concepts in UML including states, transitions, events, and state diagrams. It provides examples of state diagrams for a phone and traffic lights. States represent conditions an object can be in, such as idle or running. Transitions are changes between states triggered by events like receiving a call. State diagrams visually depict the flow between states.
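The phone example can be expressed as a minimal transition table in Python (the state and event names here are assumed for illustration, not taken from the document):

```python
# Transition table: (current state, event) -> next state.
TRANSITIONS = {
    ("idle", "incoming_call"): "ringing",
    ("ringing", "answer"): "connected",
    ("ringing", "caller_hangs_up"): "idle",
    ("connected", "hang_up"): "idle",
}

def fire(state, event):
    """Apply one event; events with no transition leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["incoming_call", "answer", "hang_up"]:
    state = fire(state, event)
print(state)   # "idle"
```

Each key in the table corresponds to one arrow in the state diagram, which makes the diagram and the code easy to check against each other.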
Developed a project with three colleagues for pneumonia detection from chest X-ray images using a convolutional neural network. Used a confusion matrix, recall, and precision to check the model's performance on the test data.
Brief and overall introduction to Artificial Neural Network (ANN).
- history of ANN
- learning technique (backpropagation)
- generations of neural nets, from the 1st to the 3rd
Introduction of Artificial Neural Network, by Nagarajan
The document summarizes different types of artificial neural networks including their structure, learning paradigms, and learning rules. It discusses artificial neural networks (ANN), their advantages, and major learning paradigms - supervised, unsupervised, and reinforcement learning. It also explains different mathematical synaptic modification rules like backpropagation of error, correlative Hebbian, and temporally-asymmetric Hebbian learning rules. Specific learning rules discussed include the delta rule, the pattern associator, and the Hebb rule.
This document provides an overview of artificial neural networks (ANN). It discusses the origin of ANNs from biological neural networks. It describes different ANN architectures like multilayer perceptrons and different learning methods like backpropagation. It also outlines some challenging problems that ANNs can help with, such as pattern recognition, clustering, and optimization. The summary states that while the paper gives a good overview of ANNs, more development is needed to show ANNs are better than other methods for most problems.
- The document introduces artificial neural networks, which aim to mimic the structure and functions of the human brain.
- It describes the basic components of artificial neurons and how they are modeled after biological neurons. It also explains different types of neural network architectures.
- The document discusses supervised and unsupervised learning in neural networks. It provides details on the backpropagation algorithm, a commonly used method for training multilayer feedforward neural networks using gradient descent.
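The backpropagation-with-gradient-descent procedure described above can be sketched for a tiny 2-4-1 network on the XOR problem (a standard toy example; the layer sizes and learning rate are arbitrary choices, not values from the document):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)     # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)     # hidden -> output

def mse():
    return float(np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2))

initial_error = mse()
lr = 1.0
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: chain rule through the squared error and both sigmoids
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

final_error = mse()
```

The two `d_*` lines are the whole of backpropagation here: each layer's error is the next layer's error pushed back through the weights and the activation's derivative.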
This document discusses various applications of neural networks, including pattern recognition, autonomous vehicles, medicine, sports prediction, and virus detection. Some key applications mentioned are using neural networks for patient diagnosis, detecting coronary artery disease from medical images, predicting sports outcomes based on team statistics, and forecasting space weather events. The document also notes some limitations of neural networks, such as requiring large datasets and not providing explanations for decisions.
Introduction to Artificial Neural Network, by Qingkai Kong
These are the slides I created for the workshop at Berkeley D-Lab - Introduction to Artificial Neural Networks (ANN). They cover the basics of ANNs, intuitive examples, and a Python implementation of an ANN. You can find the rest of the materials (notebooks) at https://github.com/qingkaikong/20161202_ANN_basics.
This document provides an introduction to artificial neural networks. It discusses how neural networks can mimic the brain's ability to learn from large amounts of data. The document outlines the basic components of a neural network including neurons, layers, and weights. It also reviews the history of neural networks and some common modern applications. Examples are provided to demonstrate how neural networks can learn basic logic functions through adjusting weights. The concepts of forward and backward propagation are introduced for training neural networks on classification problems. Optimization techniques like gradient descent are discussed for updating weights to minimize error. Exercises are included to help understand implementing neural networks for regression and classification tasks.
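A single neuron learning a basic logic function by adjusting its weights, as described above, can be sketched with the classic perceptron rule (the AND gate is an illustrative choice; the learning rate and epoch count are arbitrary):

```python
# A single perceptron learning the AND gate by iteratively adjusting its weights.
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]

w = [0.0, 0.0]
bias = 0.0
lr = 0.1

for _ in range(100):                      # far more epochs than AND actually needs
    for (x1, x2), t in zip(inputs, targets):
        out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        err = t - out                     # error signal drives the weight change
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        bias += lr * err

print([1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0 for x1, x2 in inputs])   # [0, 0, 0, 1]
```

The forward/backward propagation described in the document generalizes this idea: the same error-driven weight update, applied layer by layer through the chain rule.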
Neural networks are computing systems inspired by biological neural networks in the brain. They are composed of interconnected artificial neurons that process information using a connectionist approach. Neural networks can be used for applications like pattern recognition, classification, prediction, and filtering. They have the ability to learn from and recognize patterns in data, allowing them to perform complex tasks. Some examples of neural network applications discussed include face recognition, handwritten digit recognition, fingerprint recognition, medical diagnosis, and more.
Neural Network Classification and its Applications in Insurance Industry, by Inderjeet Singh
This document summarizes the use of neural networks for classification tasks. It discusses the advantages and disadvantages of neural networks for classification. It also presents a case study on using a neural network to classify insurance customers as likely to renew or terminate their policies based on attributes like age and zip code. The neural network achieved higher accuracy than decision trees and regression analysis on the insurance data set.
The document discusses artificial neural networks. It describes their basic structure and components, including dendrites that receive input signals, a soma that processes the inputs, and an axon that transmits output signals. It also explains how neurons are connected at synapses to transfer signals between neurons. Finally, it mentions different types of activation functions that can be used in neural networks.
Self-Organising Maps for Customer Segmentation using R - Shane Lynn - Dublin R, by shanelynn
Self-Organising maps for Customer Segmentation using R.
These slides are from a talk given to the Dublin R Users group on 20th January 2014. The slides describe the uses of customer segmentation, the algorithm behind Self-Organising Maps (SOMs) and go through two use cases, with example code in R.
Accompanying code and datasets now available at http://shanelynn.ie/index.php/self-organising-maps-for-customer-segmentation-using-r/.
Kohonen self-organizing maps (SOMs) are a type of neural network that performs unsupervised learning to produce a low-dimensional representation of input patterns. SOMs were developed in the 1980s by Professor Teuvo Kohonen and work by mapping multi-dimensional input onto a two-dimensional grid. The algorithm finds groups in the data by finding similarities between input vectors and the weight vectors in the nodes. It adjusts the weights to better match the input through competitive learning, without supervision. SOMs have been used for applications like document organization, poverty classification, and text-to-speech.
The document provides an overview of self-organizing maps (SOM). It defines SOM as an unsupervised learning technique that reduces the dimensions of data through the use of self-organizing neural networks. SOM is based on competitive learning where the closest neural network unit to the input vector (the best matching unit or BMU) is identified and adjusted along with neighboring units. The algorithm involves initializing weight vectors, presenting input vectors, identifying the BMU, and updating weights of the BMU and neighboring units. SOM can be used for applications like dimensionality reduction, clustering, and visualization.
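The loop described above (initialize weights, present an input, find the BMU, pull it and its grid neighbours toward the input with a decaying learning rate and neighbourhood radius) can be sketched in a few lines of NumPy. This is a minimal illustration, not a reference implementation; the function name train_som and all hyperparameter defaults are our own choices:

```python
import numpy as np

def train_som(data, grid_w=5, grid_h=5, n_iter=1000, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal SOM: find the best matching unit (BMU) for each presented
    input and pull it, and its grid neighbours, toward the input, with
    learning rate and neighbourhood radius shrinking over time."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((grid_h, grid_w, dim))   # one weight vector per node
    # (row, col) coordinate of every node, used for grid-neighbourhood distances
    coords = np.dstack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                                   indexing="ij"))
    for t in range(n_iter):
        x = data[rng.integers(len(data))]         # present a random input vector
        dists = np.linalg.norm(weights - x, axis=2)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)  # closest node wins
        lr = lr0 * np.exp(-t / n_iter)            # decaying learning rate
        sigma = sigma0 * np.exp(-t / n_iter)      # decaying neighbourhood radius
        grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=2)
        h = np.exp(-grid_d2 / (2 * sigma ** 2))   # Gaussian neighbourhood function
        weights += lr * h[:, :, None] * (x - weights)  # pull BMU and neighbours
    return weights
```

After training, nearby grid nodes end up with similar weight vectors, which is the topology-preserving property that makes SOMs useful for visualization and clustering.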
This document summarizes artificial neural networks. It discusses how neural networks are composed of interconnected neurons that can learn complex behaviors through simple principles. Neural networks can be used for applications like pattern recognition, noise reduction, and prediction. The key components of neural networks are neurons, synapses, weights, thresholds, and activation functions. Neural networks offer advantages like adaptability and fault tolerance, though they are not exact and can be complex. Examples of neural network applications discussed include object trajectory learning, radiosity for virtual reality, speechreading, target detection and tracking, and robotics.
How to bring innovation to your organization by streamlining the deployment process?
IaaS, PaaS, and Docker containers are all valid methods that can be tailored to your needs. They each come with advantages and drawbacks, and are pitted against each other daily by vendors and providers. Should we really impose a single standard on every team?
exoscale at the CloudStack User Group London - June 26th 2014 (Antoine COETSIER)
The document provides an overview of exoscale, a cloud computing company based in Switzerland. It summarizes that exoscale offers open cloud computing, including compute instances, object storage, and platform services to deploy applications easily. It also notes that exoscale's datacenters are located in Geneva and offer a tier 3+ infrastructure with ISO certifications for quality and security. Pricing is provided on an hourly basis for compute instances and monthly for storage.
Cloud Computing Security Frameworks - our view from exoscale (Antoine COETSIER)
With this short 15-minute presentation, given at the EPFL engineering school in Lausanne in May 2014, we touched on the concepts and recommendations for choosing and benchmarking cloud providers with respect to security.
The Cloud Security Alliance framework is at the moment the best matrix for such an evaluation, as it covers the full service offered by a provider and not only one aspect like the datacenter or helpdesk.
Antoine Coetsier - CEO at Exoscale
An introductory presentation about the current state of personalization in (Web) search for Bibliotekarforbundet's series of 'gå-hjem-møder'. Presented on May 17, 2016 at Aalborg University Copenhagen.
This document discusses self-organizing neural networks, including Kohonen networks and Adaptive Resonance Theory (ART). Kohonen networks use competitive learning to form topological mappings between input and output layers. Neighboring units respond to similar inputs, and learning updates weights of both the winning unit and its neighbors. ART networks learn stable recognition codes in response to input sequences and address the stability-plasticity dilemma by resetting matches that fail a vigilance test.
The document discusses neural networks, generative adversarial networks, and image-to-image translation. It begins by explaining how neural networks learn through forward propagation, calculating loss, and using the loss to update weights via backpropagation. Generative adversarial networks are introduced as a game between a generator and discriminator, where the generator tries to fool the discriminator and vice versa. Image-to-image translation uses conditional GANs to translate images from one domain to another, such as maps to aerial photos.
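The generator/discriminator game can be made concrete with a deliberately tiny example: a two-parameter generator a*z + b tries to mimic samples from N(3, 1), against a logistic discriminator, each updated with hand-derived gradients of the standard GAN objectives. Everything here (the toy distributions, the parameterization, the learning rate) is our own illustrative choice, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

a, b = 0.5, 0.0    # generator G(z) = a*z + b, tries to mimic N(3, 1)
w, c = 1.0, 0.0    # discriminator D(x) = sigmoid(w*x + c)
lr = 0.05

for step in range(2000):
    x_real = rng.normal(3.0, 1.0, 32)      # real samples
    z = rng.normal(0.0, 1.0, 32)           # generator noise
    x_fake = a * z + b
    # Discriminator ascends E[log D(real)] + E[log(1 - D(fake))]
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))
    # Generator ascends E[log D(fake)] (the non-saturating objective),
    # i.e. it tries to fool the freshly updated discriminator
    d_fake = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)
```

After the loop, the generator's offset b should have drifted toward the real mean (around 3): the discriminator's gradient tells the generator in which direction "real-looking" samples lie.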
Implementation of Back-Propagation Neural Network using Scilab and its Conver... (IJEEE)
Artificial neural networks have been widely used for solving non-linear, complex tasks. With the development of computer technology, machine learning techniques have become a good choice, and the selection of a machine learning technique depends upon its viability for the particular application. Most non-linear problems have been solved using back-propagation-based neural networks. The training time of a neural network is directly affected by its convergence speed, and several efforts have been made to improve the convergence speed of the back propagation algorithm. This paper focuses on the implementation of the back-propagation algorithm and an effort to improve its convergence speed. The algorithm is written in SCILAB, and a UCI standard data set is used for analysis. The proposed modification to the standard backpropagation algorithm provides a substantial improvement in convergence speed.
The document discusses artificial neural networks (ANNs) and summarizes key information about soft computing techniques, ANNs, and some specific ANN models including perceptrons, ADALINE, and MADALINE. It defines soft computing as a collection of computational techniques including neural networks, fuzzy logic, and evolutionary computing. ANNs are modeled after the human brain and consist of interconnected neurons that can learn from examples. Perceptrons, ADALINE, and MADALINE are early ANN models that use different learning rules to update weights and biases.
The document discusses artificial neural networks (ANNs) and summarizes key information about ANNs and related topics. It defines soft computing as a field that aims to build intelligent machines using techniques like ANNs, fuzzy logic, and evolutionary computing. ANNs are modeled after biological neural networks and consist of interconnected nodes that can learn from data. Early ANN models like the perceptron, ADALINE, and MADALINE are described along with their learning rules and architectures. Applications of ANNs in various domains are also listed.
The document discusses soft computing and artificial neural networks. It provides an overview of soft computing techniques including artificial neural networks (ANNs), fuzzy logic, and evolutionary computing. It then focuses on ANNs, describing their biological inspiration from neurons in the brain. The basic components of ANNs are discussed including network architecture, learning algorithms, and activation functions. Specific ANN models are then summarized, such as the perceptron, ADALINE, and their learning rules. Applications of ANNs are also briefly mentioned.
The document discusses artificial neural networks and backpropagation. It provides an overview of backpropagation algorithms, including how they were developed over time, the basic methodology of propagating errors backwards, and typical network architectures. It also gives examples of applying backpropagation to problems like robotics, space robots, handwritten digit recognition, and face recognition.
This document discusses backpropagation neural networks. It begins with an introduction to backpropagation and gradient descent optimization. It then describes the architecture of a backpropagation network, including input, hidden, and output layers connected by weights. The training algorithm is explained in detail, including feedforward calculation, backpropagation of error, weight/bias updates, and activation functions. It concludes with discussions of initializing weights randomly or with the Nguyen-Widrow method and a graph showing error reduction over iterations.
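As a sketch of the Nguyen-Widrow initialization mentioned above: weights are first drawn uniformly at random, then each hidden neuron's weight vector is rescaled to the magnitude beta = 0.7 * p**(1/n) for p hidden neurons and n inputs, with biases spread over [-beta, beta]. A minimal NumPy version (the function name is our own):

```python
import numpy as np

def nguyen_widrow_init(n_inputs, n_hidden, seed=0):
    """Nguyen-Widrow initialization for one hidden layer, a common
    alternative to purely random weights for speeding up backprop
    convergence."""
    rng = np.random.default_rng(seed)
    beta = 0.7 * n_hidden ** (1.0 / n_inputs)    # target weight-vector magnitude
    w = rng.uniform(-0.5, 0.5, (n_hidden, n_inputs))
    norms = np.linalg.norm(w, axis=1, keepdims=True)
    w = beta * w / norms                         # each row now has norm beta
    b = rng.uniform(-beta, beta, n_hidden)       # biases spread over [-beta, beta]
    return w, b
```

The rescaling spreads the neurons' active regions across the input space instead of leaving them clustered near the origin, which is why it tends to reduce the number of training iterations.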
Adaptive modified backpropagation algorithm based on differential errors (IJCSEA Journal)
A new, efficient modified back propagation algorithm with an adaptive learning rate is proposed to increase convergence speed and minimize error. The method eliminates the initial fixing of the learning rate through trial and error, replacing it with an adaptive learning rate. In each iteration, the adaptive learning rates for the output and hidden layers are determined by calculating the differential linear and nonlinear errors of the output layer and hidden layer separately, so each layer has a different learning rate in each iteration. The performance of the proposed algorithm is verified by simulation results.
Digital Implementation of Artificial Neural Network for Function Approximatio... (IOSR Journals)
Abstract: Soft computing algorithms are nowadays used for various multi-input multi-output, complicated, non-linear control applications. This paper presents the development and implementation of back propagation for a multilayer perceptron architecture on an FPGA using VHDL, for example for a neural-network-based instrument prototype in a real-time application. Using an FPGA (Field Programmable Gate Array) for neural network implementation provides flexibility in programmable systems. Conventional application-specific VLSI neural chip designs suffer limitations in time and cost; with a low-precision artificial neural network design, FPGAs offer higher speed and smaller size for real-time applications than VLSI designs. The challenge is finding an architecture that minimizes hardware cost while maximizing performance and accuracy. The goal of this work is to realize a hardware implementation of a neural network on an FPGA. The digital system architecture is described in Very High Speed Integrated Circuit Hardware Description Language (VHDL) and implemented on an FPGA chip. MATLAB ANN programming and tools are used for training the ANN; the trained weights are stored in RAM and implemented in the FPGA. The design was tested on an FPGA demo board. Keywords: backpropagation, field programmable gate array (FPGA) hardware implementation, multilayer perceptron, pressure sensor, Xilinx FPGA.
This document discusses the process of backpropagation in neural networks. It begins with an example of forward propagation through a neural network with an input, hidden and output layer. It then introduces backpropagation, which uses the calculation of errors at the output to calculate gradients and update weights in order to minimize the overall error. The key steps are outlined, including calculating the error derivatives, weight updates proportional to the local gradient, and backpropagating error signals from the output through the hidden layers. Formulas for calculating each step of backpropagation are provided.
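The steps just outlined (forward pass, error derivatives at the output, backpropagated error signals through the hidden layer, weight updates proportional to the local gradient) can be condensed into a small NumPy script. The two-layer tanh network and the XOR training set are our own illustrative choices, not the document's example:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)   # inputs
d = np.array([[0], [1], [1], [0]], float)               # XOR targets
V = rng.normal(0, 1, (2, 4))   # input -> hidden weights
W = rng.normal(0, 1, (4, 1))   # hidden -> output weights
eta = 0.1                      # learning rate

def forward(X):
    y = np.tanh(X @ V)         # hidden activations
    o = np.tanh(y @ W)         # network outputs
    return y, o

losses = []
for _ in range(2000):
    y, o = forward(X)
    losses.append(0.5 * np.sum((d - o) ** 2))   # squared error at the output
    delta_o = (d - o) * (1 - o ** 2)            # output-layer error signal
    delta_y = (delta_o @ W.T) * (1 - y ** 2)    # signal backpropagated to hidden layer
    W += eta * y.T @ delta_o   # updates proportional to the local gradient
    V += eta * X.T @ delta_y
```

The loss recorded in `losses` should fall over the iterations as gradient descent adjusts both weight matrices.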
New Approach of Preprocessing For Numeral Recognition (IJERA Editor)
The present paper proposes a new approach of preprocessing for handwritten, printed, and isolated numeral characters. The new approach reduces the size of the input image of each numeral by discarding redundant information. This method also reduces the number of features of the attribute vector produced by the feature extraction method. Numeral recognition is carried out in this work through k-nearest-neighbors and multilayer perceptron techniques. The simulations obtained a good recognition rate in less running time.
Application of Artificial Neural Networking for Determining the Plane of Vibr... (IOSRJMCE)
In this paper a new approach for artificial neural networking using the feed-forward back propagation method and the Levenberg-Marquardt backpropagation training function has been developed in Java, whereby directly feeding in the RMS and phase values of vibration allows the unbalance plane to be detected with minimum error. In a Machine Fault Simulator, RMS and phase values of vibration are collected from four accelerometers placed in the X and Y directions of the left and right bearings. These data are then fed into the neural network for training. In the testing phase, the plane of vibration has been determined using different training algorithms available in MATLAB. Their predictions have been compared with the actual values, the errors for the different training algorithms calculated, and a conclusion drawn about the best training function for this research work.
Machine learning allows computers to learn from data without being explicitly programmed. There are two main types of machine learning: supervised learning, where data points have known outcomes used to train a model to predict unknown outcomes, and unsupervised learning, where data points have unknown outcomes and the model finds hidden patterns in the data. Machine learning algorithms build a mathematical model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task.
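The two settings can be contrasted on the same toy dataset: with known outcomes we fit a nearest-centroid classifier (supervised), and after discarding the labels, k-means recovers the same hidden grouping (unsupervised). All data and parameter choices below are illustrative, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated blobs: class 0 near (0, 0), class 1 near (5, 5).
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Supervised: the known outcomes y train a nearest-centroid classifier.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
def predict(p):
    return int(np.argmin(np.linalg.norm(centroids - p, axis=1)))

# Unsupervised: ignore y entirely; k-means finds the hidden grouping.
centers = X[[0, -1]].astype(float)   # two data points as initial centers
for _ in range(10):
    assign = np.argmin(((X[:, None] - centers) ** 2).sum(axis=2), axis=1)
    centers = np.array([X[assign == k].mean(axis=0) for k in (0, 1)])
```

The supervised model can now predict labels for new points, while the unsupervised `centers` end up near the two blob means without ever seeing `y`.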
The document discusses building and training artificial neural networks from scratch. It describes multi-level feedforward neural networks with an input layer, hidden layers, and an output layer. Nodes between layers are fully connected. Training involves calculating gradients using the chain rule and updating weights proportionally via methods like stochastic gradient descent to minimize prediction error on the training data. Programming assignments will use neural networks to solve problems in parallel and distributed systems.
Le Song, Assistant Professor, College of Computing, Georgia Institute of Tech... (MLconf)
Understanding Deep Learning for Big Data: The complexity and scale of big data impose tremendous challenges for their analysis. Yet big data also offer us great opportunities. Some nonlinear phenomena, features, or relations, which are not clear or cannot be inferred reliably from small and medium data, now become clear and can be learned robustly from big data. Typically, the form of the nonlinearity is unknown to us and needs to be learned from data as well. Being able to harness the nonlinear structures in big data could allow us to tackle problems that were impossible before, or to obtain results far better than the previous state of the art.
Nowadays, deep neural networks are the methods of choice when it comes to large-scale nonlinear learning problems. What makes deep neural networks work? Is there any general principle for tackling high-dimensional nonlinear problems that we can learn from deep neural networks? Can we design competitive or better alternatives based on such knowledge? To make progress on these questions, my machine learning group performed both theoretical and experimental analysis on existing and new deep learning architectures, investigating three crucial aspects: the usefulness of the fully connected layers, the advantage of the feature learning process, and the importance of the compositional structures. Our results point to some promising directions for future research and provide guidelines for building new deep learning models.
Flow Trajectory Approach for Human Action Recognition (IRJET Journal)
This document proposes a method for human action recognition in videos using scale-invariant feature transform (SIFT) and flow trajectory analysis. The key steps are:
1. Extract SIFT features from each video frame to detect keypoints.
2. Track the keypoints across frames and calculate the magnitude and direction of motion for each keypoint.
3. Analyze the tracked keypoints and their motion parameters to recognize the human action, such as walking, running, etc. occurring in the video.
The document discusses neural networks and their ability to perform non-linear classification. It describes how neural networks can learn complex patterns in data through multiple layers of nonlinear transformations. The key algorithms covered are the forward pass to perform inference and the backward pass using backpropagation for learning network weights. Backpropagation efficiently computes gradients through the network to optimize weights with gradient descent. The document provides examples of network architecture, activation functions, loss functions, and the mathematical details of backpropagation for multi-layer neural networks.
The document outlines effective research strategies presented by Professor Sanjay Shitole, including choosing an aligned topic of interest, conducting a thorough literature review to identify gaps, formulating a clear research question, using reliable methods for data collection and analysis, maintaining academic integrity through proper citations, submitting for peer review and incorporating feedback, collaborating with other researchers, and upholding high ethical standards.
The document discusses the benefits of undergraduate students writing research papers. It argues that research papers promote personal and academic growth by enhancing critical thinking and analytical skills. Writing research papers also improves students' chances of admission to further education programs and employment opportunities by demonstrating their research and problem-solving abilities. Overall, the document claims that writing research papers provides both short-term improvements in academic performance and long-term advantages for educational and career pursuits.
Professor Sanjay Shitole gave a presentation on understanding intellectual property rights. He began by defining intellectual property and listing the main types: copyright, patents, trademarks, and trade secrets. He described how each type protects creative works, inventions, brands, and confidential information. The presentation explained that intellectual property rights are important as they encourage innovation, provide legal protections for creators and inventors, and foster economic growth. It also discussed balancing these rights with public interests and the need for intellectual property education.
This document outlines topics related to machine learning predictive analytics including artificial neural networks and penalized linear regression. It provides an introduction to key machine learning concepts like what machine learning is, common algorithms like random forests and logistic regression, and performance measures for regression models. It also discusses choosing between linear and nonlinear models, working with training and test datasets, and an example of regression modeling to predict an outcome like spending based on attributes. The document aims to cover essential topics for understanding machine learning applications.
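As a concrete instance of penalized linear regression and of a regression performance measure: ridge (L2-penalized) regression has the closed form w = (X^T X + lam*I)^(-1) X^T y, and RMSE on a held-out test set measures predictive accuracy. A small NumPy sketch with synthetic data (the coefficients, penalty, and noise level are our own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (200, 3))
# Synthetic outcome with known coefficients (2, -1, 0) plus noise.
y = X @ np.array([2.0, -1.0, 0.0]) + rng.normal(0, 0.1, 200)

# Ridge regression closed form: w = (X^T X + lam*I)^(-1) X^T y
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

# Train/test split and RMSE as the regression performance measure.
X_tr, X_te, y_tr, y_te = X[:150], X[150:], y[:150], y[150:]
w_tr = np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(3), X_tr.T @ y_tr)
rmse = np.sqrt(np.mean((X_te @ w_tr - y_te) ** 2))
```

The penalty lam trades a little bias (coefficients shrink toward zero) for lower variance, which is the usual reason to prefer a penalized model over plain least squares on noisy or correlated attributes.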
The document discusses using Python as a programming framework for Internet of Things (IoT) applications. It describes Micropython, an implementation of Python optimized for microcontrollers. Case studies presented include using an ESP32 microcontroller to build a remote controlled robot and a smart thermostat. The document advocates that Python is well-suited for rapid prototyping of IoT solutions due to its large library of modules, simple syntax, and ability to port code across different hardware platforms.
The document discusses image processing techniques used in remote sensing. It describes techniques such as image corrections including radiometric, geometric and atmospheric correction. It also discusses image transformations, enhancement and sharpening techniques as well as image smoothing. The document is a presentation given by Dr. Sanjay Shitole on these topics and references several sources for further information.
The document discusses modern trends in engineering, science and technology that will impact the future. It identifies several major trends such as cloud computing, high performance computing, big data analytics, machine learning, and others. Examples of companies leveraging these trends are provided. The document emphasizes that to be ready for the future, one needs to have the necessary skill set including skills in domains like mathematics, statistics, programming and databases. It also notes that the technological shift will make current practices redundant and that stability is important for long term survival.
Xfig is a vector graphics editor used to create diagrams. Vector images store graphics as collections of lines and vectors, allowing the images to be scaled to any size without losing quality. In contrast, bitmap images store graphics as collections of dots at fixed resolutions. Vector editors are generally better for page layout, logos, illustrations with sharp edges, technical diagrams, and flowcharts, as they can be easily modified. Bitmap editors are more suitable for photo editing and realistic illustrations.
Slides in this presentation are based on various talks attended at IIT Bombay. Also papers related to scientific writing published by IEEE are used in this study.
Embedded machine learning-based road conditions and driving behavior monitoring (IJECEIAES)
Car accident rates have increased in recent years, resulting in losses in human lives, properties, and other financial costs. An embedded machine learning-based system is developed to address this critical issue. The system can monitor road conditions, detect driving patterns, and identify aggressive driving behaviors. The system is based on neural networks trained on a comprehensive dataset of driving events, driving styles, and road conditions. The system effectively detects potential risks and helps mitigate the frequency and impact of accidents. The primary goal is to ensure the safety of drivers and vehicles. Collecting data involved gathering information on three key road events: normal street and normal drive, speed bumps, circular yellow speed bumps, and three aggressive driving actions: sudden start, sudden stop, and sudden entry. The gathered data is processed and analyzed using a machine learning system designed for limited power and memory devices. The developed system resulted in 91.9% accuracy, 93.6% precision, and 92% recall. The achieved inference time on an Arduino Nano 33 BLE Sense with a 32-bit CPU running at 64 MHz is 34 ms and requires 2.6 kB peak RAM and 139.9 kB program flash memory, making it suitable for resource-constrained embedded systems.
Understanding Inductive Bias in Machine Learning (SUTEJAS)
This presentation explores the concept of inductive bias in machine learning. It explains how algorithms come with built-in assumptions and preferences that guide the learning process. You'll learn about the different types of inductive bias and how they can impact the performance and generalizability of machine learning models.
The presentation also covers the positive and negative aspects of inductive bias, along with strategies for mitigating potential drawbacks. We'll explore examples of how bias manifests in algorithms like neural networks and decision trees.
By understanding inductive bias, you can gain valuable insights into how machine learning models work and make informed decisions when building and deploying them.
A review on techniques and modelling methodologies used for checking electrom... (nooriasukmaningtyas)
The proper functioning of the integrated circuit (IC) in an inhibiting electromagnetic environment has always been a serious concern throughout the decades of revolution in the world of electronics, from discrete devices to today's integrated circuit technology, where billions of transistors are combined on a single chip. The automotive industry, and smart vehicles in particular, are confronting design issues such as being prone to electromagnetic interference (EMI). Electronic control devices calculate incorrect outputs because of EMI, and sensors give misleading values, which can prove fatal in automotives. In this paper, the authors have tried to review, non-exhaustively, research work concerned with the investigation of EMI in ICs and the prediction of this EMI using various modelling methodologies and measurement setups.
Advanced control scheme of doubly fed induction generator for wind turbine us... (IJECEIAES)
This paper describes a speed control device for generating electrical energy on an electricity network based on the doubly fed induction generator (DFIG) used for wind power conversion systems. At first, a double-fed induction generator model was constructed. A control law is formulated to govern the flow of energy between the stator of a DFIG and the energy network using three types of controllers: proportional integral (PI), sliding mode controller (SMC) and second order sliding mode controller (SOSMC). Their different results in terms of power reference tracking, reaction to unexpected speed fluctuations, sensitivity to perturbations, and resilience against machine parameter alterations are compared. MATLAB/Simulink was used to conduct the simulations for the preceding study. Multiple simulations have shown very satisfying results, and the investigations demonstrate the efficacy and power-enhancing capabilities of the suggested control system.
6th International Conference on Machine Learning & Applications (CMLA 2024) (ClaraZara1)
The 6th International Conference on Machine Learning & Applications (CMLA 2024) will provide an excellent international forum for sharing knowledge and results in the theory, methodology, and applications of Machine Learning.
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressions (Victor Morales)
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
Low power architecture of logic gates using adiabatic techniques (nooriasukmaningtyas)
The growing significance of portable systems, and the need to limit power consumption in very-high-density ultra-large-scale-integration chips, have recently led to rapid and inventive progress in low-power design. The most effective technique for energy-efficient hardware is adiabatic logic circuit design. This paper presents two adiabatic approaches for the design of low power circuits: modified positive feedback adiabatic logic (modified PFAL) and direct current diode based positive feedback adiabatic logic (DC-DB PFAL). Logic gates are the preliminary components in any digital circuit design; by improving the performance of the basic gates, one can improve whole-system performance. This paper presents proposed low-power designs of the OR/NOR, AND/NAND, and XOR/XNOR gates using the said approaches, and their results are analyzed for power dissipation, delay, power-delay product, and rise time, and compared with other adiabatic techniques as well as conventional complementary metal oxide semiconductor (CMOS) designs reported in the literature. It has been found that the designs with the DC-DB PFAL technique outperform the modified PFAL technique at 10 MHz, with improvements of 65% for the NOR gate, 7% for the NAND gate, and 34% for the XNOR gate.
International Conference on NLP, Artificial Intelligence, Machine Learning an... (gerogepatton)
International Conference on NLP, Artificial Intelligence, Machine Learning and Applications (NLAIM 2024) offers a premier global platform for exchanging insights and findings in the theory, methodology, and applications of NLP, Artificial Intelligence, Machine Learning, and their applications. The conference seeks substantial contributions across all key domains of NLP, Artificial Intelligence, Machine Learning, and their practical applications, aiming to foster both theoretical advancements and real-world implementations. With a focus on facilitating collaboration between researchers and practitioners from academia and industry, the conference serves as a nexus for sharing the latest developments in the field.
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA), RCA pavement has received fewer comprehensive studies and sustainability assessments.
We have compiled the most important slides from each speaker's presentation. This year’s compilation, available for free, captures the key insights and contributions shared during the DfMAy 2024 conference.
1. Outline
Error Backpropagation Training Algorithm
Kohonen Self Organizing Map
Hopfield Neural Network
Applications of ANN
Sanjay Shitole
Department of Information Technology
Usha Mittal Institute of Technology for Women
SNDT Women’s University, Santacruz(w), Mumbai.
14 Oct 2011
Sanjay Shitole Applications of ANN
Outline of Topics
1 Error Backpropagation Training Algorithm
2 Kohonen Self Organizing Map
   Applications
   Devanagari Character Recognition
3 Hopfield Neural Network
   Applications
   Architecture
   Mathematical Foundation and algorithm
   Deriving weight matrix
   Storage Capacity
   Case Study
Figure: Layered feedforward network: input nodes zi (augmented with a fixed −1 dummy input), hidden-layer weights vji feeding the j-th column of neurons with outputs yj (also augmented with a fixed −1 dummy neuron), and output-layer weights wkj producing the outputs o1, . . . , oK.
Detection of Lung Cancer
Introduction
Current Medical Techniques
Figure: Block Diagram
10. Outline
Error Backpropagation Training Algorithm
Kohonen Self Organizing Map
Hopfield Neural Network
Given are P training pairs {(z1, d1), (z2, d2), . . . , (zP, dP)}, where z denotes an input vector and d the corresponding desired output vector; zi is (I × 1), di is (K × 1), and i = 1, 2, . . . , P. Note that the Ith component of each zi is of value −1, since the input vectors have been augmented. The size J − 1 of the hidden layer having outputs y is selected. Note that the Jth component of y is of value −1, since the hidden layer outputs have also been augmented; y is (J × 1) and o is (K × 1).
The learning constant η > 0 and error bound Emax are chosen; this value of η is used in the weight-adjustment equations below.
Weights W and V are initialized at small random values; W is (K × J), V is (J × I).
q ← 1, p ← 1, E ← 0
The training step starts here. The input is presented and the layers' outputs computed:
z ← zp, d ← dp
yj ← f(vjᵗ z), for j = 1, 2, . . . , J
where vj, a column vector, is the jth row of V, and
ok ← f(wkᵗ y), for k = 1, 2, . . . , K
where wk, a column vector, is the kth row of W.
The error value is computed:
E ← (1/2)(dk − ok)² + E, for k = 1, 2, . . . , K
The error signal vectors δo and δy of both layers are computed; δo is (K × 1) and δy is (J × 1).
The error signal terms of the output layer in this step are
δok = (1/2)(dk − ok)(1 − ok²), for k = 1, 2, . . . , K
The error signal terms of the hidden layer in this step are
δyj = (1/2)(1 − yj²) Σ_{k=1}^{K} δok wkj, for j = 1, 2, . . . , J
Output layer weights are adjusted:
wkj ← wkj + η δok yj, for k = 1, 2, . . . , K and j = 1, 2, . . . , J
Hidden layer weights are adjusted:
vji ← vji + η δyj zi, for j = 1, 2, . . . , J and i = 1, 2, . . . , I
If p < P, then p ← p + 1, q ← q + 1, and go to step 2; otherwise, go to step 8.
The training cycle is completed. For E < Emax, terminate the training session; output weights W, V, q, and E.
If E > Emax, then E ← 0, p ← 1, and initiate the new training cycle by going to step 2.
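The training procedure in the steps above can be sketched in NumPy. This is a minimal illustration, not code from the slides: the function name train_ebp, the bipolar sigmoid f(net) = 2/(1 + e^−net) − 1 (whose derivative (1 − f²)/2 supplies the 1/2 factors in the delta terms), and all hyperparameter values are assumptions for the sketch.

```python
import numpy as np

def f(net):
    # Bipolar sigmoid: f'(net) = (1 - f(net)^2) / 2, matching the 1/2
    # factors appearing in the delta terms above.
    return 2.0 / (1.0 + np.exp(-net)) - 1.0

def train_ebp(Z, D, J, eta=0.5, E_max=0.01, max_cycles=10000, seed=0):
    """Z: (P, I) augmented inputs (last component -1); D: (P, K) targets.
    J counts the hidden outputs including the augmented -1 entry."""
    rng = np.random.default_rng(seed)
    P, I = Z.shape
    K = D.shape[1]
    V = rng.uniform(-0.1, 0.1, (J - 1, I))   # hidden weights, row j = vj^t
    W = rng.uniform(-0.1, 0.1, (K, J))       # output weights, row k = wk^t
    E = 0.0
    for q in range(max_cycles):
        E = 0.0                              # reset error each cycle
        for p in range(P):
            z, d = Z[p], D[p]
            y = np.append(f(V @ z), -1.0)    # hidden outputs, augmented
            o = f(W @ y)                     # network outputs
            E += 0.5 * np.sum((d - o) ** 2)  # accumulate cycle error
            delta_o = 0.5 * (d - o) * (1 - o ** 2)
            delta_y = 0.5 * (1 - y[:-1] ** 2) * (W[:, :-1].T @ delta_o)
            W += eta * np.outer(delta_o, y)  # output layer adjustment
            V += eta * np.outer(delta_y, z)  # hidden layer adjustment
        if E < E_max:                        # terminate when E < Emax
            break
    return W, V, E
```

On a small bipolar task such as a logical AND of two augmented inputs, the cycle error E typically drops below Emax within a few thousand cycles under these assumptions.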
Devanagari Character Recognition
Applications
Statistical pattern recognition, especially recognition of speech.
Control of robot arms, and other problems in robotics.
Control of industrial processes, especially diffusion processes in the production of semiconductor substrates.
Automatic synthesis of digital systems.
Adaptive devices for various telecommunications tasks.
Image compression.
Radar classification of sea-ice.
Optimization problems.
Sentence understanding.
Application of expertise in conceptual domains.
Classification of insect courtship songs.
Figure: SOM Grid
SOM Algorithm
Initialize the weights Wij (1 ≤ i ≤ 64, 1 ≤ j ≤ m) to small random values, where m is the total number of nodes in the map. Set the initial radius of the neighborhood around node j as Nj(t).
Present inputs X1(t), X2(t), X3(t), . . . , X64(t).
Calculate the distance dj between the inputs and node j by
dj = Σ_{i=1}^{64} (Xi(t) − Wij(t))²
Determine j∗ which minimizes dj.
Update the weights for j∗ and its neighbors in Nj∗(t); the new weights for j in Nj∗(t) are
Wij(t + 1) = Wij(t) + α(t)(Xi(t) − Wij(t))
where α(t) and Nj∗(t) are controlled so as to decrease in t.
If the process reaches the maximum number of iterations, stop; otherwise go to step 2.
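The steps above can be sketched as follows. This is an illustrative NumPy version assuming a 1-D chain of m nodes and linearly decaying α(t) and neighborhood radius; the function name train_som and the decay schedules are assumptions, and the input dimension is taken from the data rather than fixed at 64.

```python
import numpy as np

def train_som(X, m, n_iter=1000, alpha0=0.5, radius0=None, seed=0):
    """Kohonen SOM on a 1-D chain of m nodes.
    X: (P, dim) input vectors. Returns the (m, dim) weight matrix."""
    rng = np.random.default_rng(seed)
    P, dim = X.shape
    W = rng.uniform(0, 0.1, (m, dim))       # step 1: small random weights
    radius0 = radius0 if radius0 is not None else m / 2
    for t in range(n_iter):
        x = X[rng.integers(P)]              # step 2: present an input
        d = np.sum((x - W) ** 2, axis=1)    # step 3: distances d_j
        j_star = int(np.argmin(d))          # step 4: winning node j*
        alpha = alpha0 * (1 - t / n_iter)           # alpha(t) decreases in t
        radius = max(1.0, radius0 * (1 - t / n_iter))  # neighborhood shrinks
        for j in range(m):
            if abs(j - j_star) <= radius:   # step 5: update j* and neighbors
                W[j] += alpha * (x - W[j])
    return W
```

Trained on two well-separated clusters, different map nodes end up winning for each cluster, which is the behavior the algorithm is designed to produce.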
Content-addressable memory, which involves the recall of a stored pattern by presenting a partial or distorted version of it to the memory.
Combinatorial optimization problems: this class of optimization problems includes the traveling salesman problem.
Figure: Hopfield Network
The total input neti of the ith neuron is
neti = Σ_{j=1, j≠i}^{n} wij vj + ii − Ti, for i = 1, 2, . . . , n
The external input to the ith neuron has been denoted here as ii. Introducing vector notation for the synaptic weights and neuron outputs, the total input neti of the ith neuron can be written as
neti = wiᵗ v + ii − Ti, for i = 1, 2, . . . , n
The complete matrix description of the linear portion of the system shown in the figure is given by net = Wv + i − t, where
net ≜ [net1 net2 · · · netn]ᵗ, i ≜ [i1 i2 · · · in]ᵗ, t ≜ [T1 T2 · · · Tn]ᵗ
Matrix W, sometimes called the connectivity matrix, is an n × n matrix containing the network weights, arranged in rows equal to wjᵗ as defined; it is equal to
W = [w1ᵗ w2ᵗ · · · wnᵗ]ᵗ =
0    w12  w13  · · ·  w1n
w21  0    w23  · · ·  w2n
· · ·
wn1  wn2  · · ·  wn,n−1  0
The response, or update rule, of the ith neuron excited as in net = Wv + i − t is
vi → −1 if neti < 0
vi → +1 if neti > 0
For a discrete-time recurrent network we obtain the following update rule:
vi^{k+1} = sgn(wiᵗ v^{k} + ii − Ti), for i = 1, 2, . . . , n and k = 0, 1, . . .
where k denotes the index of the recursive update.
Energy
The scalar-valued energy function for the discussed system is a quadratic form and has the matrix form
E ≜ −(1/2) vᵗ W v − iᵗ v + tᵗ v
Let us study the changes of the energy function for the system which is allowed to update. Assume that output node i has been updated at the kth instant, so that vi^{k+1} − vi^{k} = ∆vi. Since only a single neuron computes, the scheme is one of asynchronous updates. Let us determine the related energy increment in this case. Computing the energy gradient vector,
∇E = −(1/2)(Wᵗ + W) v − i + t
which reduces, for a symmetrical matrix W (Wᵗ = W), to the form
∇E = −Wv − i + t
The energy increment becomes
∆E = (∇E)ᵗ ∆v
since only the ith output is updated.
∆v ≜ [0 · · · ∆vi · · · 0]ᵗ
and the energy increment reduces to the form
∆E = (−wiᵗ v − ii + Ti) ∆vi
This can be rewritten as
∆E = −( Σ_{j=1, j≠i}^{n} wij vj + ii − Ti ) ∆vi = −neti ∆vi
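Combining the energy increment ∆E = −neti ∆vi with the update rule makes the stability argument explicit: each asynchronous update can only lower the energy.

```latex
\Delta E = -\,\mathrm{net}_i \,\Delta v_i,
\qquad
\begin{cases}
\mathrm{net}_i > 0 \;\Rightarrow\; v_i \to +1 \;\Rightarrow\; \Delta v_i \ge 0,\\
\mathrm{net}_i < 0 \;\Rightarrow\; v_i \to -1 \;\Rightarrow\; \Delta v_i \le 0,
\end{cases}
\quad\Rightarrow\quad \Delta E \le 0.
```

Since E is bounded below, the asynchronous updates must terminate in a state where no update changes any vi, i.e., a local energy minimum.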
To train the Hopfield network, say for x = [0, 1, 0, 1]:
Step 1: Convert [0, 1, 0, 1] to bipolar. This results in x1 = [−1, 1, −1, 1].
Step 2: Calculate the transpose of x1, say y1.
Step 3: Multiply x1 and y1.
Step 4: Replace the diagonal elements by 0.
The final weight matrix is
 0 −1  1 −1
−1  0 −1  1
 1 −1  0 −1
−1  1 −1  0
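The four steps can be checked directly with a short NumPy sketch (variable names are illustrative):

```python
import numpy as np

x = np.array([0, 1, 0, 1])
x1 = 2 * x - 1                 # Step 1: bipolar form [-1, 1, -1, 1]
W = np.outer(x1, x1)           # Steps 2-3: x1 multiplied by its transpose
np.fill_diagonal(W, 0)         # Step 4: zero the diagonal
print(W)                       # matches the weight matrix above
```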
To train the Hopfield network on a larger number of patterns, the weight matrices created for each pattern are added together to obtain the final weight matrix.
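A sketch of this multi-pattern rule, together with asynchronous recall using the sgn update rule, is below. The function names are illustrative, and zero external input and threshold (ii = 0, Ti = 0) are assumed for the demonstration.

```python
import numpy as np

def hopfield_train(patterns):
    """Sum the outer products of the bipolar patterns; zero the diagonal."""
    n = len(patterns[0])
    W = np.zeros((n, n))
    for x in patterns:
        W += np.outer(x, x)
    np.fill_diagonal(W, 0)
    return W

def hopfield_recall(W, v, n_sweeps=10):
    """Asynchronous updates v_i <- sgn(w_i^t v) until a stable state."""
    v = np.asarray(v, dtype=float).copy()
    for _ in range(n_sweeps):
        changed = False
        for i in range(len(v)):
            net = W[i] @ v                  # ii = 0, Ti = 0 assumed here
            new = 1.0 if net > 0 else (-1.0 if net < 0 else v[i])
            if new != v[i]:
                v[i], changed = new, True
        if not changed:                     # stable state reached
            break
    return v

patterns = [np.array([-1, 1, -1, 1, -1, 1]),
            np.array([1, 1, 1, -1, -1, -1])]
W = hopfield_train(patterns)
noisy = np.array([-1, 1, -1, 1, -1, -1])    # first pattern, one bit flipped
print(hopfield_recall(W, noisy))            # recovers the first pattern
```

This exhibits the content-addressable behavior mentioned earlier: a distorted version of a stored pattern settles into the stored pattern itself.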
1. Storage capacity scales linearly with the size N of the network.
2. Storage capacity must be maintained small for the fundamental memories to be recoverable.
Mmax = N / (2 loge N)
Figure: Number of recoverable fundamental memories versus network size N, with and without errors.
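The bound can be tabulated for a few network sizes (reading loge as the natural logarithm):

```python
import math

def m_max(N):
    # Upper bound on the number of fundamental memories that can be
    # stored in an N-neuron Hopfield network and recalled without error.
    return N / (2 * math.log(N))

for N in (200, 400, 600, 800, 1000):
    print(N, round(m_max(N), 1))
```

Even a 1000-neuron network supports only on the order of 70 error-free fundamental memories under this bound, which is why the capacity must be kept small relative to N.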
Thank you
Doubts???
email: shitoless@rediffmail.com