3. Contents:
▪ What is a Neural Network?
▪ Why use Neural Networks?
▪ History and evolutions
▪ An engineering approach
▪ Architecture of Neural Networks
NEURAL NETWORKS
4. What is a Neural Network?
▪ An Artificial Neural Network (ANN) is an information processing
paradigm that is inspired by the way biological nervous systems, such
as the brain, process information.
▪ It consists of a large number of highly interconnected neurons that
carry information.
▪ ANNs learn by example from the data they are given.
▪ Ex: pattern recognition or data classification, through a learning
process.
5. ▪ Neural Network: A computational model that works in a similar way to
the neurons in the human brain.
▪ Each neuron takes an input, performs some operations then passes the
output to the following neuron.
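The idea of a neuron taking inputs, performing an operation, and passing its output onward can be sketched in a few lines of Python. This is a minimal illustration, not from the slides; the weights, bias, and choice of sigmoid activation are arbitrary assumptions.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of inputs passed through an activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation squashes z into (0, 1)

# The output of one neuron can then be passed as an input to the next neuron.
out = neuron([0.5, -1.0, 2.0], weights=[0.4, 0.3, 0.9], bias=0.1)
print(round(out, 3))
```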
6. Why use Neural Network?
▪ Neural networks have a remarkable ability to derive meaning from and
detect trends in data that are too complex to be noticed by either
humans or other computer techniques.
▪ A trained neural network can be thought of as an "expert" in the
category of information it has been given to analyse.
▪ Other advantages include:
7. Advantage of Neural Network
▪ Adaptive learning: An ability to learn how to do tasks based on the
data given for training or initial experience.
▪ Self-Organisation: An ANN can create its own organisation or
representation of the information it receives during learning time.
8. History and evolutions
▪ Neural network simulations appear to be a recent development.
However, this field was established before the advent of computers,
and has survived at least one major setback and several eras.
▪ In 1943, neurophysiologist Warren McCulloch and mathematician
Walter Pitts wrote a paper on how neurons might work.
9. ▪ As computers became more advanced in the 1950s, it finally became
possible to simulate a hypothetical neural network. The first step
towards this was made by Nathaniel Rochester of the IBM
research laboratories. Unfortunately for him, the first attempt to do
so failed.
▪ In 1959, Bernard Widrow and Marcian Hoff of Stanford developed
models called "ADALINE" and "MADALINE." MADALINE was the first
neural network applied to a real-world problem, using an adaptive
filter that eliminated echoes on phone lines.
▪ The first multi-layered network, an unsupervised network, was
developed in 1975.
10. An engineering approach:
SIMPLE NEURON:
▪ An artificial neuron is a device with many inputs and one output.
▪ The neuron has two modes of operation: the training mode and the
using mode. In the training mode, the neuron can be trained to fire
(or not) for particular input patterns.
▪ In the using mode, when a taught input pattern is detected at the
input, its associated output becomes the current output.
▪ If the input pattern does not belong in the taught list of input
patterns, the firing rule is used to determine whether to fire or not.
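The two modes described above can be sketched with a simple threshold neuron (a perceptron-style learning rule, used here as one illustrative choice; the patterns, learning rate, and epoch count are assumptions for the example):

```python
def step(x, w, b):
    """Using mode: fire (output 1) if the weighted sum reaches the threshold."""
    return 1 if sum(xi * wi for xi, wi in zip(x, w)) + b >= 0 else 0

def train(patterns, labels, lr=0.1, epochs=20):
    """Training mode: adjust weights so the neuron fires only on the taught patterns."""
    w = [0.0] * len(patterns[0])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(patterns, labels):
            err = target - step(x, w, b)          # did the neuron misfire?
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Teach the neuron to fire only when both inputs are active (an AND-like pattern).
patterns = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train(patterns, labels)
print([step(x, w, b) for x in patterns])
```

After training, a taught input pattern produces its associated output; for other inputs, the same firing rule decides whether the neuron fires.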
13. Feed forward Neural Network
▪ This neural network is one of the simplest forms of ANN, in which the
data or input travels in one direction. The data passes through
the input nodes and exits on the output nodes.
14. Architecture of Neural Networks
NETWORK LAYER:
▪ The most common type of artificial neural network consists of three
groups, or layers, of units:
▪ a layer of "input" units is connected to a layer of "hidden" units,
which in turn is connected to a layer of "output" units.
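The input-hidden-output arrangement above can be sketched as a feed-forward pass, with data travelling in one direction through the layers. All weights and biases here are arbitrary illustrative values:

```python
import math

def layer(inputs, weights, biases):
    """Compute one layer: each unit applies a sigmoid to a weighted sum of all inputs."""
    return [1 / (1 + math.exp(-(sum(x * w for x, w in zip(inputs, row)) + b)))
            for row, b in zip(weights, biases)]

# 2 input units -> 3 hidden units -> 1 output unit.
x = [1.0, 0.5]
hidden = layer(x, weights=[[0.2, -0.4], [0.7, 0.1], [-0.5, 0.3]],
               biases=[0.0, 0.1, 0.2])
output = layer(hidden, weights=[[0.6, -0.3, 0.8]], biases=[-0.1])
print(output)
```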
16. ▪ What is Deep Learning?
▪ Applications of Deep Learning
▪ Challenges in Deep Learning
▪ Advantages of Deep Learning
▪ Disadvantages of Deep Learning
DEEP LEARNING
17. ▪ Deep learning is a branch of machine learning based on
artificial neural network architectures.
▪ An artificial neural network or ANN uses layers of interconnected
nodes called neurons that work together to process and learn from
the input data.
▪ In a fully connected Deep neural network, there is an input layer and
one or more hidden layers connected one after the other.
▪ Each neuron receives input from the previous layer's neurons or from
the input layer.
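A fully connected deep network, with hidden layers connected one after the other, can be sketched in plain Python. The layer sizes, random weights, and ReLU activation here are illustrative assumptions, not values from the slides:

```python
import random

def dense(inputs, weights, biases):
    """Fully connected layer: every neuron receives input from all previous-layer neurons."""
    return [max(0.0, sum(x * w for x, w in zip(inputs, row)) + b)  # ReLU activation
            for row, b in zip(weights, biases)]

random.seed(0)
sizes = [4, 8, 8, 2]  # input layer, two hidden layers, output layer
params = [([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
           [0.0] * n_out)
          for n_in, n_out in zip(sizes, sizes[1:])]

activation = [0.5, -0.2, 0.1, 0.9]   # values at the input layer
for w, b in params:                  # layers connected one after the other
    activation = dense(activation, w, b)
print(activation)                    # output of the final layer
```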
18. ▪ The main applications of deep learning can be divided into:
▪ computer vision
▪ natural language processing (NLP)
▪ reinforcement learning
19. ▪ Data availability: Deep learning requires large amounts of data to learn
from, and gathering enough data for training is a major concern.
▪ Computational resources: Training deep learning models is
computationally expensive and often requires specialized hardware such as
GPUs and TPUs.
▪ Time-consuming: Depending on the computational resources available,
training, especially on sequential data, can take a very long time, even
days or months.
▪ Interpretability: Deep learning models are complex and work like a black
box, making their results very difficult to interpret.
▪ Overfitting: When a model is trained repeatedly, it can become too
specialized to the training data, leading to overfitting and poor
performance on new data.
20. ▪ High accuracy: Deep Learning algorithms can achieve state-of-the-art
performance in various tasks, such as image recognition and natural
language processing.
▪ Automated feature engineering: Deep Learning algorithms can
automatically discover and learn relevant features from data without the
need for manual feature engineering.
▪ Scalability: Deep Learning models can scale to handle large and complex
datasets, and can learn from massive amounts of data.
▪ Flexibility: Deep Learning models can be applied to a wide range of tasks
and can handle various types of data, such as images, text, and speech.
▪ Continual improvement: Deep Learning models can continually improve
their performance as more data becomes available.
21. ▪ High computational requirements: Deep Learning models require
large amounts of data and computational resources to train and
optimize.
▪ Requires large amounts of labeled data: Deep Learning models often
require a large amount of labeled data for training, which can be
expensive and time-consuming to acquire.
▪ Interpretability: Deep Learning models can be challenging to
interpret, making it difficult to understand how they make decisions.