Artificial neural networks (ANNs) are a type of machine learning model inspired by biological neural networks. They follow principles of neuronal organization and learn from examples to make predictions. ANNs have multiple layers, including an input layer, one or more hidden layers, and an output layer. They are often used for applications like image recognition, natural language processing, medical diagnosis, and autonomous vehicles. While ANNs can perform well on large datasets, they also face challenges including overfitting, heavy data and computational requirements, and a lack of transparency.
1. Kathmandu Model Secondary School
Artificial Neural Networking
Date: 14ᵗʰ December 2023
By: Rij Amatya, Pragyan Shrestha, Kritika Silwal
2. INTRODUCTION
Artificial Neural Networking (ANN)
A branch of machine learning models, first proposed in 1944
An information-processing model
A computational model inspired by the human brain's structure and function
Follows principles of neuronal organization
Learns from examples and makes predictions based on what it has learned
Applied in the field of AI
3. Artificial Neural Networks
The core of deep learning
Used for image recognition, NLP, robotics, etc.
Three different layers:
1. Input layer
2. Hidden layer
3. Output layer
Sometimes referred to as an MLP (Multi-Layer Perceptron).
4. The amount of data produced has increased, and "big data" has become a buzzword. Big data made it practical to train ANNs: while classical machine learning algorithms fell short when analyzing big data, artificial neural networks performed well on it.
5. Biological Neural Networks
A biological neuron consists of a cell body containing the nucleus, many branched dendrites, and a long axon. As you can see in the image above, the axon is much longer than the cell body. The axon splits into many branches that connect to the dendrites or cell bodies of other neurons.
Biological neurons generate electrical signals that travel along their axons. If a neuron receives enough stimulation, it fires. In general, this is how biological neurons work. The behavior may seem simple, but connecting billions of neurons in this way yields networks capable of complex processing.
6. Artificial Neurons
Comprise interconnected nodes (neurons) organized in layers
Connected to each other by weighted links
Associated with weights that carry information about the input signal
7. Perceptron
A perceptron is the smallest unit of a neural network. The architecture was developed by Frank Rosenblatt in 1957 and is a simple model of a biological neuron in an artificial neural network.
As you can see, the inputs and the output are numbers, and each input has a weight in this architecture. The weighted inputs are summed first, and then a bias is added. This sum is passed through a step function, which can be, for example, the sign function.
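The perceptron just described (weighted sum, plus bias, through a step function) can be sketched directly. The weight and bias values below are illustrative assumptions, not learned parameters:

```python
import numpy as np

def step(z):
    # Sign-style step function, as suggested in the text.
    return 1 if z >= 0 else -1

def perceptron(x, w, b):
    # Weighted inputs are summed, a bias is added, then the step fires.
    return step(np.dot(w, x) + b)

w = np.array([0.6, -0.4])   # one weight per input (illustrative values)
b = 0.1                     # bias

print(perceptron(np.array([1.0, 1.0]), w, b))  # -> 1  (0.6 - 0.4 + 0.1 = 0.3 >= 0)
```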
8. The perceptron can be used as an algorithm for binary classification. A simple threshold logic unit (TLU) computes a single linear function of its inputs. This simple approach paved the way for developing AI tools like ChatGPT.
The image shows a perceptron with two inputs and three outputs, connected via a dense layer. It can be used for multi-label classification.
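The "two inputs, three outputs via a dense layer" setup from the image can be sketched as a bank of three TLUs sharing the same inputs. The weight matrix and biases here are illustrative assumptions; each row of `W` holds one output unit's weights:

```python
import numpy as np

def dense_tlu(x, W, b):
    # Each output unit computes a linear function of x, thresholded at 0.
    return (W @ x + b >= 0).astype(int)

x = np.array([1.0, 0.5])            # 2 inputs
W = np.array([[ 0.2, -0.6],         # 3 TLUs, each with 2 weights
              [-0.1,  0.4],
              [ 0.5,  0.5]])
b = np.array([0.0, -0.05, -1.0])

print(dense_tlu(x, W, b))  # -> [0 1 0]
```

Because each of the three units fires independently, the output is a multi-label decision rather than a single class.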
9. How to Train a Perceptron
If you learn how a perceptron is trained, you will have a better understanding of how ANNs work. Let's now discuss how to train a perceptron.
First, assign a random weight to each input. The sum of the weighted inputs is found, and then a bias is added to this sum. Note that when one neuron triggers another neuron frequently, the connection between them becomes stronger. Inputs passing through the neurons produce an output, which is a prediction. The actual value is compared with the prediction, the error is calculated, and the weights are updated to make predictions with fewer errors.
The perceptron was a nice approach, but it failed to solve some simple problems like XOR. To overcome the limitations of the perceptron, the multilayer perceptron was developed. Let's dive into multilayer perceptrons.
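The steps above (random weights, predict, compare with the actual value, update) can be sketched with the classic perceptron learning rule. The AND problem, learning rate, and epoch count are illustrative assumptions; AND is chosen because, unlike XOR, it is linearly separable, so a single perceptron can learn it:

```python
import numpy as np

rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1])            # targets: logical AND

w = rng.normal(size=2)                # random initial weights
b = 0.0
lr = 0.1                              # learning rate

for epoch in range(25):
    for xi, target in zip(X, y):
        pred = int(np.dot(w, xi) + b >= 0)   # step activation
        error = target - pred
        w += lr * error * xi                 # strengthen/weaken connections
        b += lr * error

print([int(np.dot(w, xi) + b >= 0) for xi in X])  # -> [0, 0, 0, 1]
```

Each wrong prediction nudges the weights toward the correct answer; on linearly separable data this loop is guaranteed to converge.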
10. Multi-Layer Perceptron
A multilayer perceptron consists of an input layer, a hidden layer, and an output layer. As you can see in the image above, there is a hidden layer. If there is more than one hidden layer, the network is called a deep neural network; this is where deep learning comes into play. Deep learning became popular with the development of modern AI architectures.
11. In short, the inputs go through the network and a prediction is made. But how do we improve that prediction? This is where the backpropagation algorithm comes in. It takes the network's output error and propagates that error backward through the network so the weights can be updated. This cycle continues until the best prediction is obtained.
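The cycle just described (forward pass, output error, error propagated backward, weight update, repeat) can be sketched on the XOR problem that defeated the single perceptron. The network size, learning rate, and iteration count are illustrative assumptions; the gradients follow from the sigmoid activation and a mean-squared-error loss:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)    # hidden layer: 4 neurons
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)    # output layer: 1 neuron
lr = 1.0

for step in range(5000):
    # Forward pass: inputs -> hidden -> output
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error toward the input
    d_out = (out - y) * out * (1 - out)           # error signal at the output
    d_h = (d_out @ W2.T) * h * (1 - h)            # error signal at the hidden layer

    # Update the weights against the gradient
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print((out > 0.5).astype(int).ravel())  # predictions after training (typically [0 1 1 0])
```

The same forward/backward cycle, scaled up, is how modern deep networks are trained.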
12. A P P L I C A T I O N S
Natural Language Processing (NLP): ANNs power language translation, sentiment analysis, and chatbots.
Medical Diagnosis: ANNs analyze medical images for disease detection, assisting diagnostics with techniques like computer-aided diagnosis.
Financial Forecasting: ANNs predict stock prices and market trends and assess financial risk by analyzing historical data.
Autonomous Vehicles: ANNs contribute to the development of self-driving cars by enabling object detection, lane keeping, and decision-making.
Image and Speech Recognition: ANNs are used for image and speech recognition, enabling applications like facial recognition and voice assistants.
13. C H A L L E N G E S  A N D  L I M I T A T I O N S
CHALLENGES
ANNs often require large labeled datasets for training, and obtaining such data can be challenging.
Training complex neural networks may demand significant computational power.
Neural networks can overfit the training data, capturing noise and hindering generalization to new data.
Issues of bias, fairness, and accountability arise, especially in sensitive applications or high-stakes decision-making.
LIMITATIONS
ANNs require large amounts of labeled data for effective training, and performance may suffer with insufficient or biased datasets.
Training deep networks can be computationally demanding.
Models may memorize training-data noise rather than learning general patterns.
Neural networks are often perceived as black-box models, making it challenging to interpret their decision-making processes.