Neural network
1. Deep Learning - Artificial Neural Networks Muhammad Aleem Siddiqui
2. What is an Artificial Neural Network?
An artificial neural network is a mathematical function
that maps a given input to a desired output.
It consists of the following:
1. An input layer, x
2. An arbitrary amount of hidden layers
3. An output layer, ŷ
4. A set of weights and biases between each layer, W and b
5. A choice of activation function for each hidden layer, σ,
such as the sigmoid function or ReLU (Rectified Linear Unit); a code sketch of these pieces follows below
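These five pieces can be written down in a few lines of NumPy. The following is a minimal sketch, not code from the slides; the layer sizes and the names x, W1, b1, W2, b2 are chosen purely for illustration.

import numpy as np

n_input, n_hidden, n_output = 2, 3, 1       # arbitrary layer sizes
x  = np.random.rand(n_input)                # 1. the input layer, x
W1 = np.random.rand(n_input, n_hidden)      # 4. weights between input and hidden layer
b1 = np.zeros(n_hidden)                     #    biases for the hidden layer
W2 = np.random.rand(n_hidden, n_output)     # 4. weights between hidden and output layer
b2 = np.zeros(n_output)                     #    biases for the output layer
def sigma(z): return 1 / (1 + np.exp(-z))   # 5. activation function (sigmoid)
y_hat = sigma(sigma(x.dot(W1) + b1).dot(W2) + b2)  # 3. the output layer, ŷ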
3. What is an Activation Function?
In artificial neural networks, the activation
function of a node defines the output of that
node given an input or set of inputs.
A standard computer chip circuit can be seen
as a digital network of activation functions that
can be "ON" or "OFF", depending on the input.
4. Training The Neural Network
The output ŷ of a simple 2-layer Neural Network is
ŷ = σ(W₂ · σ(W₁ · x + b₁) + b₂)
The right values for the weights and biases determine
the strength of the predictions. The process of fine-tuning
the weights and biases from the input data is known as
training the Neural Network.
Each iteration of the training process consists of the
following steps:
1. Calculating the predicted output ŷ, known as
feedforward
2. Updating the weights and biases, known as
backpropagation
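In outline, one training iteration is just these two steps in a loop. The following is a toy sketch on a single sigmoid neuron (the data and sizes are hypothetical, chosen only to show the loop structure); the full 2-layer example appears on slide 8.

import numpy as np

X = np.array([[0.0], [1.0]])   # toy inputs, for illustration only
Y = np.array([[0.0], [1.0]])   # toy targets
W = np.random.rand(1, 1)       # a single weight

def sigmoid(x): return 1 / (1 + np.exp(-x))

for i in range(1000):
    Y_Hat = sigmoid(X.dot(W))                  # step 1: feedforward
    delta = (Y - Y_Hat) * Y_Hat * (1 - Y_Hat)  # error scaled by the sigmoid slope
    W += X.T.dot(delta)                        # step 2: backpropagation update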
5. Feed Forward
In a feedforward network, information travels only
forward: first through the input nodes, then through
the hidden nodes, and finally through the output
nodes.
Calculating the predicted output ŷ is known as
feedforward.
2 Layered Feedforward Artificial Neural Network
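For the 2-layer network above, the feedforward pass is just two matrix products, each followed by the activation function. A minimal sketch in NumPy, with random placeholder weights (biases omitted for brevity, as in the code on slide 8):

import numpy as np

def sigmoid(z): return 1 / (1 + np.exp(-z))

x  = np.array([0.0, 1.0])      # an input vector
W1 = np.random.rand(2, 3)      # input -> hidden weights (placeholder values)
W2 = np.random.rand(3, 1)      # hidden -> output weights (placeholder values)

h     = sigmoid(x.dot(W1))     # hidden node activations
y_hat = sigmoid(h.dot(W2))     # the predicted output ŷ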
6. Loss Function
There are many available loss functions, and the
nature of the problem should determine which loss
function is used.
One of the most commonly used is the Sum-of-Squares Error:
SSE = Σ (y − ŷ)²
It is the sum of the squared differences between each
predicted value and the actual value. The difference is
squared so that the error is always a positive quantity,
whether the prediction falls above or below the actual value.
Our goal in training is to find the best set of weights
and biases that minimizes the loss function.
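As a sketch, the Sum-of-Squares Error can be computed in one line of NumPy (the function name and the example values here are illustrative):

import numpy as np

def sum_of_squares_error(y, y_hat):
    return np.sum((y - y_hat) ** 2)     # squared differences, summed over all examples

y     = np.array([0, 1, 1, 0])          # actual values
y_hat = np.array([0.1, 0.9, 0.8, 0.2])  # predicted values
print(sum_of_squares_error(y, y_hat))   # ≈ 0.1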
7. Back Propagation
Back-propagation is a technique for training a neural
network. It is the method of fine-tuning the weights of
the network based on the error rate.
The method calculates the gradient of the loss
function with respect to every weight in the
network.
Once the gradients are computed, the weights and
biases can be updated by moving each one a small step
against its gradient. This is known as gradient descent.
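The gradient descent update itself is a single rule: move each weight a small step against its gradient. A sketch, where the learning rate and the gradient values are placeholders:

import numpy as np

learning_rate = 0.1         # step size, a hyperparameter chosen here for illustration
W  = np.random.rand(2, 3)   # some weights
dW = np.random.rand(2, 3)   # gradient of the loss with respect to W (placeholder values)

W = W - learning_rate * dW  # step against the gradient to reduce the loss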
8. 2 Layered Neural Network for Predicting Exclusive-OR
import numpy as np

epochs = 60000  # number of training iterations
inputLayerSize, hiddenLayerSize, outputLayerSize = 2, 3, 1

X = np.array([[0,0], [0,1], [1,0], [1,1]])  # XOR inputs
Y = np.array([[0], [1], [1], [0]])          # XOR targets

def sigmoid(x): return 1 / (1 + np.exp(-x))  # activation function
def sigmoidPrime(s): return s * (1 - s)      # derivative of sigmoid, given s = sigmoid(x)

np.random.seed(1)  # seed so the random initialization is the same on every run
Wh = np.random.uniform(size=(inputLayerSize, hiddenLayerSize)) + 1   # hidden layer weights, initialized in [1, 2)
Wy = np.random.uniform(size=(hiddenLayerSize, outputLayerSize)) + 1  # output layer weights, initialized in [1, 2)

for i in range(epochs):
    H = sigmoid(np.dot(X, Wh))           # hidden layer results (feedforward)
    Y_Hat = sigmoid(np.dot(H, Wy))       # output layer results
    E = Y - Y_Hat                        # how much we missed (error)
    dY = E * sigmoidPrime(Y_Hat)         # delta at the output layer
    dH = dY.dot(Wy.T) * sigmoidPrime(H)  # delta at the hidden layer (backpropagation)
    Wy += H.T.dot(dY)                    # update output layer weights
    Wh += X.T.dot(dH)                    # update hidden layer weights

print(Y_Hat)
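If training succeeds, the four printed predictions should be close to the XOR targets [0, 1, 1, 0]. Note that this particular example keeps things minimal: it uses no explicit bias terms, and the weight updates are applied with an implicit learning rate of 1.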