Artificial neural networks in hydrology
Transcript

  • 1. ARTIFICIAL NEURAL NETWORKS IN HYDROLOGY BY THE ASCE TASK COMMITTEE: A Paper Review
  • 2. DEVELOPMENT OF ANN
    • Information processing occurs at many simple elements called nodes, also referred to as units, cells, or neurons.
    • Signals are passed between nodes through connection links.
    • Each node typically applies a non-linear transformation called an activation function to its net input to determine its output signal.
  • 3. CLASSIFICATION OF NEURAL NETWORKS
    • 1) Based on Number of Layers
    • a) Single (Hopfield Nets)
    • b) Bilayer (Carpenter/Grossberg adaptive resonance networks)
    • c) Multilayered (most back-propagation networks)
  • 4.
    • 2) Based on the direction of information flow and processing
    • a) Feed-Forward Network – The nodes are arranged in layers, starting from the input layer and ending at the output layer; there may also be several hidden layers in between.
    • b) Recurrent ANN – Information flows through the nodes in both directions, i.e., from input to output and vice versa.
  • 5. MATHEMATICAL ASPECTS
    • The inputs form an input vector $X = (x_1, x_2, \ldots, x_n)$.
    • The sequence of weights leading to the node forms a weight vector $W_j = (W_{1j}, W_{2j}, \ldots, W_{nj})$, where $W_{ij}$ represents the connection weight from the $i$th node in the preceding layer to this node.
    • The output $y_j$ is obtained by applying the activation function $f$ to the net input:
      $y_j = f\left(\sum_{i=1}^{n} W_{ij} x_i\right)$
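To make this concrete, here is a minimal sketch of a single node's computation in Python with NumPy; the sigmoid activation is an illustrative choice, not one prescribed by the paper.

```python
import numpy as np

def sigmoid(z):
    """A common choice of nonlinear activation function f."""
    return 1.0 / (1.0 + np.exp(-z))

def node_output(x, w, f=sigmoid):
    """Output y_j of one node: f applied to the net input sum_i W_ij * x_i.

    x: input vector (x_1, ..., x_n); w: weight vector (W_1j, ..., W_nj).
    """
    return f(np.dot(w, x))

# Example: a node with three inputs.
y = node_output(np.array([0.5, 1.0, -0.2]), np.array([0.4, -0.1, 0.7]))
```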
  • 6. NETWORK TRAINING
    • In order for an ANN to generate an output vector that is as close as possible to the target vector, a training process, also called learning, is used to find the optimal weight matrices W and bias vectors V that minimize a predetermined error function, which usually has the form
      $E = \sum_{P} \sum_{p} (y_i - t_i)^2$
  • 7.
    • where
    • $t_i$ = component of desired output vector $T$
    • $y_i$ = corresponding ANN output
    • $P$ = number of training patterns
    • $p$ = number of output nodes
    • Types of training
    • Supervised – requires an external "teacher"; a large number of inputs and their corresponding target outputs must be provided.
    • Unsupervised – no teacher is required; only the input data set is provided, and the ANN automatically adapts its connection weights to cluster the input patterns into classes with similar properties.
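As a concrete check of the definitions above, the error function can be evaluated in one line once the outputs and targets are stored as arrays; the (P, p) array shapes are an assumption about how the training patterns are stored.

```python
import numpy as np

def error(Y, T):
    """E = sum over P training patterns and p output nodes of (y_i - t_i)^2.

    Y: ANN outputs, shape (P, p); T: desired outputs T, shape (P, p).
    """
    return float(np.sum((Y - T) ** 2))
```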
  • 8. TRAINING ALGORITHMS
    • 1) Back-Propagation
    • It minimizes the network error function.
    • Each input pattern is passed through the network from the input layer to the output layer.
    • The network output is compared with the desired target output, and the error is computed.
    • This error is then propagated backward through the network to each node, and the connection weights are adjusted correspondingly.
  • 9.
    • Hence it involves two steps: a forward pass, in which the effect of the input is passed through the network to the output layer, and, once the error has been computed, a backward pass, in which the error generated at the output layer is propagated back toward the input layer while the weights are modified.
  • 10.
    • Back-propagation is a first-order method based on steepest gradient descent, with the direction vector set equal to the negative of the gradient vector. Consequently, the solution often follows a zigzag path while approaching the minimum of the error surface, which can slow down training. It is also possible for the training process to become trapped in a local minimum despite the use of a learning rate.
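The two-pass procedure of slides 8 and 9 can be sketched in a few dozen lines. This is a minimal illustration for a single-hidden-layer network: the sigmoid activation, layer sizes, learning rate, and epoch count are assumptions, and the constant factor from differentiating the squared error is absorbed into the learning rate.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_backprop(X, T, n_hidden=5, lr=0.1, epochs=1000, seed=0):
    """Back-propagation (steepest descent) for a one-hidden-layer network.

    X: (P, n_in) input patterns; T: (P, n_out) target patterns.
    """
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.1, size=(X.shape[1], n_hidden))   # input -> hidden
    W2 = rng.normal(scale=0.1, size=(n_hidden, T.shape[1]))   # hidden -> output

    for _ in range(epochs):
        # Forward pass: the effect of the input reaches the output layer.
        H = sigmoid(X @ W1)
        Y = sigmoid(H @ W2)

        # Error at the output layer, then propagated backward node by node.
        d_out = (Y - T) * Y * (1.0 - Y)
        d_hid = (d_out @ W2.T) * H * (1.0 - H)

        # Weight adjustment along the negative gradient (the fixed step
        # along -grad E is what produces the zigzag path described above).
        W2 -= lr * H.T @ d_out
        W1 -= lr * X.T @ d_hid
    return W1, W2
```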
  • 11. CONJUGATE GRADIENT ALGORITHMS
    • It does not proceed along the direction of the error gradient, but along a direction conjugate to the directions of the previous steps.
    • This prevents future steps from undoing the minimization achieved during the current step.
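Rather than hand-coding the conjugate direction updates, a common shortcut is to flatten the network weights into a single vector and hand the error function and its gradient to a general-purpose conjugate gradient routine. The sketch below uses SciPy's minimize with method='CG'; the quadratic test function is only a stand-in for a real network error surface.

```python
import numpy as np
from scipy.optimize import minimize

TARGET = np.array([1.0, -2.0, 0.5])  # stand-in "optimal weights"

def error(w):
    """Toy error surface standing in for the network error function E(w)."""
    return float(np.sum((w - TARGET) ** 2))

def error_grad(w):
    return 2.0 * (w - TARGET)

w0 = np.zeros(3)                                  # initial weight vector
res = minimize(error, w0, jac=error_grad, method='CG')
print(res.x)                                      # approaches TARGET
```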
  • 12. RADIAL BASIS FUNCTION
    • The hidden layer consists of a number of nodes and a parameter vector called a "center," which can be considered the weight vector.
    • The standard Euclidean distance is used to measure how far an input vector is from the center.
    • For each node, the Euclidean distance between the center and the network's input vector is computed and transformed by a nonlinear function that determines the output of the nodes in the hidden layer.
  • 13.
    • The major task of RBF network design is to determine the centers c. The simplest and easiest way may be to choose the centers randomly from the training set. Alternatively, the input training set can be clustered into groups and the center of each group chosen as a center.
    • After the centers are determined, the connection weights $w_i$ between the hidden layer and the output layer can be determined simply through ordinary back-propagation training.
    • The primary difference between the RBF network and back-propagation is in the nature of the nonlinearities associated with the hidden nodes. The nonlinearity in back-propagation is implemented by a fixed function such as a sigmoid. The RBF method, on the other hand, bases its nonlinearities on the data in the training set.
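A minimal sketch of this design in NumPy: centers drawn randomly from the training set (the "simplest and easiest" option above), Gaussian nonlinearities applied to Euclidean distances, and the output weights fit here by linear least squares, a common alternative to the back-propagation training mentioned on the slide. The Gaussian form and the width parameter are illustrative assumptions.

```python
import numpy as np

def train_rbf(X, T, n_centers=10, width=1.0, seed=0):
    """RBF network: random centers, Gaussian hidden layer, linear output layer.

    X: (N, d) training inputs; T: (N, p) training targets.
    """
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=n_centers, replace=False)]

    def hidden(Xq):
        # Euclidean distance from each input vector to each center...
        d = np.linalg.norm(Xq[:, None, :] - centers[None, :, :], axis=2)
        # ...transformed by a nonlinear (Gaussian) function.
        return np.exp(-(d ** 2) / (2.0 * width ** 2))

    # Output weights w_i by ordinary least squares.
    W, *_ = np.linalg.lstsq(hidden(X), T, rcond=None)
    return centers, W, hidden
```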
  • 14. CASCADE CORRELATION ALGORITHM
    • It starts with a minimal network without any hidden nodes and grows during training by adding new hidden nodes one by one.
    • Once a new hidden node has been added to the network, its input-side weights are frozen.
    • Each new hidden node is trained to maximize the correlation between the node's output and the residual output error of the network, as sketched below.
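The candidate-selection step at the heart of the algorithm can be sketched as follows. In the full algorithm each candidate's input-side weights are trained by gradient ascent on the correlation; this simplified sketch merely scores a pool of random candidates, so the pool size and tanh activation are assumptions for illustration.

```python
import numpy as np

def best_candidate(X, residual, n_candidates=8, seed=0):
    """Pick the candidate node whose output best correlates with the error.

    X: (N, d) inputs; residual: (N, p) current output errors of the network.
    """
    rng = np.random.default_rng(seed)
    best_w, best_score = None, -np.inf
    for _ in range(n_candidates):
        w = rng.normal(size=X.shape[1])      # candidate input-side weights
        v = np.tanh(X @ w)                   # candidate node output
        # Magnitude of the covariance between node output and residual
        # error, summed over output units (cascade-correlation objective).
        cov = np.sum((v - v.mean())[:, None] * (residual - residual.mean(axis=0)), axis=0)
        score = np.abs(cov).sum()
        if score > best_score:
            best_w, best_score = w, score
    return best_w  # once installed, these input-side weights are frozen
```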