1. FAULT IDENTIFICATION AND DISTANCE PROTECTION of Transmission Lines using Multilayer Perceptron (MLP) based Artificial Neural Network (ANN)
Sourav Behera
EE- B1, 1401106204
3. Distance protection is the name given to a protection scheme whose action depends on the distance between the fault and the point of observation.
4. Relation between the measured impedance, line impedance, and load impedance:
Z_m = Z_L + Z_load
When a fault occurs, let Z_F be the fault impedance of the line:
Z_m = Z_F
Z_F = (Z_L × l) / L
where 'L' is the total length of the line and 'l' is the fault distance.
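The relation above can be inverted to estimate how far along the line a fault lies. A minimal sketch, using impedance magnitudes and illustrative numbers (the function name and values are assumptions, not from the slides):

```python
# Sketch: rearranging Z_F = Z_L * l / L gives l = L * Z_F / Z_L.
# Magnitudes are used for simplicity; a real relay works with complex impedances.

def fault_distance(z_measured, z_line_total, line_length_km):
    """Estimate the fault distance l from the measured (fault) impedance."""
    return line_length_km * abs(z_measured) / abs(z_line_total)

# Example: a 100 km line with total impedance 40 ohm; the relay measures 10 ohm.
print(fault_distance(10, 40, 100))  # fault estimated 25.0 km from the relay
```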
5. Terms related to distance protection
1. Directional impedance relay
2. Zone setting
3. Tripping time
7. Comparison between the setting impedance and the measured line impedance helps to select the zones. The trip signal is the AND logic of setting, direction, and trip time. Practical trip time settings are: zone-1 ≈ 0 ms, zone-2: 350–500 ms, zone-3: 500–800 ms, and zone-4: 1–2 s.
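The zone selection and AND trip logic described above can be sketched as follows. The zone reach impedances are illustrative assumptions; the delays follow the slide's timing ranges:

```python
# Zone settings as (reach impedance in ohms, trip delay in seconds).
# Reach values are assumed for illustration; delays follow the slide:
# zone-1 ~0 ms, zone-2 350-500 ms, zone-3 500-800 ms, zone-4 1-2 s.
ZONES = [(10.0, 0.0), (16.0, 0.4), (22.0, 0.65), (30.0, 1.5)]

def select_zone(z_measured):
    """Return the first zone whose reach covers the measured impedance, else None."""
    for zone, (reach, _delay) in enumerate(ZONES, start=1):
        if z_measured <= reach:
            return zone
    return None

def trip(z_measured, forward, elapsed_s):
    """Trip signal = AND of setting pickup, directional check, and trip-time expiry."""
    zone = select_zone(z_measured)
    if zone is None or not forward:
        return False
    _reach, delay = ZONES[zone - 1]
    return elapsed_s >= delay

print(select_zone(12.0))       # measured 12 ohm falls in zone 2
print(trip(12.0, True, 0.5))   # trips: the 0.4 s zone-2 delay has expired
print(trip(12.0, True, 0.1))   # no trip yet: still within the zone-2 delay
```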
12. What do the extra layers gain you?
A perceptron (single unit) can learn any logic function that is linearly separable, i.e. separable by a hyperplane. A perceptron cannot represent XOR, since XOR is not linearly separable.
13. What does each layer do?
The 1st layer draws linear boundaries, the 2nd layer combines the boundaries, the 3rd layer can generate arbitrarily complex boundaries, and so on…
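The XOR case illustrates this layering directly: two threshold units each draw a linear boundary (OR and NAND), and a second-layer unit combines them. A minimal sketch with hand-picked weights (not learned):

```python
# XOR built from threshold units: no single hyperplane separates XOR,
# but an AND of two linear boundaries (OR and NAND) does.

def step(x):
    """Hard threshold activation."""
    return 1 if x >= 0 else 0

def xor(a, b):
    or_unit = step(a + b - 0.5)              # first layer: a OR b
    nand_unit = step(-a - b + 1.5)           # first layer: NOT (a AND b)
    return step(or_unit + nand_unit - 1.5)   # second layer: AND of the boundaries

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor(a, b))
```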
14. Properties of MLP
• No connections within a layer
• No direct connections between input and output layers
• Fully connected between adjacent layers
• The more layers, the greater the learning capacity
• The number of output units need not equal the number of input units
• The number of hidden units per layer can be more or fewer than the input or output units
• It works on feed-forward learning with error back-propagation
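The structural properties above can be sketched as a plain feed-forward pass: dense connections between consecutive layers only, with layer sizes chosen independently. The sizes here are arbitrary assumptions:

```python
# Sketch of the MLP structure: fully connected between adjacent layers,
# no connections within a layer, no direct input-to-output connections.
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, layer_sizes):
    """Feed-forward pass through randomly initialised dense layers."""
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        W = rng.standard_normal((n_out, n_in))  # full connection between layers
        b = np.zeros(n_out)
        x = np.tanh(W @ x + b)                  # layer activations
    return x

# 4 inputs, hidden layers of 8 and 6 units, 3 outputs:
# output count need not match input count.
y = mlp_forward(np.ones(4), [4, 8, 6, 3])
print(y.shape)  # (3,)
```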
15. Why MLP in fault identification and protection?
A fault condition can be simulated and the results used to train an ANN, which can then detect faults in real time. Trained neural networks can precisely help in early fault detection and in diagnosing external faults.
The input layer works as a raw-data comparator. The hidden layers' job is to transform the inputs into something that the output layer can use. The output layer transforms the hidden-layer activations into the desired scale.
16. How does it work?
Neural networks learn by adjusting the weights on the connections between neurons each time the network processes data. The layers analyse the data in a hierarchical way; hidden layers are part of the data-processing layers in the network. This means that the next time the network comes across a similar condition, it will already have learned the outcome.
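The train-on-simulated-faults idea from the last two slides can be sketched end to end. All data, thresholds, and layer sizes below are illustrative assumptions, not the slides' actual setup: fault/no-fault samples are simulated, and a small MLP is trained by error back-propagation so its weights capture the simulated outcomes.

```python
# Hedged sketch: simulate fault conditions, train a small MLP by
# error back-propagation, and watch the training loss fall.
import numpy as np

rng = np.random.default_rng(1)

# Simulated measured-impedance magnitudes: low impedance labelled as fault (1).
X = rng.uniform(0.0, 2.0, size=(200, 1))
y = (X[:, 0] < 1.0).astype(float)

# One hidden layer of 4 sigmoid units and one sigmoid output unit.
W1, b1 = rng.standard_normal((4, 1)), np.zeros(4)
W2, b2 = rng.standard_normal(4), 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr, losses = 0.5, []
for _ in range(3000):
    # Forward pass through the network.
    h = sigmoid(X @ W1.T + b1)            # hidden activations, shape (200, 4)
    out = sigmoid(h @ W2 + b2)            # output activations, shape (200,)
    losses.append(np.mean((out - y) ** 2))
    # Backward pass: squared-error gradients through the sigmoids,
    # then full-batch gradient-descent weight updates.
    d_out = (out - y) * out * (1 - out)
    d_h = np.outer(d_out, W2) * h * (1 - h)
    W2 -= lr * d_out @ h / len(X)
    b2 -= lr * d_out.mean()
    W1 -= lr * (d_h.T @ X) / len(X)
    b1 -= lr * d_h.mean(axis=0)

print(f"training loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Each pass through the data adjusts the connection weights, exactly as the slide describes; after training, a similar (low-impedance) condition maps to the learned fault outcome.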