University of Mosul
College of Engineering
Department of Electrical Engineering
Course title:
Application of the Artificial Neural Network (ANN)
in Fault Detection of the Electrical Power
Transmission System
Course instructors:
Prof. Dr. Ahmed Nasr Al-Sammak
Lecturer: Ibrahim Ismail Al-Naib
Assistant Lecturer: Karam Khairallah Al-Naqib
1
Contents
 Artificial intelligence
 Biological (Real) Neural Network (BNN)
 Artificial neural networks (ANNs)
 Parameters and terminology for ANN
 Classification of neural networks
 Neural Network Perceptron
 Neural Representation of AND, OR (Perceptron Algorithm)
 Deep Learning System
 Simulation of Artificial neural network in matlab
 Reference
2
Artificial intelligence
Fig. 1: The umbrella of artificial intelligence
AI is an umbrella term for the science of building computer
algorithms that learn and solve problems in a way similar to
human cognitive function.
ML is a subset of artificial intelligence (AI). It
refers to the set of algorithms that can learn
from data without being explicitly
programmed.
DL is a subset of machine learning (ML). It
refers to a set of algorithms that try to mimic
human neural systems, also known as neural
networks.
3
Biological (Real) Neural Network (BNN)
The term 'neural' originates from the 'neuron', the basic functional unit of the human (animal) nervous
system: the nerve cells present in the brain and other parts of the human (animal) body. A neural network is
a group of algorithms that captures the underlying relationships in a set of data, in a way similar to the human brain.
Artificial neural networks (ANNs)
The history of ANNs dates back to the 1940s, the decade of the first electronic computer. However, the first
important step took place in 1957, when Rosenblatt introduced the first concrete neural model, the
perceptron. Rosenblatt also took part in constructing the first successful neurocomputer.
Neural networks, also known as artificial neural networks (ANNs) or simulated neural networks (SNNs),
are a subset of machine learning and are at the heart of deep learning algorithms. Their name and
structure are inspired by the human brain, mimicking the way that biological neurons signal to one
another.
Artificial neural networks (ANNs) are composed of node layers: an input layer, one or more
hidden layers, and an output layer. Each node, or artificial neuron, connects to others and has an
associated weight and threshold. If the output of any individual node is above the specified threshold
value, that node is activated, sending data to the next layer of the network. Otherwise, no data is passed
along to the next layer of the network.
4
Fig.2: (a) Biological neuron (b) Artificial neuron
5
Fig. 3: Neural networks: (a) brain; (b) neural network; (c) neuron
connecting structure; (d) neuron structure; (e) neural network
architecture
6
Parameters and terminology for ANN
 Input layer
 Hidden layer
 Output layer
 Epochs
 Weights
 Bias
 Activation function
 Forward and Backward Propagation
 Learning rate
7
Input Layer: First is the input layer. This layer will accept the data and pass it to the
rest of the network.
Hidden Layer: The second type of layer is called the hidden layer. A neural network
contains one or more hidden layers.
Output layer: The last type of layer is the output layer. The output layer holds the
result or the output of the problem.
Fig. 4: The input, hidden, and output layers
8
Weights: control the signal (or the strength of the connection) between two neurons. In other
words, a weight decides how much influence an input will have on the output.
Biases: constants that act as an additional input into the next layer; the bias node always has the
value 1.
Fig. 5: The weights and biases in an ANN
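The role of the weights and the bias can be seen in a minimal sketch of a single artificial neuron (illustrative Python, not from the course material; the step threshold matches the activation rule described above):

```python
def neuron(inputs, weights, bias):
    # Each input is scaled by its weight (its influence), the constant bias
    # input of 1 contributes bias * 1, and a step threshold decides the output.
    net = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if net > 0 else 0
```

Raising a weight increases that input's influence on the net value, while the bias shifts the firing threshold.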
9
Epoch:
In terms of artificial neural networks, an epoch refers to one cycle through the full
training dataset. Usually, training a neural network takes more than a few epochs.
Learning rate :
In machine learning and statistics, the learning rate is a tuning parameter in an
optimization algorithm that determines the step size at each iteration while moving
toward a minimum of a loss function.
Fig. 6: The effect of the learning-rate value on ANN training
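The role of the step size can be illustrated with plain gradient descent on a one-variable loss (a hypothetical f(w) = (w − 3)², not a function from the slides):

```python
def gradient_step(w, lr):
    # One iteration of gradient descent on the loss f(w) = (w - 3)**2.
    grad = 2 * (w - 3)      # derivative of the loss at w
    return w - lr * grad    # the learning rate lr scales the step size

w = 0.0
for _ in range(50):
    w = gradient_step(w, lr=0.1)   # small lr: steady approach to the minimum at w = 3
```

With lr = 0.1 the iterate converges toward the minimum at w = 3; a value above 1.0 would make each step overshoot and diverge, which is the trade-off the figure contrasts.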
10
Forward and Backward Propagation
Forward propagation is the movement from the input layer (left) to the output layer (right) of
the neural network. Movement in the opposite direction, i.e. backward from the output layer to
the input layer, is called backward propagation.
In the forward-propagation stage, the data flows through the network to produce the outputs, and the loss
function is used to calculate the total error. We then use the backward-propagation algorithm to
calculate the gradient of the loss function with respect to each weight and bias.
Fig. 7: Forward and backward propagation in an ANN
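For a single sigmoid neuron, both stages fit in a few lines (an illustrative sketch assuming the squared-error loss E = ½(y − t)²; the slides do not fix a particular loss function):

```python
import math

def forward(x, w, b):
    # Forward propagation: weighted input passed through a sigmoid activation.
    return 1 / (1 + math.exp(-(w * x + b)))

def backward(x, w, b, t):
    # Backward propagation: gradient of E = 0.5*(y - t)**2 w.r.t. w and b,
    # obtained by the chain rule through the sigmoid (dy/dnet = y*(1 - y)).
    y = forward(x, w, b)
    delta = (y - t) * y * (1 - y)
    return delta * x, delta    # dE/dw, dE/db
```

The gradients returned by `backward` are exactly what a gradient-descent update, scaled by the learning rate, would subtract from the weight and bias.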
11
Activation functions in ANN
Fig. 8: Types of activation functions in an ANN
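A few of the activation-function shapes commonly listed (step, sigmoid, tanh, and ReLU) can be written directly; this is an illustrative selection, not the figure's full list:

```python
import math

def step(x):    return 1 if x > 0 else 0        # hard threshold
def sigmoid(x): return 1 / (1 + math.exp(-x))   # logistic, output in (0, 1)
def tanh(x):    return math.tanh(x)             # hyperbolic tangent, output in (-1, 1)
def relu(x):    return max(0.0, x)              # rectified linear unit
```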
12
Classification of neural networks
Neural networks can be classified according to:
1- Architecture: the pattern of connections between neurons.
 Single layer feed-forward
 Multi layer feed-forward
 Recurrent
2- Learning algorithm:
Supervised learning: the input data, called training data, has a known label or result.
Unsupervised learning: the input data is not labeled and does not have a known result.
Fig. 9: Classification of learning algorithms in machine learning
13
Fig. 10: The difference between supervised learning and unsupervised learning
Fig. 11: The difference between classification and regression in supervised learning
14
Neural Network Perceptron
1) Single-layer perceptron: used when the data can be separated by a single line.
2) Multi-layer perceptron: used when the data cannot be separated by a single line.
Fig. 12: The neural network perceptron classification
Where:
W = weights of the ANN, T = target output, Y = actual ANN output, ζ = learning rate (range 0 to 1)
15
Neural Representation of AND, OR (Perceptron
Algorithm)
Understanding how the perceptron works with logic gates (AND, OR).
Note: The purpose of this section is NOT to mathematically explain how the neural
network updates the weights, but to explain the logic behind how the values are
changed, in simple terms.
First, we need to know that the perceptron algorithm states that:
Prediction (y') = 1 if Wx + b > 0, and 0 if Wx + b ≤ 0
Also, the steps in this method are very similar to how Neural Networks learn, which is
as follows;
1) Initialize weight values and bias
2) Forward Propagate
3) Check the error
4) Back propagate and Adjust weights and bias
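The four steps can be sketched as a small training loop (illustrative Python; the update rule w ← w + ζ·error·x is the standard perceptron learning rule applied in the worked examples):

```python
def train_perceptron(samples, w, b, lr=0.1, max_epochs=20):
    # samples: list of (inputs, target) pairs; returns (weights, bias, epochs used).
    for epoch in range(1, max_epochs + 1):
        errors = 0
        for x, t in samples:                       # 2) forward propagate
            net = sum(wi * xi for wi, xi in zip(w, x)) + b
            y = 1 if net > 0 else 0
            err = t - y                            # 3) check the error
            if err != 0:                           # 4) adjust weights and bias
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b = b + lr * err
                errors += 1
        if errors == 0:        # a full epoch with no error: training is done
            return w, b, epoch
    return w, b, max_epochs
```

Starting from the initial values of the OR example (w1 = 0.1, w2 = 0.2, b = −0.2) this loop reproduces its result: weights [0.2, 0.3], bias 0, after 2 epochs.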
16
Ex.1: What are the weights and bias for the AND Gate perceptron?
Fig. 13: The truth table and distribution of values for the AND gate
 The data can be separated by a single line, so no hidden layers are needed.
First, we need to understand that the output of an AND gate is 1 only if both inputs (in this case,
x1 and x2) are 1. Initializing w1 and w2 as 1 and b as −1, and following the steps listed above, we get:
17
Because no error appears, we do not need another epoch.
Epoch = 1
x1  x2  Bias  w1  w2  w_bias  Net  ANN Output (Y)  Target (T)  Error
0   0   1     1   1   -1      -1   0               0           0
0   1   1     1   1   -1      0    0               0           0
1   0   1     1   1   -1      0    0               0           0
1   1   1     1   1   -1      1    1               1           0
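The Epoch-1 table can be checked directly: with the weights from the worked example, the net value exceeds 0 only for input (1, 1):

```python
def and_perceptron(x1, x2):
    # Weights and bias from the worked example: w1 = 1, w2 = 1, w_bias = -1.
    net = 1 * x1 + 1 * x2 - 1
    return 1 if net > 0 else 0

outputs = [and_perceptron(x1, x2) for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```

`outputs` matches the AND truth table, confirming the zero-error column.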
18
Weights for the final AND-gate ANN = [1, 1, -1]
Number of epochs = 1
Learning rate = 0.1
Error = 0
Fig. 14: The structure of the AND-gate ANN (inputs X1 and X2 with weights 1 and 1, a bias input with weight -1, and a step activation function in the output layer)
19
Ex.2: What are the weights and bias for the OR Gate perceptron?
Fig. 15: The truth table and distribution of values for the OR gate
 The data can be separated by a single line, so no hidden layers are needed.
First, we need to understand that the output of an OR gate is 1 if at least one of the inputs (in this
case, x1 or x2) is 1. Initializing w1 = 0.1, w2 = 0.2, and b = −0.2, and following the steps listed above, we get:
20
Epoch = 1
x1  x2  bias  w1   w2   w_bias  Net   ANN Output (Y)  Target (T)  Error
0   0   1     0.1  0.2  -0.2    -0.2  0               0           0
0   1   1     0.1  0.2  -0.2    0     0               1           1
1   0   1     0.1  0.3  -0.1    0     0               1           1
1   1   1     0.2  0.3  0       0.5   1               1           0
Weights for the next epoch = [0.2, 0.3, 0]; use these as the initial weights for Epoch 2.

Epoch = 2
x1  x2  bias  w1   w2   w_bias  Net   ANN Output (Y)  Target (T)  Error
0   0   1     0.2  0.3  0       0     0               0           0
0   1   1     0.2  0.3  0       0.3   1               1           0
1   0   1     0.2  0.3  0       0.2   1               1           0
1   1   1     0.2  0.3  0       0.5   1               1           0
Stop training the ANN because the error is 0 for all of Epoch 2.
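The final weights from Epoch 2 can be verified against the OR truth table:

```python
def or_perceptron(x1, x2):
    # Final weights from the worked example: w1 = 0.2, w2 = 0.3, w_bias = 0.
    net = 0.2 * x1 + 0.3 * x2 + 0
    return 1 if net > 0 else 0

outputs = [or_perceptron(x1, x2) for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]]
```

Only the (0, 0) input leaves the net at 0, which the prediction rule maps to 0; every other input fires.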
21
Weights for the final OR-gate ANN = [0.2, 0.3, 0]
Number of epochs = 2
Learning rate = 0.1
Error = 0
Fig. 16: The structure of the OR-gate ANN (inputs X1 and X2 with weights 0.2 and 0.3, a bias input with weight 0, and a step activation function in the output layer)
22
Ex.3: What are the weights and bias for the XOR Gate perceptron?
Fig. 17: The truth table and distribution of values for the XOR gate
 Because the data cannot be separated by one line, we need to add a hidden
layer to the neural network structure.
The weights and bias of this artificial neural network are found as shown in the
flowchart of Fig. 18.
23
Fig. 18: Flowchart of the neural network algorithm, showing the typical training
procedure of a neural network
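While the flowchart describes how such weights are found by training, a hand-picked hidden layer already shows why one hidden layer is enough for XOR (the weights below are illustrative, not outputs of the training algorithm):

```python
def step(net):
    return 1 if net > 0 else 0

def xor_net(x1, x2):
    # Hidden layer: h1 acts as an OR gate, h2 as an AND gate.
    h1 = step(1 * x1 + 1 * x2 - 0.5)
    h2 = step(1 * x1 + 1 * x2 - 1.5)
    # Output neuron: fires when OR is true but AND is not, i.e. XOR.
    return step(1 * h1 - 2 * h2 - 0.5)
```

Because XOR is "OR but not AND", the two hidden neurons carve the plane with two lines where a single-layer perceptron could only draw one.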
24
Deep Learning System
A neural network with multiple hidden layers and multiple nodes in each hidden layer is known as
a deep learning system or a deep neural network. Deep learning is the development of such
algorithms, which can be trained on, and predict outputs from, complex data.
The word "deep" in deep learning refers to the number of hidden layers, i.e. the depth of the neural
network. Essentially, any neural network with more than three layers (counting the input
layer and output layer) can be considered a deep learning model.
Fig. 19: The structural difference between (a) a neural network and (b) a deep neural
network
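Structurally, "deep" just means more hidden layers composed in sequence; a minimal sketch with made-up weights (illustrative, not a trained model):

```python
def relu(x):
    return max(0.0, x)

def dense_layer(inputs, weights, biases):
    # One fully connected layer: every neuron combines all the inputs.
    return [relu(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Two hidden layers applied one after another form a (small) deep network.
x = [1.0, 2.0]
h1 = dense_layer(x,  [[0.5, -0.5], [1.0, 1.0]], [0.0, -1.0])
h2 = dense_layer(h1, [[1.0, 0.5]],              [0.5])
```

Each added layer reuses the previous layer's outputs as its inputs, which is all that separates Fig. 19(b) from Fig. 19(a).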
25
1- Recurrent neural networks
A recurrent neural network (RNN) is a type of artificial neural network that uses sequential data
or time-series data. These deep learning algorithms are commonly used for ordinal or temporal
problems, such as language translation, speech recognition, and image captioning.
 Recurrent Neural Network Applications
A common example of Recurrent Neural Networks is machine translation. For example, a neural
network may take an input sentence in Spanish and translate it into a sentence in English. The
network determines the likelihood of each word in the output sentence based upon the word itself,
and the previous output sequence.
Fig. 20: The structure of a recurrent
neural network
26
2- A convolutional neural network (CNN):
CNN is a deep learning neural network designed for processing structured arrays of data such as
images. A convolutional neural network is a feed-forward neural network, often with up to 20 or 30
layers.
Applications of Convolutional Neural Networks
Convolutional neural networks are most widely known for image analysis but they have also been
adapted for several applications in other areas of machine learning.
1) Self-driving cars
2) Text classification
3) Object detection
Fig. 21: The structure of a convolutional neural network
27
3- EfficientNet neural network
In May 2019, two engineers from the Google Brain team, Mingxing Tan and Quoc V. Le,
published a paper called "EfficientNet: Rethinking Model Scaling for Convolutional Neural
Networks". The core idea of the publication was strategically scaling deep neural networks, but
it also introduced a new family of neural nets, EfficientNets.
EfficientNet proposed scaling up CNN models in a much more principled way to obtain better
accuracy and efficiency.
EfficientNet uses a technique called the compound coefficient to scale up models in a simple but
effective manner, instead of arbitrarily scaling up width, depth, or resolution:
Depth: the number of layers.
Width: the number of neurons (and feature maps) at each layer.
Resolution: the size of the input image.
Applications of the EfficientNet neural network:
Object detection in images at multiple scales.
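The compound coefficient φ ties the three dimensions together: depth, width, and resolution are scaled as α^φ, β^φ, γ^φ under the constraint α·β²·γ² ≈ 2, so total FLOPS grow roughly as 2^φ. The base coefficients below are the values reported in the EfficientNet paper:

```python
# Base coefficients from the EfficientNet paper (grid-searched at phi = 1).
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15

def compound_scale(phi):
    # Multipliers for depth, width, and resolution at compound coefficient phi;
    # FLOPS grow roughly by (ALPHA * BETA**2 * GAMMA**2)**phi, i.e. about 2**phi.
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi
```

For example, φ = 2 deepens the network by 1.44x while widening it by 1.21x, keeping the three dimensions in a fixed ratio rather than scaling one arbitrarily.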
28
Fig. 22: Model scaling. (a) is a baseline network example; (b)-(d) are conventional scaling methods that only
increase one dimension of network width, depth, or resolution; (e) is the compound scaling
method proposed in the paper, which uniformly scales all three dimensions with a fixed ratio.
Fig. 23: Class Activation Map (CAM) for models with different scaling methods
29
Simulation of artificial neural networks in MATLAB
In MATLAB, a feed-forward multilayer perceptron ANN can be designed by two methods:
1- A script (M-file)
2- The graphical user interface (GUI) nftool (Neural Network Tool)
Workflow for Neural Network Design
To implement a neural network (the design process), seven steps must be followed:
1. Collect data (Load data source).
2. Neural Network creation.
3. Configure the network (selection of network architecture).
4. Initialize the weights and biases.
5. Train the network.
6. Validate the network (Testing and Performance evaluation).
7. Use the network.
30
1- Script (M-file): using code
The MATLAB commands used in the procedure are newff, train, and sim.
1) newff creates a feed-forward backpropagation network object and also
automatically initializes the network.
Syntax:
 net = newff(PR, [S1 S2 ... SNl], {TF1, TF2, ..., TFNl}, BTF, BLF, PF)
Description:
The function takes the following parameters:
 PR - Rx2 matrix of min and max values for R input elements.
 Si - Number of neurons (size) in the ith layer, i = 1, ..., Nl.
 Nl - Number of layers.
 TFi - Transfer function of the ith layer. The default is 'tansig' for hidden layers
and 'purelin' for the output layer. The transfer functions TF{i} can be any
differentiable transfer function such as tansig, logsig, or purelin.
 BTF - Backpropagation training function; default = 'traingdx'.
 BLF - Backpropagation learning function; default = 'learngdm'.
 PF - Performance function; default = 'mse'.
It returns an Nl-layer feed-forward backpropagation network. newff uses a random number generator to create
the initial values for the network weights.
Ex. 1: Design an XOR gate by ANN
31
The result:
32
Fig. 24: The result of the XOR-gate ANN
33
2- Graphical user interface (GUI): nftool (Neural Network Tool)
Ex. 2: Design a binary-to-decimal converter by ANN
Fig. 25: The binary-to-decimal conversion system
34
The truth table for binary-to-decimal conversion is shown below.
For training the ANN:
In the MATLAB command window, run nftool.
The input data = [D C B A]
The output data = [Y]
The training steps are shown on the next slide.
D C B A Y
0 0 0 0 0
0 0 0 1 1
0 0 1 0 2
0 0 1 1 3
0 1 0 0 4
0 1 0 1 5
0 1 1 0 6
0 1 1 1 7
1 0 0 0 8
1 0 0 1 9
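The mapping the network must learn is just the positional weighting of the four bits (D is the most significant):

```python
def binary_to_decimal(d, c, b, a):
    # Positional bit weights: D = 8, C = 4, B = 2, A = 1.
    return 8 * d + 4 * c + 2 * b + 1 * a
```

The ANN is trained to approximate this exact function from the ten rows of the truth table.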
35
Fig. 26: The training data for the binary-to-decimal ANN
36
Fig. 27: The results for the binary-to-decimal ANN
37
Reference
1- Ivan Nunes da Silva et al., Artificial Neural Networks: A Practical Course, Springer International
Publishing, Switzerland, 2017. DOI: https://doi.org/10.1007/978-3-319-43162-8
2- Phil Kim, MATLAB Deep Learning: With Machine Learning, Neural Networks and Artificial
Intelligence, Apress. ISBN-10: 1484228456.
3- Daniel Graupe, Principles of Artificial Neural Networks, 3rd ed., World Scientific Publishing
Company. ISBN-10: 9814522732.
4- P. J. Braspenning et al. (eds.), Artificial Neural Networks: An Introduction to ANN Theory and
Practice, Springer-Verlag, Berlin Heidelberg, 1995. ISBN-10: 3540594884.
5- Howard Demuth, Mark Beale, Neural Network Toolbox User's Guide, www.mathworks.com.
6- MATLAB help documentation.
38

More Related Content

Similar to تطبيق الشبكة العصبية الاصطناعية (( ANN في كشف اعطال منظومة نقل القدرة الكهربائية

ACUMENS ON NEURAL NET AKG 20 7 23.pptx
ACUMENS ON NEURAL NET AKG 20 7 23.pptxACUMENS ON NEURAL NET AKG 20 7 23.pptx
ACUMENS ON NEURAL NET AKG 20 7 23.pptxgnans Kgnanshek
 
Supervised Learning
Supervised LearningSupervised Learning
Supervised Learningbutest
 
SOFT COMPUTERING TECHNICS -Unit 1
SOFT COMPUTERING TECHNICS -Unit 1SOFT COMPUTERING TECHNICS -Unit 1
SOFT COMPUTERING TECHNICS -Unit 1sravanthi computers
 
Artificial neural network paper
Artificial neural network paperArtificial neural network paper
Artificial neural network paperAkashRanjandas1
 
Artificial neural networks
Artificial neural networks Artificial neural networks
Artificial neural networks ShwethaShreeS
 
Modeling of neural image compression using gradient decent technology
Modeling of neural image compression using gradient decent technologyModeling of neural image compression using gradient decent technology
Modeling of neural image compression using gradient decent technologytheijes
 
Artificial Neural Networks (ANNs) focusing on the perceptron Algorithm.pptx
Artificial Neural Networks (ANNs) focusing on the perceptron Algorithm.pptxArtificial Neural Networks (ANNs) focusing on the perceptron Algorithm.pptx
Artificial Neural Networks (ANNs) focusing on the perceptron Algorithm.pptxMDYasin34
 
Artificial Neural Networks ppt.pptx for final sem cse
Artificial Neural Networks  ppt.pptx for final sem cseArtificial Neural Networks  ppt.pptx for final sem cse
Artificial Neural Networks ppt.pptx for final sem cseNaveenBhajantri1
 
Perceptron (neural network)
Perceptron (neural network)Perceptron (neural network)
Perceptron (neural network)EdutechLearners
 
DESIGN AND IMPLEMENTATION OF BINARY NEURAL NETWORK LEARNING WITH FUZZY CLUSTE...
DESIGN AND IMPLEMENTATION OF BINARY NEURAL NETWORK LEARNING WITH FUZZY CLUSTE...DESIGN AND IMPLEMENTATION OF BINARY NEURAL NETWORK LEARNING WITH FUZZY CLUSTE...
DESIGN AND IMPLEMENTATION OF BINARY NEURAL NETWORK LEARNING WITH FUZZY CLUSTE...cscpconf
 
Neural Networks Ver1
Neural  Networks  Ver1Neural  Networks  Ver1
Neural Networks Ver1ncct
 
data science course
data science coursedata science course
data science coursemarketer1234
 
machine learning training in bangalore
machine learning training in bangalore machine learning training in bangalore
machine learning training in bangalore kalojambhu
 
data science course in pune
data science course in punedata science course in pune
data science course in punemarketer1234
 
Data science certification in mumbai
Data science certification in mumbaiData science certification in mumbai
Data science certification in mumbaiprathyusha1234
 

Similar to تطبيق الشبكة العصبية الاصطناعية (( ANN في كشف اعطال منظومة نقل القدرة الكهربائية (20)

ACUMENS ON NEURAL NET AKG 20 7 23.pptx
ACUMENS ON NEURAL NET AKG 20 7 23.pptxACUMENS ON NEURAL NET AKG 20 7 23.pptx
ACUMENS ON NEURAL NET AKG 20 7 23.pptx
 
Supervised Learning
Supervised LearningSupervised Learning
Supervised Learning
 
SOFT COMPUTERING TECHNICS -Unit 1
SOFT COMPUTERING TECHNICS -Unit 1SOFT COMPUTERING TECHNICS -Unit 1
SOFT COMPUTERING TECHNICS -Unit 1
 
Neural Networks
Neural NetworksNeural Networks
Neural Networks
 
Artificial neural network paper
Artificial neural network paperArtificial neural network paper
Artificial neural network paper
 
MNN
MNNMNN
MNN
 
Artificial neural networks
Artificial neural networks Artificial neural networks
Artificial neural networks
 
20120140503023
2012014050302320120140503023
20120140503023
 
Modeling of neural image compression using gradient decent technology
Modeling of neural image compression using gradient decent technologyModeling of neural image compression using gradient decent technology
Modeling of neural image compression using gradient decent technology
 
Artificial Neural Networks (ANNs) focusing on the perceptron Algorithm.pptx
Artificial Neural Networks (ANNs) focusing on the perceptron Algorithm.pptxArtificial Neural Networks (ANNs) focusing on the perceptron Algorithm.pptx
Artificial Neural Networks (ANNs) focusing on the perceptron Algorithm.pptx
 
Artificial Neural Networks ppt.pptx for final sem cse
Artificial Neural Networks  ppt.pptx for final sem cseArtificial Neural Networks  ppt.pptx for final sem cse
Artificial Neural Networks ppt.pptx for final sem cse
 
Perceptron (neural network)
Perceptron (neural network)Perceptron (neural network)
Perceptron (neural network)
 
Perceptron
PerceptronPerceptron
Perceptron
 
DESIGN AND IMPLEMENTATION OF BINARY NEURAL NETWORK LEARNING WITH FUZZY CLUSTE...
DESIGN AND IMPLEMENTATION OF BINARY NEURAL NETWORK LEARNING WITH FUZZY CLUSTE...DESIGN AND IMPLEMENTATION OF BINARY NEURAL NETWORK LEARNING WITH FUZZY CLUSTE...
DESIGN AND IMPLEMENTATION OF BINARY NEURAL NETWORK LEARNING WITH FUZZY CLUSTE...
 
6
66
6
 
Neural Networks Ver1
Neural  Networks  Ver1Neural  Networks  Ver1
Neural Networks Ver1
 
data science course
data science coursedata science course
data science course
 
machine learning training in bangalore
machine learning training in bangalore machine learning training in bangalore
machine learning training in bangalore
 
data science course in pune
data science course in punedata science course in pune
data science course in pune
 
Data science certification in mumbai
Data science certification in mumbaiData science certification in mumbai
Data science certification in mumbai
 

Recently uploaded

Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...shyamraj55
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupFlorian Wilhelm
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek SchlawackFwdays
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsRizwan Syed
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Scott Keck-Warren
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr BaganFwdays
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piececharlottematthew16
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsMiki Katsuragi
 
Pigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfAddepto
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Enterprise Knowledge
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):comworks
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubKalema Edgar
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024BookNet Canada
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLScyllaDB
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Mattias Andersson
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsMemoori
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsSergiu Bodiu
 
APIForce Zurich 5 April Automation LPDG
APIForce Zurich 5 April  Automation LPDGAPIForce Zurich 5 April  Automation LPDG
APIForce Zurich 5 April Automation LPDGMarianaLemus7
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxNavinnSomaal
 

Recently uploaded (20)

Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
 
Streamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project SetupStreamlining Python Development: A Guide to a Modern Project Setup
Streamlining Python Development: A Guide to a Modern Project Setup
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
 
Scanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL CertsScanning the Internet for External Cloud Exposures via SSL Certs
Scanning the Internet for External Cloud Exposures via SSL Certs
 
Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024Advanced Test Driven-Development @ php[tek] 2024
Advanced Test Driven-Development @ php[tek] 2024
 
"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan"ML in Production",Oleksandr Bagan
"ML in Production",Oleksandr Bagan
 
Story boards and shot lists for my a level piece
Story boards and shot lists for my a level pieceStory boards and shot lists for my a level piece
Story boards and shot lists for my a level piece
 
Vertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering TipsVertex AI Gemini Prompt Engineering Tips
Vertex AI Gemini Prompt Engineering Tips
 
Pigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food ManufacturingPigging Solutions in Pet Food Manufacturing
Pigging Solutions in Pet Food Manufacturing
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdf
 
Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024Designing IA for AI - Information Architecture Conference 2024
Designing IA for AI - Information Architecture Conference 2024
 
CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):CloudStudio User manual (basic edition):
CloudStudio User manual (basic edition):
 
Unleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding ClubUnleash Your Potential - Namagunga Girls Coding Club
Unleash Your Potential - Namagunga Girls Coding Club
 
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
New from BookNet Canada for 2024: BNC CataList - Tech Forum 2024
 
Developer Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQLDeveloper Data Modeling Mistakes: From Postgres to NoSQL
Developer Data Modeling Mistakes: From Postgres to NoSQL
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?
 
AI as an Interface for Commercial Buildings
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial Buildings
 
DevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platformsDevEX - reference for building teams, processes, and platforms
DevEX - reference for building teams, processes, and platforms
 
APIForce Zurich 5 April Automation LPDG
APIForce Zurich 5 April  Automation LPDGAPIForce Zurich 5 April  Automation LPDG
APIForce Zurich 5 April Automation LPDG
 
SAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptxSAP Build Work Zone - Overview L2-L3.pptx
SAP Build Work Zone - Overview L2-L3.pptx
 

تطبيق الشبكة العصبية الاصطناعية (( ANN في كشف اعطال منظومة نقل القدرة الكهربائية

  • 1. ‫الموصل‬ ‫جامعة‬ ‫الهندسة‬ ‫كلية‬ ‫الكهربائية‬ ‫الهندسة‬ ‫قسم‬ ‫الدورة‬ ‫عنوان‬ ‫االصطناعية‬ ‫العصبية‬ ‫الشبكة‬ ‫تطبيق‬ ( ( ANN ‫في‬ ‫القدرة‬ ‫نقل‬ ‫منظومة‬ ‫اعطال‬ ‫كشف‬ ‫الكهربائية‬ ‫الدورة‬ ‫على‬ ‫القائمين‬ ‫الدكتور‬ ‫االستاذ‬ : ‫السماك‬ ‫نصر‬ ‫احمد‬ ‫المدرس‬ : ‫النائب‬ ‫اسماعيل‬ ‫ابراهيم‬ ‫المساعد‬ ‫المدرس‬ : ‫النقيب‬ ‫هللا‬ ‫خير‬ ‫كرم‬ 1
  • 2. Contents  Artificial intelligence  Biological (Real) Neural Network (BNN)  Artificial neural networks (ANNs)  Parameters and terminology for ANN  Classification of neural networks  Neural Network Perceptron  Neural Representation of AND, OR (Perceptron Algorithm)  Deep Learning System  Simulation of Artificial neural network in matlab  Reference 2
  • 3. Artificial intelligence Fig.1 the umbrella for Artificial intelligence AI, is an umbrella term and refers to the science that studies way to build computer algorithms that learn and solve problem in a similar way as human cognitive function. ML, is a subset of artificial intelligence (AI). It refers to the set of algorithms that have the ability to learn from data without being explicitly programmed. DL, is a subset of machine learning (ML). It refers to a set of algorithms that try to mimic human neural systems, also known as neural networks. 3
  • 4. Biological (Real) Neural Network (BNN) The term ‘Neural’ has origin from the human (animal) nervous system’s basic functional unit ‘neuron’ or nerve cells present in the brain and other parts of the human (animal) body. A neural network is a group of algorithms that certify the underlying relationship in a set of data similar to the human brain. Artificial neural networks (ANNs) History of the ANNs stems from the 1940s, the decade of the first electronic computer. However, the first important step took place in 1957 when Rosenblatt introduced the first concrete neural model, the perceptron. Rosenblatt also took part in constructing the first successful neuro computer. Neural networks, also known as artificial neural networks (ANNs) or simulated neural networks (SNNs), are a subset of machine learning and are at the heart of deep learning algorithms. Their name and structure are inspired by the human brain, mimicking the way that biological neurons signal to one another. Artificial neural networks (ANNs) are comprised of a node layers, containing an input layer, one or more hidden layers, and an output layer. Each node, or artificial neuron, connects to another and has an associated weight and threshold. If the output of any individual node is above the specified threshold value, that node is activated, sending data to the next layer of the network. Otherwise, no data is passed along to the next layer of the network. 4
  • 5. Fig.2: (a) Biological neuron (b) Artificial neuron 5
  • 6. Fig.3: Neural networks: (a) brain; (b) neural network; (c) neuron connecting structure; (d) neuron structure; (e) neural network architecture 6
  • 7. Parameters and terminology for ANN  Input layer  Hidden layer  Output layer  Epochs  Weights  Bias  Activation function  Forward and Backward Propagation  Learning rate 7
  • 8. Input layer: the first layer. It accepts the data and passes it to the rest of the network. Hidden layer: the second type of layer. A neural network has one or more hidden layers. Output layer: the last type of layer. It holds the result, or the output, of the problem. Fig.4: the input, hidden, and output layers 8
  • 9. Weights: control the signal (or the strength of the connection) between two neurons. In other words, a weight decides how much influence an input has on the output. Biases: additional learnable constants; each bias can be viewed as the weight on an extra input whose value is always fixed at 1. Fig.5: weights and biases in an ANN 9
  • 10. Epoch: in terms of artificial neural networks, an epoch refers to one cycle through the full training dataset. Usually, training a neural network takes more than a few epochs. Learning rate: in machine learning and statistics, the learning rate is a tuning parameter in an optimization algorithm that determines the step size at each iteration while moving toward a minimum of a loss function. Fig.6: the effect of the learning-rate value on ANN training 10
  • 11. Forward and Backward Propagation Forward propagation is the movement from the input layer (left) to the output layer (right) of the neural network. The process of moving from right to left, i.e., backward from the output layer to the input layer, is called backward propagation. In the forward propagation stage, data flows through the network to produce the outputs, and the loss function is used to calculate the total error. Then the backward propagation algorithm computes the gradient of the loss function with respect to each weight and bias. Fig.7: forward and backward propagation in an ANN 11
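The two passes can be sketched for a single sigmoid neuron with squared-error loss (a minimal sketch; the function names and starting values are ours):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward_backward(x, w, b, target, lr=0.1):
    # Forward propagation: data flows from input to output.
    net = sum(xi * wi for xi, wi in zip(x, w)) + b
    y = sigmoid(net)
    # Loss: 0.5 * (y - target)^2; chain rule gives its derivative
    # with respect to the net input.
    delta = (y - target) * y * (1.0 - y)
    # Backward propagation: the gradient w.r.t. each weight is
    # delta * x_i, and every parameter steps against its gradient.
    w = [wi - lr * delta * xi for wi, xi in zip(w, x)]
    b = b - lr * delta
    return y, w, b
```

Repeating this forward/backward cycle drives the output toward the target.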
  • 12. Activation functions in ANN Fig.8: Types of Activation functions in ANN 12
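The activation functions commonly shown in such charts have standard definitions, for example in Python:

```python
import math

def step(z):        # binary threshold
    return 1 if z > 0 else 0

def sigmoid(z):     # squashes z into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def tanh(z):        # squashes z into (-1, 1)
    return math.tanh(z)

def relu(z):        # passes positives, zeroes out negatives
    return max(0.0, z)
```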
  • 13. Classification of neural networks Neural networks can be classified according to: 1- Architecture: the pattern of connections between neurons.  Single-layer feed-forward  Multi-layer feed-forward  Recurrent 2- Learning algorithm: In supervised learning, the input data (training data) is paired with known labels. In unsupervised learning, the input data is not labeled and does not have a known result. Fig.9: classification of learning algorithms in machine learning 13
  • 14. Fig.10: the difference between supervised learning and unsupervised learning Fig.11: the difference between classification and regression in supervised learning 14
  • 15. Neural Network Perceptron 1) Single-layer perceptron: used when the data can be separated by a single line. 2) Multi-layer perceptron: used when the data cannot be separated by a single line. Fig.12: the neural network perceptron classification Where: W = weights of the ANN, T = target output, Y = actual ANN output, ζ = learning rate, in the range (0, 1) 15
  • 16. Neural Representation of AND, OR (Perceptron Algorithm) Understanding how the perceptron works with logic gates (AND, OR). Note: the purpose here is NOT to mathematically explain how the neural network updates the weights, but to explain, in simple terms, the logic behind how the values change. First, we need to know that the perceptron algorithm states: Prediction (y`) = 1 if Wx+b > 0 and 0 if Wx+b ≤ 0 The steps in this method are very similar to how neural networks learn, as follows: 1) Initialize weight values and bias 2) Forward propagate 3) Check the error 4) Back propagate and adjust weights and bias 16
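The four steps can be sketched as a plain-Python perceptron trainer. The update rule w ← w + ζ·(t − y)·x is the standard perceptron learning rule; the function names are ours:

```python
def predict(x, w, b):
    # Prediction (y') = 1 if Wx + b > 0, else 0.
    return 1 if sum(xi * wi for xi, wi in zip(x, w)) + b > 0 else 0

def train_perceptron(samples, targets, w, b, lr=0.1, max_epochs=100):
    # w and b hold the initialized weights and bias (step 1).
    for epoch in range(1, max_epochs + 1):
        total_error = 0
        for x, t in zip(samples, targets):
            y = predict(x, w, b)             # step 2: forward propagate
            error = t - y                    # step 3: check the error
            if error != 0:                   # step 4: adjust weights and bias
                w = [wi + lr * error * xi for wi, xi in zip(w, x)]
                b = b + lr * error
                total_error += abs(error)
        if total_error == 0:                 # an error-free epoch: stop
            return w, b, epoch
    return w, b, max_epochs
```

Starting from the initial weights used in the AND example below (w = [1, 1], b = −1), no sample produces an error, so training stops after a single epoch.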
  • 17. Ex.1: What are the weights and bias for the AND Gate perceptron? Fig.13: the truth table and distribution of the points (0,0), (0,1), (1,0), (1,1) for the AND Gate  The data can be separated by a single line, so no hidden layers are needed. First, we need to understand that the output of an AND gate is 1 only if both inputs (in this case, x1 and x2) are 1. Initializing w1 = 1, w2 = 1, and b = −1, and following the steps listed above, we get: 17
  • 18. Epoch = 1

x1  x2  Bias  w1  w2  w_bias  Net  ANN Output (Y)  Target (T)  Error
0   0   1     1   1   -1      -1   0               0           0
0   1   1     1   1   -1       0   0               0           0
1   0   1     1   1   -1       0   0               0           0
1   1   1     1   1   -1       1   1               1           0

Because no error appears, a second epoch is not needed. 18
  • 19. Weights for the final AND Gate ANN = [1, 1, −1] Number of epochs = 1 Learning rate = 0.1 Error = 0 Fig. 14: the structure of the AND Gate ANN (inputs X1 and X2, a bias input, one output neuron with an activation function, and output Y) 19
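As a standalone check, the final weights [1, 1] and bias −1 can be verified against the AND truth table:

```python
# Verify that weights [1, 1] and bias -1 implement the AND gate
# with a step activation (values taken from the worked example).
def and_gate(x1, x2):
    net = 1 * x1 + 1 * x2 - 1
    return 1 if net > 0 else 0
```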
  • 20. Ex.2: What are the weights and bias for the OR Gate perceptron? Fig.15: the truth table and distribution of the points (0,0), (0,1), (1,0), (1,1) for the OR Gate  The data can be separated by a single line, so no hidden layers are needed. First, we need to understand that the output of an OR gate is 1 if at least one of the inputs (in this case, x1 and x2) is 1. Initializing w1 = 0.1, w2 = 0.2, and b = −0.2, and following the steps listed above, we get: 20
  • 21. Epoch = 1

x1  x2  bias  w1   w2   w_bias  Net   ANN Output (Y)  Target (T)  Error
0   0   1     0.1  0.2  -0.2    -0.2  0               0           0
0   1   1     0.1  0.2  -0.2     0    0               1           1
1   0   1     0.1  0.3  -0.1     0    0               1           1
1   1   1     0.2  0.3   0       0.5  1               1           0

Weights after epoch 1 = [0.2, 0.3, 0]; use these as the initial weights for the next epoch.

Epoch = 2

x1  x2  bias  w1   w2   w_bias  Net   ANN Output (Y)  Target (T)  Error
0   0   1     0.2  0.3  0       0     0               0           0
0   1   1     0.2  0.3  0       0.3   1               1           0
1   0   1     0.2  0.3  0       0.2   1               1           0
1   1   1     0.2  0.3  0       0.5   1               1           0

Stop training the ANN because the error is 0 for all rows of epoch 2. 21
  • 22. Weights for the final OR Gate ANN = [0.2, 0.3, 0] Number of epochs = 2 Learning rate = 0.1 Error = 0 Fig. 16: the structure of the OR Gate ANN (inputs X1 and X2, a bias input, one output neuron with an activation function, and output Y) 22
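Likewise, the final weights [0.2, 0.3] and bias 0 can be checked against the OR truth table:

```python
# Verify that weights [0.2, 0.3] and bias 0 implement the OR gate
# with a step activation (values taken from the worked example).
def or_gate(x1, x2):
    net = 0.2 * x1 + 0.3 * x2 + 0.0
    return 1 if net > 0 else 0
```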
  • 23. Ex.3: What are the weights and bias for the XOR Gate perceptron? Fig.17: the truth table and distribution of the points (0,0), (0,1), (1,0), (1,1) for the XOR Gate  Because the data cannot be separated by a single line, we need to add a hidden layer to the neural network structure. The weights and biases of the artificial neural network are found as shown in the flowchart in Fig. 18. 23
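To see why a hidden layer solves XOR, here is one well-known hand-picked weight assignment (a sketch of ours, not the values produced by the flowchart's training procedure): the hidden layer computes OR and NAND of the inputs, and the output neuron ANDs them together.

```python
def step(net):
    return 1 if net > 0 else 0

def xor_ann(x1, x2):
    # Hidden layer (hand-picked weights):
    h1 = step(0.2 * x1 + 0.3 * x2 + 0.0)   # OR neuron
    h2 = step(-1 * x1 - 1 * x2 + 1.5)      # NAND neuron
    # Output layer: AND of the two hidden outputs.
    return step(1 * h1 + 1 * h2 - 1)
```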
  • 24. Fig. 18: flowchart of the neural network algorithm, showing the typical training procedure of a neural network 24
  • 25. Deep Learning System A neural network with multiple hidden layers, and multiple nodes in each hidden layer, is known as a deep learning system or a deep neural network. Deep learning is the development of algorithms that can be trained on, and predict outputs from, complex data. The word “deep” in deep learning refers to the number of hidden layers, i.e., the depth of the neural network. Essentially, every neural network with more than three layers, counting the input layer and output layer, can be considered a deep learning model. Fig.19: the structural difference between (a) a neural network and (b) a deep neural network 25
  • 26. 1- Recurrent neural networks A recurrent neural network (RNN) is a type of artificial neural network which uses sequential data or time-series data. These deep learning algorithms are commonly used for ordinal or temporal problems, such as language translation, speech recognition, and image captioning.  Recurrent Neural Network Applications A common example of recurrent neural networks is machine translation. For example, a neural network may take an input sentence in Spanish and translate it into a sentence in English. The network determines the likelihood of each word in the output sentence based upon the word itself and the previous output sequence. Fig.20: the structure of a recurrent neural network 26
  • 27. 2- Convolutional neural networks (CNNs): a CNN is a deep learning neural network designed for processing structured arrays of data such as images. A convolutional neural network is a feed-forward neural network, often with up to 20 or 30 layers. Applications of Convolutional Neural Networks Convolutional neural networks are most widely known for image analysis, but they have also been adapted for several applications in other areas of machine learning: 1) Self-driving cars 2) Text classification 3) Object detection Fig.21: the structure of a convolutional neural network 27
  • 28. 3- EfficientNet neural network In May 2019, two engineers from the Google Brain team, Mingxing Tan and Quoc V. Le, published a paper called “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks”. The core idea of the publication was about strategically scaling deep neural networks, but it also introduced a new family of neural nets, EfficientNets. EfficientNet proposed scaling up CNN models to obtain better accuracy and efficiency in a much more principled way. EfficientNet uses a technique called the compound coefficient to scale up models in a simple but effective manner, instead of arbitrarily scaling up width, depth, or resolution. Depth: the number of layers. Width: the number of neurons (and feature maps) at each layer. Resolution: the size of the input image. Application of the EfficientNet neural network: object detection in images at multiple scales. 28
  • 29. Fig. 22: model scaling. (a) is a baseline network example; (b)-(d) are conventional scaling methods that only increase one dimension of network width, depth, or resolution; (e) is the proposed compound scaling method that uniformly scales all three dimensions with a fixed ratio. Fig. 23: Class Activation Maps (CAMs) for models with different scaling methods 29
  • 30. Simulation of an Artificial Neural Network in MATLAB In MATLAB, a feed-forward multilayer perceptron ANN can be designed by two methods: 1- Script (M-file) 2- Graphical user interface (GUI) function (nftool), the Neural Network Tool. Workflow for Neural Network Design To implement a neural network (the design process), 7 steps must be followed: 1. Collect data (load the data source). 2. Create the neural network. 3. Configure the network (selection of the network architecture). 4. Initialize the weights and biases. 5. Train the network. 6. Validate the network (testing and performance evaluation). 7. Use the network. 30
  • 31. 1- Script (M-file): using code The MATLAB commands used in the procedure are newff, train, and sim. 1) newff creates a feed-forward backpropagation network object and also automatically initializes the network. Syntax:  net = newff (PR, [S1 S2 …SNl], {TF1, TF2, …, TFNl}, BTF, BLF, PF) Description: the function takes the following parameters  PR - Rx2 matrix of min and max values for R input elements.  Si - Number of neurons (size) in the ith layer, i = 1, …, Nl.  Nl - Number of layers.  TFi - Transfer function of the ith layer. Default is 'tansig' for hidden layers and 'purelin' for the output layer. The transfer functions TF{i} can be any differentiable transfer function such as TANSIG, LOGSIG, or PURELIN.  BTF - Backpropagation training function, default = 'traingdx'.  BLF - Backpropagation learning function, default = 'learngdm'.  PF - Performance function, default = 'mse'. It returns an Nl-layer feed-forward backpropagation network. newff uses a random number generator to create the initial values for the network weights. Ex.1: Design an XOR Gate by ANN 31
  • 33. Fig.24 The result XOR Gate ANN 33
  • 34. 2- Graphical user interface (GUI) function (nftool), the Neural Network Tool. Ex.2: Design a binary-to-decimal converter by ANN Fig.25: the binary-to-decimal system 34
  • 35. The truth table for binary-to-decimal conversion. For training the ANN in the MATLAB command window (nftool): the input data = [D C B A], the output data = [Y]. The training steps are on the next slide.

D  C  B  A  Y
0  0  0  0  0
0  0  0  1  1
0  0  1  0  2
0  0  1  1  3
0  1  0  0  4
0  1  0  1  5
0  1  1  0  6
0  1  1  1  7
1  0  0  0  8
1  0  0  1  9

35
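Before training, note that this particular table is realized exactly by a single linear (purelin-style) neuron whose weights are the binary place values, which makes a useful sanity check on the trained network (our observation, not from the slides):

```python
# A linear neuron with weights [8, 4, 2, 1] and bias 0 maps the
# 4-bit input [D, C, B, A] directly to its decimal value Y.
def binary_to_decimal(bits):
    weights = (8, 4, 2, 1)
    return sum(b * w for b, w in zip(bits, weights))
```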
  • 36. Fig.26: the training data for the binary-to-decimal ANN 36
  • 37. Fig.27: the results for the binary-to-decimal ANN 37
  • 38. References 1- Ivan Nunes da Silva et al., Artificial Neural Networks: A Practical Course, Springer International Publishing Switzerland, 2017, Springer Cham, DOI: https://doi.org/10.1007/978-3-319-43162-8. 2- Phil Kim, MATLAB Deep Learning: With Machine Learning, Neural Networks and Artificial Intelligence, Apress, ISBN-10: 1484228456. 3- Daniel Graupe, Principles of Artificial Neural Networks, 3rd ed., World Scientific Publishing Company, ISBN-10: 9814522732. 4- P. J. Braspenning et al., Artificial Neural Networks: An Introduction to ANN Theory and Practice, Springer-Verlag Berlin Heidelberg, 1995, ISBN-10: 3540594884. 5- Howard Demuth, Mark Beale, Neural Network Toolbox User's Guide, www.mathworks.com. 6- MATLAB program help. 38