Unit 6 Application Of AI
Presented By : Tekendra Nath Yogi
Tekendranath@gmail.com
College Of Applied Business And Technology
Biological Neuron
• A nerve cell (neuron) is a special biological cell that processes
information.
• The human brain is estimated to contain about 10^11 neurons, with
roughly 10^15 interconnections between them.
[Figure: structure of a biological neuron]
Biological Neuron
• Working of a Biological Neuron
– As shown in the above diagram, a typical neuron consists of the following
four parts with the help of which we can explain its working:
– Dendrites: They are tree-like branches responsible for receiving
information from the other neurons a neuron is connected to. In a sense, they
act as the ears of the neuron.
– Soma: It is the cell body of the neuron and is responsible for processing
the information received from the dendrites.
– Axon: It is just like a cable through which a neuron sends information.
– Synapses: These are the connections between the axon of one neuron and the
dendrites of other neurons.
Biological Neural Networks
• A neuron is a cell in the brain whose principal function is the
collection, processing, and dissemination of signals.
• A network of such neurons is called a neural network.
• Neural networks are capable of information processing.
• The field is also known as:
– Connectionism,
– Parallel and distributed processing, and
– Neural computing.
What is an Artificial Neural Network?
• An Artificial Neural Network (ANN) is an information processing
paradigm(model) that is inspired by the way biological nervous systems process
information.
• The key elements of this paradigm are:
– Nodes (units): Nodes represent the cells of the neural network.
– Links: Links are directed arrows that show the propagation of information
from one node to another.
– Activation: Activations are the inputs to or outputs from a unit.
– Weight: Each link has a weight associated with it, which determines the
strength of the connection.
– Activation function: The function used to derive a node's output activation
from its input activations is called the activation function.
Why use neural networks?
• Ability to derive meaning from complicated or imprecise data.
• Other reasons for using the ANNs are as follows:
– Adaptive learning: An ability to learn how to do tasks based on the data given
for training or initial experience.
– Self-Organization: An ANN can create its own organization or representation
of the information it receives during learning time.
– Real Time Operation: ANN computations may be carried out in parallel.
– Fault Tolerance via Redundant Information Coding: Partial destruction of a
network leads to a corresponding degradation of performance. However,
some network capabilities may be retained even with major network damage.
Neural networks versus conventional computers
• Parallelism is a characteristic of NNs, because the computations of their
components are largely independent of each other. Neural networks and
conventional algorithmic computers are not in competition but complement each other.
• The main difference between neural networks and conventional computers is
the approach used for problem solving.
• Conventional computers use an algorithmic approach: the computer follows a
set of instructions in order to solve a problem.
• Thus, the specific steps the computer must follow to solve the problem must
be well-defined in advance.
• This restricts the problem-solving ability of conventional computers: the
computer must be told exactly how to solve a problem.
Neural networks versus conventional computers
• In neural networks, by contrast, information is processed in much the same
way as in the human brain.
• A NN consists of a large number of highly interconnected processing
elements (neurons) which work in parallel to solve a specific problem.
• Neural networks learn by example; they cannot be programmed to perform a
specific task the way conventional computers are.
• Moreover, the examples must be selected carefully, otherwise the network
might function incorrectly or waste time.
• This problem arises because there is currently no method to verify whether
the system is faulty or not.
Basic model of ANN
• Processing of ANN depends upon the following three building
blocks:
– Network Architecture(interconnections)
– Adjustments of Weights or Learning rules
– Activation Functions
First artificial neurons: The McCulloch-Pitts model
• The McCulloch-Pitts model is an extremely simple artificial
neuron as shown in figure below:
Contd..
• In the figure, the variables w1, w2, …, wm are called "weights" and x1, x2,
…, xm represent the inputs.
• The net input is calculated as the weighted sum of the inputs (the actual input):
Net Input = x1w1 + x2w2 + x3w3 + … + xmwm = Σ xiwi (summed over i = 1, …, m)
• Then check whether (Net input < Threshold), i.e., apply the activation function
to the Net Input to derive the output:
– If it is, then the output is made zero.
– Otherwise, it is made a one.
Note: The inputs can be either a zero or a one. And the output is a zero or a one.
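To make this concrete, here is a minimal Python sketch of the computation; the weight and threshold values are illustrative assumptions, not values fixed by the model:

```python
# A minimal sketch of a McCulloch-Pitts neuron. The weight and threshold
# values below are illustrative assumptions, not part of the model itself.

def mp_neuron(inputs, weights, threshold):
    """Return 0 if the weighted sum of inputs is below the threshold, else 1."""
    net_input = sum(x * w for x, w in zip(inputs, weights))
    return 0 if net_input < threshold else 1

print(mp_neuron([1, 0, 1], [0.8, 0.8, 0.8], 1.5))  # net = 1.6 -> 1
print(mp_neuron([1, 0, 0], [0.8, 0.8, 0.8], 1.5))  # net = 0.8 -> 0
```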
Activation functions
• It is also known as the transfer function.
• Activation functions typically fall into one of three main categories:
– Linear activation function
• Piecewise, and
• Ramp
– Threshold activation function
• Step, and
• Sign
– Sigmoid activation function
• Binary, and
• Bipolar
Linear Activation
• Also known as the identity activation function. For linear activation functions,
the output activity is proportional to the total weighted sum (Net Input).
– Output (Y) = mx + c, where m and c are constants and x is the net input
• Neurons with the linear function are often used for linear approximation.
Contd..
• Linear activation functions are of two types:
– Piecewise, and
– Ramp
• Ramp linear activation function:
– Also known as the rectifier (or max) function.
– Output (Y) = Max(0, Net input)
Contd…
• Piecewise Linear Activation:
– Choose some xmin and xmax, which define our "range".
– Everything less than this range will be 0, and everything greater than this
range will be 1.
– Anything in between is linearly interpolated.
– Here x is the net input, and
– f(x) is the output, as the sketch below illustrates.
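A minimal Python sketch of both linear-family activations; the range endpoints xmin = −1 and xmax = 1 are arbitrary example values, not values from the slides:

```python
# Sketch of the ramp (rectifier) and piecewise linear activations.
# xmin and xmax below are assumed example values for the "range".

def ramp(x):
    """Ramp: Max(0, net input)."""
    return max(0.0, x)

def piecewise_linear(x, xmin=-1.0, xmax=1.0):
    """0 below the range, 1 above it, linear interpolation in between."""
    if x <= xmin:
        return 0.0
    if x >= xmax:
        return 1.0
    return (x - xmin) / (xmax - xmin)

print(ramp(-0.5), ramp(2.0))                         # 0.0 2.0
print(piecewise_linear(0.0), piecewise_linear(2.0))  # 0.5 1.0
```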
Threshold (Unit) Activation Function
• Also known as hard limit functions. For threshold activation functions, the
output is set at one of two levels, depending on whether the net input is
greater than or less than some threshold value θ.
• Threshold activation functions are of two types:
– Step activation function, and
– Sign activation function.
• Hard limit functions are often used in decision-making neurons for
classification and pattern recognition tasks.
• Step (binary) activation:
– Output (Y) = 1 if Net input ≥ θ; otherwise Output (Y) = 0.
• Sign (bipolar) activation:
– Output (Y) = +1 if Net input ≥ θ; otherwise Output (Y) = −1.
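A minimal Python sketch of both hard limit functions; taking θ = 1 for the step and θ = 0 for the sign function is an assumption for illustration:

```python
# Sketch of the two hard limit activations. The default thresholds are
# illustrative assumptions; the slides only require "some threshold value".

def step(x, theta=1.0):
    """Step (binary): 1 if the net input reaches the threshold, else 0."""
    return 1 if x >= theta else 0

def sign(x, theta=0.0):
    """Sign (bipolar): +1 if the net input reaches the threshold, else -1."""
    return 1 if x >= theta else -1

print(step(0.6), step(1.2))   # 0 1
print(sign(-0.3), sign(0.3))  # -1 1
```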
Sigmoid Activation Function
• For sigmoid activation functions, the output varies
continuously but not linearly as the input changes.
• It is of two types, as follows:
– Binary sigmoid function:
• This activation function squashes its input into the range 0 to 1, so it is
positive in nature. It is always bounded, which means its output cannot be
less than 0 or more than 1. It is also strictly increasing, which means the
larger the input, the higher the output. It can be defined as:
Output (Y) = 1 / (1 + e^(−x))
• Bipolar sigmoid function:
– This activation function squashes its input into the range −1 to 1, so it
can be positive or negative in nature. It is always bounded, which means its
output cannot be less than −1 or more than 1. It is also strictly increasing,
like the binary sigmoid. It can be defined as:
Output (Y) = (1 − e^(−x)) / (1 + e^(−x))
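Both functions can be sketched directly from the definitions above:

```python
import math

# Sketch of the two sigmoid activations defined above.

def binary_sigmoid(x):
    """Squashes the net input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def bipolar_sigmoid(x):
    """Squashes the net input into the range (-1, 1)."""
    return (1.0 - math.exp(-x)) / (1.0 + math.exp(-x))

print(binary_sigmoid(0.0), binary_sigmoid(2.0))    # 0.5  0.8807...
print(bipolar_sigmoid(0.0), bipolar_sigmoid(2.0))  # 0.0  0.7615...
```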
Logic Gates with MP Neurons
• We can use McCulloch-Pitts neurons to implement the basic logic
gates.
• All we need to do is find the appropriate connection weights and
neuron thresholds to produce the right outputs for each set of
inputs.
Realization of AND gate
• Looking at the logic table for A^B, we can see that we
only want the neuron to output a 1 when both inputs are
activated.
Contd..
• To do this, we want the sum of both inputs to be greater than the threshold,
but each input alone must be lower than the threshold.
• Let's use a threshold of 1 (simple and convenient!). So, now we need to
choose the weights according to these constraints:
– E.g., 0.6 and 0.6
• With these weights, individual activation of either input A or B will not
exceed the threshold, while the sum of the two will be 1.2, which exceeds
the threshold and causes the neuron to fire. Here is a diagram:
Contd…
• Here, the letter theta (θ) denotes the threshold.
• A and B are the two inputs. They will each be set to 1 or 0
depending upon the truth of their proposition.
• The red neuron is our decision neuron. If the sum of the
weighted inputs is greater than the threshold, it will
output a 1; otherwise it will output a 0.
• So, to test it by hand, we can try setting A and B to the
different values in the truth table and seeing if the decision
neuron's output matches the A^B column:
Contd..
• If A=0 & B=0 --> 0*0.6 + 0*0.6 = 0. This is not greater
than the threshold of 1, so the output = 0. Good!
• If A=0 & B=1 --> 0*0.6 + 1*0.6 = 0.6. This is not greater
than the threshold, so the output = 0. Good.
• If A=1 & B=0 --> exactly the same as above. Good.
• If A=1 & B=1 --> 1*0.6 + 1*0.6 = 1.2. This exceeds the
threshold, so the output = 1. Good.
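This hand check can be automated with a short Python sketch; the weights (0.6, 0.6) and the threshold of 1 are exactly the values chosen above:

```python
# Checking the AND neuron over all four input cases from the truth table.

def mp_and(a, b, w=0.6, theta=1.0):
    return 1 if a * w + b * w > theta else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mp_and(a, b))  # fires only when a = b = 1
```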
Realization of OR Gate
• The logic table for the OR gate is as shown below:
• In this case, we want the output to be 1 when either or both of
the inputs, A and B, are active, but 0 when both of the inputs
are 0.
Contd..
• This is simple enough. If we make each synaptic weight greater than the
threshold, the neuron will fire whenever there is any activity in either or
both of A and B. This is shown in the figure below: synaptic weights of 1.1
are sufficient to surpass the threshold of 1 whenever their respective input
is active.
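The same check as for AND, with the weights and threshold just described:

```python
# The OR neuron described above: weights of 1.1 each, threshold of 1.

def mp_or(a, b, w=1.1, theta=1.0):
    return 1 if a * w + b * w > theta else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mp_or(a, b))  # fires unless a = b = 0
```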
Realization of NOT Gate
• Logic table for the NOT gate is as shown in table below:
• The NOT operator simply negates the input: when A is true
(value 1), ¬A is false (value 0), and vice versa.
A | ¬A
0 | 1
1 | 0
Contd...
• This is a little tricky. We have to change a 1 to a 0 - this is easy, just
make sure that the input doesn't exceed the threshold.
• However, we also have to change a 0 to a 1 - how can we do this?
• The answer is to think of our decision neuron as a tonic neuron - one
whose natural state is active.
• To make this, all we do is set the threshold lower than 0, so even
when it receives no input, it still exceeds the threshold.
• In fact, we have to set the synapse to a negative number (here we use
−1) and the threshold to some number between that and 0 (we use −0.5).
Contd..
• The decision neuron shown below, with threshold -0.5 and
synapse weight -1, will reverse the input:
A = 1 --> output = 0
A = 0 --> output = 1
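The same sketch confirms the tonic behaviour with the weight and threshold from the slides:

```python
# The tonic NOT neuron: synaptic weight -1, threshold -0.5. With no input
# the net input is 0, which exceeds -0.5, so the neuron fires by default.

def mp_not(a, w=-1.0, theta=-0.5):
    return 1 if a * w > theta else 0

print(mp_not(0))  # 1
print(mp_not(1))  # 0
```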
Realization of XOR Gate
• The XOR (exclusive OR) operator actually put a spanner in the
works of neural network research for a long time, because it is
not possible to create an XOR gate with a single neuron, or
even a single layer of neurons - we need to have two layers.
• The truth table for the XOR gate is as shown below:
Contd..
• Exclusive OR means that we want a truth value of 1 either when A is 1 or
when B is 1 (i.e., A and B have different truth values), but not when A
and B are both 1 or both 0 (i.e., they have the same truth value).
• The truth table above shows this.
• To use a real-world example, in the question "would you like tea or
coffee?", the "or" is actually an exclusive or, because the person is
offering you one or the other, but not both. Contrast this with "would you
like milk or sugar?". In this case you can have milk, or sugar, or both.
Contd..
• The only way to solve this problem is to have several neurons working
together. Start by breaking down the XOR operation into a number of
simpler logical functions:
• A xor B = (A∨B) ∧ ¬(A∧B)
• This line of logic contains three important operations: an OR operator in
the brackets on the left, an AND operator in the brackets on the right, and
another AND operator in the middle.
• We can create a neuron for each of these operations, and stick them
together like this:
Contd..
• The upper of the two red neurons in the first layer has two inputs with synaptic
weights of 0.6 each and a threshold of 1, exactly the same as the AND gate we made
earlier. This is the AND function in the brackets on the right of the formula wrote
earlier.
• Notice that it is connected to the output neuron with a negative synaptic weight (-
2). This accounts for the NOT operator that precedes the brackets on the right hand
side.
• The lower of the two red neurons in the first layer has two synaptic weights of 1.1
and a threshold of 1, just like the OR gate we made earlier. This neuron is doing the
job of the OR operator in the brackets on the left of the formula.
• The output neuron is performing another AND operation - the one in the middle of
the formula. Practically, this output neuron is active whenever one of the inputs, A
or B is on, but it is overpowered by the inhibition of the upper neuron in cases when
both A and B are on. 39Presented By: Tekendra Nath Yogi
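The whole network can be sketched in Python. The hidden weights and the inhibitory weight of −2 come from the description above; the excitatory weight of 1.1 and the output threshold of 1 are assumptions consistent with (but not stated in) the text:

```python
# The two-layer XOR network described above.

def fires(net, theta=1.0):
    return 1 if net > theta else 0

def mp_xor(a, b):
    and_out = fires(0.6 * a + 0.6 * b)          # upper neuron: A AND B
    or_out = fires(1.1 * a + 1.1 * b)           # lower neuron: A OR B
    return fires(1.1 * or_out - 2.0 * and_out)  # output: (A OR B) AND NOT(A AND B)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mp_xor(a, b))  # 1 only when a and b differ
```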
Learning in Neural Networks:
• Learning:
– Learning in neural networks is carried out by adjusting the connection weights
among neurons.
– There is no algorithm that determines how the weights should be assigned in
order to solve specific problems. Hence, the weights are determined by a
learning process.
• Learning may be classified into two categories:
– Supervised Learning
– Unsupervised Learning
1) Supervised Learning:
– In supervised learning, the network is presented with inputs together with the target
(teacher signal) outputs.
– Then, the neural network tries to produce an output as close as possible to the target
output by adjusting the values of internal weights.
– The most common supervised learning method is the “error correction method”.
• Neural networks are trained with this method in order to reduce the error (difference between the
network's output and the desired output) to zero.
Learning in Neural Networks:
2) Unsupervised Learning:
– In unsupervised learning, there is no teacher (target signal/output) from
outside, and the network adjusts its weights in response to the input
patterns only.
– A typical example of unsupervised learning is Hebbian learning.
Network Architecture
1) Feed-forward networks:
– Feed-forward ANNs allow signals to travel one way only; from input to
output.
– There is no feedback (loops) i.e. the output of any layer does not affect
that same layer.
– Feed-forward ANNs tend to be straightforward networks that associate
inputs with outputs, as the sketch below illustrates.
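A minimal sketch of one-way signal flow: each layer is computed only from the layer before it, with no loops back. The weights reuse the XOR-style values from earlier; the step-like activation with threshold 1 is an assumption:

```python
# Feed-forward flow: signals travel one way only, layer by layer.

def layer(inputs, weight_rows, theta=1.0):
    """One output per weight row: thresholded weighted sum of the inputs."""
    return [1 if sum(x * w for x, w in zip(inputs, row)) > theta else 0
            for row in weight_rows]

hidden = layer([1, 0], [[0.6, 0.6], [1.1, 1.1]])  # hidden layer: AND-like, OR-like
output = layer(hidden, [[-2.0, 1.1]])             # output layer; no feedback loops
print(hidden, output)                             # [0, 1] [1]
```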
Types of Feed Forward Neural Network:
a) Single-layer neural networks
A neural network in which all the inputs are connected directly to
the outputs is called a single-layer neural network.
Types of Feed Forward Neural Network:
a) Single-layer Feed Forward neural networks:
Two types:
– Perceptron, and
– ADALINE
Perceptron
• Developed by Frank Rosenblatt using the McCulloch-Pitts
model, the perceptron is the basic operational unit of artificial neural
networks.
• It employs a supervised learning rule and is able to classify data
into two classes.
• Operational characteristics of the perceptron:
– It consists of a single neuron with an arbitrary number of inputs along with
adjustable weights, but the output of the neuron is 1 or -1 depending upon
the input. It also includes a bias, whose input is always fixed at 1.
• Following figure gives a schematic representation of the perceptron.
Perceptron
• The perceptron thus has the following three basic elements:
– Links: It has a set of connection links, each carrying a weight; the bias
link's input is always 1.
– Adder: It adds the inputs after they are multiplied by their respective
weights. It also adds in the bias.
– Activation function: It limits the output of the neuron. The most basic
activation function is a sign function with two possible outputs: it
returns 1 if the input is positive, and -1 for any negative input.
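A minimal perceptron sketch with a sign activation and the bias treated as an extra input fixed at 1. The classical update rule w += lr * (target - output) * x and the learning rate are standard assumptions; the slides only say the perceptron uses a supervised learning rule:

```python
# Perceptron sketch: sign activation, bias as an input fixed at 1.

def predict(x, w):
    net = w[0] + sum(xi * wi for xi, wi in zip(x, w[1:]))  # w[0]: bias weight
    return 1 if net >= 0 else -1

def train(samples, lr=0.1, epochs=10):
    w = [0.0] * (len(samples[0][0]) + 1)
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(x, w)
            w[0] += lr * error                  # bias input is always 1
            for i, xi in enumerate(x):
                w[i + 1] += lr * error * xi
    return w

# Learning the OR function with bipolar targets (-1 = false, 1 = true).
data = [((0, 0), -1), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = train(data)
print([predict(x, w) for x, _ in data])  # [-1, 1, 1, 1]
```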
Adaptive Linear Neuron (ADALINE)
• ADALINE, which stands for Adaptive Linear Neuron, is a
network having a single linear unit.
• It was developed by Widrow and Hoff in 1960. Some important
points about ADALINE are as follows:
– It uses bipolar activation function.
– It uses delta rule for training to minimize the Mean-Squared Error (MSE)
between the actual output and the desired/target output.
– The weights and the bias are adjustable.
Adaptive Linear Neuron (ADALINE)
• Architecture :
– The basic structure of ADALINE is similar to the perceptron, with an extra
feedback loop with the help of which the calculated output is compared
with the desired/target output.
– After comparison, on the basis of the training algorithm, the weights and
bias are updated, as the sketch below illustrates.
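A minimal ADALINE training sketch using the delta (LMS) rule: the error between the linear output and the target, obtained via the feedback loop, drives updates that reduce the mean squared error. The learning rate, epoch count, and AND-style data are illustrative assumptions:

```python
# ADALINE training with the delta (LMS) rule on a single linear unit.

def adaline_train(samples, lr=0.05, epochs=50):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = b + sum(xi * wi for xi, wi in zip(x, w))  # linear unit output
            error = target - y                            # feedback comparison
            b += lr * error                               # delta rule updates
            for i, xi in enumerate(x):
                w[i] += lr * error * xi
    return w, b

data = [((-1, -1), -1), ((-1, 1), -1), ((1, -1), -1), ((1, 1), 1)]
w, b = adaline_train(data)
outputs = [b + sum(xi * wi for xi, wi in zip(x, w)) for x, _ in data]
print([1 if y >= 0 else -1 for y in outputs])  # [-1, -1, -1, 1]
```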
b) Multilayer neural networks
A neural network that contains an input layer, an output layer, and one or
more hidden layers is called a multilayer neural network.
Network Architectures
2) Feedback networks (Recurrent networks):
• Feedback networks can have signals traveling in both directions by
introducing loops in the network.
– Very powerful,
– extremely complicated, and
– dynamic: their 'state' changes continuously until they reach an equilibrium
point.
• Feedback networks are also known as interactive or recurrent networks.
Applications of Neural Network
• Speech recognition
• Optical character recognition
• Face Recognition
• Pronunciation (NETtalk)
• Stock-market prediction
• Navigation of a car
• Signal processing/Communication
• Imaging/Vision
• ….
Natural Language processing
• What is Natural language:
– Natural language refers to the way we, humans,
communicate with each other.
– English, Spanish, French, and Nepali are all examples of
natural languages.
– Natural communication takes place in the form of speech
and text.
What is Natural Language processing
• Natural language processing is a technology which involves converting
spoken or written human language into a form which can be processed by
computers, and vice versa.
• The goal of NLP is to make interactions between computers and humans
feel just like interactions between humans.
Components of NLP
– Because computers operate on artificial languages, they are unable to
understand natural language. This is the problem that NLP solves.
– With NLP, a computer is able to listen to a natural language being
spoken by a person, understand the meaning of it, and then if needed,
respond to it by generating natural language to communicate back to
the person.
• There are two components of NLP, as given below:
– Natural Language Understanding (NLU)
– Natural Language Generation (NLG)
Natural Language Understanding (NLU):
• The most difficult part of NLP is understanding, or providing meaning to the
natural language that the computer received.
• To understand a language, a system should have an understanding of:
– the structure of the language, including what the words are and how they
combine into phrases and sentences,
– the meanings of the words and how they contribute to the meaning of a
sentence, and
– the context within which they are being used.
Natural language Understanding:
• However, developing programs that understand a natural language is a
difficult and challenging task in AI, for the following reasons:
– Natural languages are large: they contain an infinite number of different
sentences. No matter how many sentences a person has heard or
seen, new ones can always be produced.
– There is much ambiguity in natural language:
• Many words have several meanings, such as can, bear, and bank,
and
• sentences have different meanings in different contexts.
– Example: a can of juice. I can do it.
Natural Language Generation (NLG)
• NLG is much simpler to accomplish. NLG translates a
computer's artificial language into text, and can also go a step
further by translating that text into audible speech with
text-to-speech.
NLG v/s NLU
• NLG may be viewed as the opposite of NLU.
• The difference is that:
– in natural language understanding, the system needs to
disambiguate the input sentence to produce a machine
representation of its meaning, while
– in NLG the system needs to make decisions about how to put
a concept into words.
– NLU is harder than NLG.
Steps in Natural Language processing
• There are, in general, five steps: lexical analysis, syntactic analysis
(parsing), semantic analysis, discourse integration, and pragmatic analysis.
Contd..
• Lexical Analysis:
– It involves identifying and analyzing the structure of words.
– Lexicon of a language means the collection of words and phrases in a
language.
– Lexical analysis divides the whole chunk of text into paragraphs,
sentences, and words.
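For illustration, a rough Python sketch of this dividing step; real lexical analyzers handle many more cases than these simple patterns:

```python
import re

# Splitting raw text into sentences, then each sentence into words.
text = "Natural language is ambiguous. I can do it."
sentences = re.split(r"(?<=[.!?])\s+", text.strip())
words = [re.findall(r"[A-Za-z']+", s) for s in sentences]
print(sentences)  # ['Natural language is ambiguous.', 'I can do it.']
print(words)      # [['Natural', 'language', 'is', 'ambiguous'], ['I', 'can', 'do', 'it']]
```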
Contd..
• Syntactic Analysis (Parsing):
– It involves analysis of words in the sentence for grammar and
arranging words in a manner that shows the relationship among the
words.
– A sentence such as “The school goes to boy” is rejected by an English
syntactic analyzer.
Contd…
• Semantic Analysis:
– It draws the exact meaning or the dictionary meaning from the text.
– The text is checked for meaningfulness.
– It is done by mapping syntactic structures onto objects in the task
domain.
– The semantic analyzer disregards sentence such as “hot ice-cream”.
Contd…
• Discourse Integration:
– The meaning of any sentence depends upon the meaning of the
sentence just before it.
– In addition, it also brings in the meaning of the immediately
succeeding sentence.
Contd..
• Pragmatic Analysis:
– During this step, what was said is re-interpreted to determine what was actually meant.
– It involves deriving those aspects of language which require real world
knowledge.
Applications of NLP
• Text-to-Speech & Speech recognition
• Information Retrieval
• Information Extraction
• Document Classification
• Automatic Summarization
• Machine Translation
• Story understanding systems
• Question answering
• Spelling correction
• Caption Generation
Assignment
• Differentiate between connectionist and symbolic AI.
• What is the difference between human intelligence and machine intelligence?
• Explain the McCulloch-Pitts model of a neuron.
• Compare the performance of a computer and that of a biological neural network in terms of speed of
processing, size and complexity, storage, fault tolerance and control mechanism.
• How does an artificial neural network model the brain? Describe two major classes of learning paradigms:
supervised learning and unsupervised (self-organised) learning. What are the features that distinguish these
two paradigms from each other?
• Construct a multilayer perceptron with an input layer of six neurons, a hidden layer of four neurons and an
output layer of two neurons.
• Demonstrate perceptron learning of the binary logic function OR.
• Why is natural language processing required in an AI system? Differentiate between NLU and NLG.
Thank You !
Presented By: Tekendra Nath Yogi 70

More Related Content

What's hot

Natural Language Processing seminar review
Natural Language Processing seminar review Natural Language Processing seminar review
Natural Language Processing seminar review
Jayneel Vora
 
Recurrent Neural Network (RNN) | RNN LSTM Tutorial | Deep Learning Course | S...
Recurrent Neural Network (RNN) | RNN LSTM Tutorial | Deep Learning Course | S...Recurrent Neural Network (RNN) | RNN LSTM Tutorial | Deep Learning Course | S...
Recurrent Neural Network (RNN) | RNN LSTM Tutorial | Deep Learning Course | S...
Simplilearn
 
Game playing in AI
Game playing in AIGame playing in AI
Game playing in AI
Dr. C.V. Suresh Babu
 
Intro To Machine Learning in Python
Intro To Machine Learning in PythonIntro To Machine Learning in Python
Intro To Machine Learning in Python
Russel Mahmud
 
Knowledge representation in AI
Knowledge representation in AIKnowledge representation in AI
Knowledge representation in AI
Vishal Singh
 
Recurrent Neural Networks (RNN) | RNN LSTM | Deep Learning Tutorial | Tensorf...
Recurrent Neural Networks (RNN) | RNN LSTM | Deep Learning Tutorial | Tensorf...Recurrent Neural Networks (RNN) | RNN LSTM | Deep Learning Tutorial | Tensorf...
Recurrent Neural Networks (RNN) | RNN LSTM | Deep Learning Tutorial | Tensorf...
Edureka!
 
Introduction to Transformer Model
Introduction to Transformer ModelIntroduction to Transformer Model
Introduction to Transformer Model
Nuwan Sriyantha Bandara
 
Natural Language Processing
Natural Language ProcessingNatural Language Processing
Natural Language Processing
Mariana Soffer
 
Deep neural networks
Deep neural networksDeep neural networks
Deep neural networks
Si Haem
 
Supervised vs Unsupervised vs Reinforcement Learning | Edureka
Supervised vs Unsupervised vs Reinforcement Learning | EdurekaSupervised vs Unsupervised vs Reinforcement Learning | Edureka
Supervised vs Unsupervised vs Reinforcement Learning | Edureka
Edureka!
 
0 1 knapsack using branch and bound
0 1 knapsack using branch and bound0 1 knapsack using branch and bound
0 1 knapsack using branch and bound
Abhishek Singh
 
Recurrent Neural Networks, LSTM and GRU
Recurrent Neural Networks, LSTM and GRURecurrent Neural Networks, LSTM and GRU
Recurrent Neural Networks, LSTM and GRU
ananth
 
computer Graphics
computer Graphics computer Graphics
computer Graphics
Rozi khan
 
Introduction to Transformers for NLP - Olga Petrova
Introduction to Transformers for NLP - Olga PetrovaIntroduction to Transformers for NLP - Olga Petrova
Introduction to Transformers for NLP - Olga Petrova
Alexey Grigorev
 
Natural language processing (nlp)
Natural language processing (nlp)Natural language processing (nlp)
Natural language processing (nlp)
Kuppusamy P
 
What Is Deep Learning? | Introduction to Deep Learning | Deep Learning Tutori...
What Is Deep Learning? | Introduction to Deep Learning | Deep Learning Tutori...What Is Deep Learning? | Introduction to Deep Learning | Deep Learning Tutori...
What Is Deep Learning? | Introduction to Deep Learning | Deep Learning Tutori...
Simplilearn
 
Neural network
Neural networkNeural network
Neural network
Silicon
 
Natural Language Processing
Natural Language ProcessingNatural Language Processing
Natural Language Processing
VeenaSKumar2
 
Unit2: Agents and Environment
Unit2: Agents and EnvironmentUnit2: Agents and Environment
Unit2: Agents and Environment
Tekendra Nath Yogi
 
Notes on attention mechanism
Notes on attention mechanismNotes on attention mechanism
Notes on attention mechanism
Khang Pham
 

What's hot (20)

Natural Language Processing seminar review
Natural Language Processing seminar review Natural Language Processing seminar review
Natural Language Processing seminar review
 
Recurrent Neural Network (RNN) | RNN LSTM Tutorial | Deep Learning Course | S...
Recurrent Neural Network (RNN) | RNN LSTM Tutorial | Deep Learning Course | S...Recurrent Neural Network (RNN) | RNN LSTM Tutorial | Deep Learning Course | S...
Recurrent Neural Network (RNN) | RNN LSTM Tutorial | Deep Learning Course | S...
 
Game playing in AI
Game playing in AIGame playing in AI
Game playing in AI
 
Intro To Machine Learning in Python
Intro To Machine Learning in PythonIntro To Machine Learning in Python
Intro To Machine Learning in Python
 
Knowledge representation in AI
Knowledge representation in AIKnowledge representation in AI
Knowledge representation in AI
 
Recurrent Neural Networks (RNN) | RNN LSTM | Deep Learning Tutorial | Tensorf...
Recurrent Neural Networks (RNN) | RNN LSTM | Deep Learning Tutorial | Tensorf...Recurrent Neural Networks (RNN) | RNN LSTM | Deep Learning Tutorial | Tensorf...
Recurrent Neural Networks (RNN) | RNN LSTM | Deep Learning Tutorial | Tensorf...
 
Introduction to Transformer Model
Introduction to Transformer ModelIntroduction to Transformer Model
Introduction to Transformer Model
 
Natural Language Processing
Natural Language ProcessingNatural Language Processing
Natural Language Processing
 
Deep neural networks
Deep neural networksDeep neural networks
Deep neural networks
 
Supervised vs Unsupervised vs Reinforcement Learning | Edureka
Supervised vs Unsupervised vs Reinforcement Learning | EdurekaSupervised vs Unsupervised vs Reinforcement Learning | Edureka
Supervised vs Unsupervised vs Reinforcement Learning | Edureka
 
0 1 knapsack using branch and bound
0 1 knapsack using branch and bound0 1 knapsack using branch and bound
0 1 knapsack using branch and bound
 
Recurrent Neural Networks, LSTM and GRU
Recurrent Neural Networks, LSTM and GRURecurrent Neural Networks, LSTM and GRU
Recurrent Neural Networks, LSTM and GRU
 
computer Graphics
computer Graphics computer Graphics
computer Graphics
 
Introduction to Transformers for NLP - Olga Petrova
Introduction to Transformers for NLP - Olga PetrovaIntroduction to Transformers for NLP - Olga Petrova
Introduction to Transformers for NLP - Olga Petrova
 
Natural language processing (nlp)
Natural language processing (nlp)Natural language processing (nlp)
Natural language processing (nlp)
 
What Is Deep Learning? | Introduction to Deep Learning | Deep Learning Tutori...
What Is Deep Learning? | Introduction to Deep Learning | Deep Learning Tutori...What Is Deep Learning? | Introduction to Deep Learning | Deep Learning Tutori...
What Is Deep Learning? | Introduction to Deep Learning | Deep Learning Tutori...
 
Neural network
Neural networkNeural network
Neural network
 
Natural Language Processing
Natural Language ProcessingNatural Language Processing
Natural Language Processing
 
Unit2: Agents and Environment
Unit2: Agents and EnvironmentUnit2: Agents and Environment
Unit2: Agents and Environment
 
Notes on attention mechanism
Notes on attention mechanismNotes on attention mechanism
Notes on attention mechanism
 

Similar to Unit 6: Application of AI

UNIT 5-ANN.ppt
UNIT 5-ANN.pptUNIT 5-ANN.ppt
UNIT 5-ANN.ppt
Sivam Chinna
 
Neural-Networks.ppt
Neural-Networks.pptNeural-Networks.ppt
Neural-Networks.ppt
RINUSATHYAN
 
Deep learning
Deep learningDeep learning
Deep learning
Kuppusamy P
 
2011 0480.neural-networks
2011 0480.neural-networks2011 0480.neural-networks
2011 0480.neural-networks
Parneet Kaur
 
Neural
NeuralNeural
Neural
Vaibhav Shah
 
Multilayer Perceptron Neural Network MLP
Multilayer Perceptron Neural Network MLPMultilayer Perceptron Neural Network MLP
Multilayer Perceptron Neural Network MLP
Abdullah al Mamun
 
ANN.pptx
ANN.pptxANN.pptx
ANN.pptx
AROCKIAJAYAIECW
 
Machine learning by using python lesson 2 Neural Networks By Professor Lili S...
Machine learning by using python lesson 2 Neural Networks By Professor Lili S...Machine learning by using python lesson 2 Neural Networks By Professor Lili S...
Machine learning by using python lesson 2 Neural Networks By Professor Lili S...
Professor Lili Saghafi
 
Acem neuralnetworks
Acem neuralnetworksAcem neuralnetworks
Acem neuralnetworks
Aastha Kohli
 
Artificial Neural Network_VCW (1).pptx
Artificial Neural Network_VCW (1).pptxArtificial Neural Network_VCW (1).pptx
Artificial Neural Network_VCW (1).pptx
pratik610182
 
Unit 2 ml.pptx
Unit 2 ml.pptxUnit 2 ml.pptx
Unit 2 ml.pptx
PradeeshSAI
 
Activation function
Activation functionActivation function
Activation function
Astha Jain
 
Neural Network_basic_Reza_Lecture_3.pptx
Neural Network_basic_Reza_Lecture_3.pptxNeural Network_basic_Reza_Lecture_3.pptx
Neural Network_basic_Reza_Lecture_3.pptx
shamimreza94
 
Artificial neural networks
Artificial neural networksArtificial neural networks
Artificial neural networks
arjitkantgupta
 
Activation_function.pptx
Activation_function.pptxActivation_function.pptx
Activation_function.pptx
Mohamed Essam
 
08 neural networks
08 neural networks08 neural networks
08 neural networks
ankit_ppt
 
BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...
BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...
BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...
DurgadeviParamasivam
 
Artificial Neural Networks for NIU session 2016 17
Artificial Neural Networks for NIU session 2016 17 Artificial Neural Networks for NIU session 2016 17
Artificial Neural Networks for NIU session 2016 17
Prof. Neeta Awasthy
 
Soft Computing-173101
Soft Computing-173101Soft Computing-173101
Soft Computing-173101
AMIT KUMAR
 
Understanding Deep Learning & Parameter Tuning with MXnet, H2o Package in R
Understanding Deep Learning & Parameter Tuning with MXnet, H2o Package in RUnderstanding Deep Learning & Parameter Tuning with MXnet, H2o Package in R
Understanding Deep Learning & Parameter Tuning with MXnet, H2o Package in R
Manish Saraswat
 

Similar to Unit 6: Application of AI (20)

UNIT 5-ANN.ppt
UNIT 5-ANN.pptUNIT 5-ANN.ppt
UNIT 5-ANN.ppt
 
Neural-Networks.ppt
Neural-Networks.pptNeural-Networks.ppt
Neural-Networks.ppt
 
Deep learning
Deep learningDeep learning
Deep learning
 
2011 0480.neural-networks
2011 0480.neural-networks2011 0480.neural-networks
2011 0480.neural-networks
 
Neural
NeuralNeural
Neural
 
Multilayer Perceptron Neural Network MLP
Multilayer Perceptron Neural Network MLPMultilayer Perceptron Neural Network MLP
Multilayer Perceptron Neural Network MLP
 
ANN.pptx
ANN.pptxANN.pptx
ANN.pptx
 
Machine learning by using python lesson 2 Neural Networks By Professor Lili S...
Machine learning by using python lesson 2 Neural Networks By Professor Lili S...Machine learning by using python lesson 2 Neural Networks By Professor Lili S...
Machine learning by using python lesson 2 Neural Networks By Professor Lili S...
 
Acem neuralnetworks
Acem neuralnetworksAcem neuralnetworks
Acem neuralnetworks
 
Artificial Neural Network_VCW (1).pptx
Artificial Neural Network_VCW (1).pptxArtificial Neural Network_VCW (1).pptx
Artificial Neural Network_VCW (1).pptx
 
Unit 2 ml.pptx
Unit 2 ml.pptxUnit 2 ml.pptx
Unit 2 ml.pptx
 
Activation function
Activation functionActivation function
Activation function
 
Neural Network_basic_Reza_Lecture_3.pptx
Neural Network_basic_Reza_Lecture_3.pptxNeural Network_basic_Reza_Lecture_3.pptx
Neural Network_basic_Reza_Lecture_3.pptx
 
Artificial neural networks
Artificial neural networksArtificial neural networks
Artificial neural networks
 
Activation_function.pptx
Activation_function.pptxActivation_function.pptx
Activation_function.pptx
 
08 neural networks
08 neural networks08 neural networks
08 neural networks
 
BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...
BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...
BACKPROPOGATION ALGO.pdfLECTURE NOTES WITH SOLVED EXAMPLE AND FEED FORWARD NE...
 
Artificial Neural Networks for NIU session 2016 17
Artificial Neural Networks for NIU session 2016 17 Artificial Neural Networks for NIU session 2016 17
Artificial Neural Networks for NIU session 2016 17
 
Soft Computing-173101
Soft Computing-173101Soft Computing-173101
Soft Computing-173101
 
Understanding Deep Learning & Parameter Tuning with MXnet, H2o Package in R
Understanding Deep Learning & Parameter Tuning with MXnet, H2o Package in RUnderstanding Deep Learning & Parameter Tuning with MXnet, H2o Package in R
Understanding Deep Learning & Parameter Tuning with MXnet, H2o Package in R
 

More from Tekendra Nath Yogi

Unit9:Expert System
Unit9:Expert SystemUnit9:Expert System
Unit9:Expert System
Tekendra Nath Yogi
 
Unit7: Production System
Unit7: Production SystemUnit7: Production System
Unit7: Production System
Tekendra Nath Yogi
 
Unit8: Uncertainty in AI
Unit8: Uncertainty in AIUnit8: Uncertainty in AI
Unit8: Uncertainty in AI
Tekendra Nath Yogi
 
Unit5: Learning
Unit5: LearningUnit5: Learning
Unit5: Learning
Tekendra Nath Yogi
 
Unit4: Knowledge Representation
Unit4: Knowledge RepresentationUnit4: Knowledge Representation
Unit4: Knowledge Representation
Tekendra Nath Yogi
 
Unit10
Unit10Unit10
Unit9
Unit9Unit9
Unit8
Unit8Unit8
Unit7
Unit7Unit7
BIM Data Mining Unit5 by Tekendra Nath Yogi
 BIM Data Mining Unit5 by Tekendra Nath Yogi BIM Data Mining Unit5 by Tekendra Nath Yogi
BIM Data Mining Unit5 by Tekendra Nath Yogi
Tekendra Nath Yogi
 
BIM Data Mining Unit4 by Tekendra Nath Yogi
 BIM Data Mining Unit4 by Tekendra Nath Yogi BIM Data Mining Unit4 by Tekendra Nath Yogi
BIM Data Mining Unit4 by Tekendra Nath Yogi
Tekendra Nath Yogi
 
BIM Data Mining Unit3 by Tekendra Nath Yogi
 BIM Data Mining Unit3 by Tekendra Nath Yogi BIM Data Mining Unit3 by Tekendra Nath Yogi
BIM Data Mining Unit3 by Tekendra Nath Yogi
Tekendra Nath Yogi
 
BIM Data Mining Unit2 by Tekendra Nath Yogi
 BIM Data Mining Unit2 by Tekendra Nath Yogi BIM Data Mining Unit2 by Tekendra Nath Yogi
BIM Data Mining Unit2 by Tekendra Nath Yogi
Tekendra Nath Yogi
 
BIM Data Mining Unit1 by Tekendra Nath Yogi
 BIM Data Mining Unit1 by Tekendra Nath Yogi BIM Data Mining Unit1 by Tekendra Nath Yogi
BIM Data Mining Unit1 by Tekendra Nath Yogi
Tekendra Nath Yogi
 
Unit6
Unit6Unit6
B. SC CSIT Computer Graphics Unit 5 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 5 By Tekendra Nath YogiB. SC CSIT Computer Graphics Unit 5 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 5 By Tekendra Nath Yogi
Tekendra Nath Yogi
 
B. SC CSIT Computer Graphics Lab By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Lab By Tekendra Nath YogiB. SC CSIT Computer Graphics Lab By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Lab By Tekendra Nath Yogi
Tekendra Nath Yogi
 
B. SC CSIT Computer Graphics Unit 4 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 4 By Tekendra Nath YogiB. SC CSIT Computer Graphics Unit 4 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 4 By Tekendra Nath Yogi
Tekendra Nath Yogi
 
B. SC CSIT Computer Graphics Unit 3 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 3 By Tekendra Nath YogiB. SC CSIT Computer Graphics Unit 3 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 3 By Tekendra Nath Yogi
Tekendra Nath Yogi
 
B. SC CSIT Computer Graphics Unit 2 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 2 By Tekendra Nath YogiB. SC CSIT Computer Graphics Unit 2 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 2 By Tekendra Nath Yogi
Tekendra Nath Yogi
 

More from Tekendra Nath Yogi (20)

Unit9:Expert System
Unit9:Expert SystemUnit9:Expert System
Unit9:Expert System
 
Unit7: Production System
Unit7: Production SystemUnit7: Production System
Unit7: Production System
 
Unit8: Uncertainty in AI
Unit8: Uncertainty in AIUnit8: Uncertainty in AI
Unit8: Uncertainty in AI
 
Unit5: Learning
Unit5: LearningUnit5: Learning
Unit5: Learning
 
Unit4: Knowledge Representation
Unit4: Knowledge RepresentationUnit4: Knowledge Representation
Unit4: Knowledge Representation
 
Unit10
Unit10Unit10
Unit10
 
Unit9
Unit9Unit9
Unit9
 
Unit8
Unit8Unit8
Unit8
 
Unit7
Unit7Unit7
Unit7
 
BIM Data Mining Unit5 by Tekendra Nath Yogi
 BIM Data Mining Unit5 by Tekendra Nath Yogi BIM Data Mining Unit5 by Tekendra Nath Yogi
BIM Data Mining Unit5 by Tekendra Nath Yogi
 
BIM Data Mining Unit4 by Tekendra Nath Yogi
 BIM Data Mining Unit4 by Tekendra Nath Yogi BIM Data Mining Unit4 by Tekendra Nath Yogi
BIM Data Mining Unit4 by Tekendra Nath Yogi
 
BIM Data Mining Unit3 by Tekendra Nath Yogi
 BIM Data Mining Unit3 by Tekendra Nath Yogi BIM Data Mining Unit3 by Tekendra Nath Yogi
BIM Data Mining Unit3 by Tekendra Nath Yogi
 
BIM Data Mining Unit2 by Tekendra Nath Yogi
 BIM Data Mining Unit2 by Tekendra Nath Yogi BIM Data Mining Unit2 by Tekendra Nath Yogi
BIM Data Mining Unit2 by Tekendra Nath Yogi
 
BIM Data Mining Unit1 by Tekendra Nath Yogi
 BIM Data Mining Unit1 by Tekendra Nath Yogi BIM Data Mining Unit1 by Tekendra Nath Yogi
BIM Data Mining Unit1 by Tekendra Nath Yogi
 
Unit6
Unit6Unit6
Unit6
 
B. SC CSIT Computer Graphics Unit 5 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 5 By Tekendra Nath YogiB. SC CSIT Computer Graphics Unit 5 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 5 By Tekendra Nath Yogi
 
B. SC CSIT Computer Graphics Lab By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Lab By Tekendra Nath YogiB. SC CSIT Computer Graphics Lab By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Lab By Tekendra Nath Yogi
 
B. SC CSIT Computer Graphics Unit 4 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 4 By Tekendra Nath YogiB. SC CSIT Computer Graphics Unit 4 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 4 By Tekendra Nath Yogi
 
B. SC CSIT Computer Graphics Unit 3 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 3 By Tekendra Nath YogiB. SC CSIT Computer Graphics Unit 3 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 3 By Tekendra Nath Yogi
 
B. SC CSIT Computer Graphics Unit 2 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 2 By Tekendra Nath YogiB. SC CSIT Computer Graphics Unit 2 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 2 By Tekendra Nath Yogi
 

Recently uploaded

Authoring a personal GPT for your research and practice: How we created the Q...
Authoring a personal GPT for your research and practice: How we created the Q...Authoring a personal GPT for your research and practice: How we created the Q...
Authoring a personal GPT for your research and practice: How we created the Q...
Leonel Morgado
 
8.Isolation of pure cultures and preservation of cultures.pdf
8.Isolation of pure cultures and preservation of cultures.pdf8.Isolation of pure cultures and preservation of cultures.pdf
8.Isolation of pure cultures and preservation of cultures.pdf
by6843629
 
Phenomics assisted breeding in crop improvement
Phenomics assisted breeding in crop improvementPhenomics assisted breeding in crop improvement
Phenomics assisted breeding in crop improvement
IshaGoswami9
 
molar-distalization in orthodontics-seminar.pptx
molar-distalization in orthodontics-seminar.pptxmolar-distalization in orthodontics-seminar.pptx
molar-distalization in orthodontics-seminar.pptx
Anagha Prasad
 
Deep Software Variability and Frictionless Reproducibility
Deep Software Variability and Frictionless ReproducibilityDeep Software Variability and Frictionless Reproducibility
Deep Software Variability and Frictionless Reproducibility
University of Rennes, INSA Rennes, Inria/IRISA, CNRS
 
Topic: SICKLE CELL DISEASE IN CHILDREN-3.pdf
Topic: SICKLE CELL DISEASE IN CHILDREN-3.pdfTopic: SICKLE CELL DISEASE IN CHILDREN-3.pdf
Topic: SICKLE CELL DISEASE IN CHILDREN-3.pdf
TinyAnderson
 
Basics of crystallography, crystal systems, classes and different forms
Basics of crystallography, crystal systems, classes and different formsBasics of crystallography, crystal systems, classes and different forms
Basics of crystallography, crystal systems, classes and different forms
MaheshaNanjegowda
 
Oedema_types_causes_pathophysiology.pptx
Oedema_types_causes_pathophysiology.pptxOedema_types_causes_pathophysiology.pptx
Oedema_types_causes_pathophysiology.pptx
muralinath2
 
Medical Orthopedic PowerPoint Templates.pptx
Medical Orthopedic PowerPoint Templates.pptxMedical Orthopedic PowerPoint Templates.pptx
Medical Orthopedic PowerPoint Templates.pptx
terusbelajar5
 
Bob Reedy - Nitrate in Texas Groundwater.pdf
Bob Reedy - Nitrate in Texas Groundwater.pdfBob Reedy - Nitrate in Texas Groundwater.pdf
Bob Reedy - Nitrate in Texas Groundwater.pdf
Texas Alliance of Groundwater Districts
 
aziz sancar nobel prize winner: from mardin to nobel
aziz sancar nobel prize winner: from mardin to nobelaziz sancar nobel prize winner: from mardin to nobel
aziz sancar nobel prize winner: from mardin to nobel
İsa Badur
 
Immersive Learning That Works: Research Grounding and Paths Forward
Immersive Learning That Works: Research Grounding and Paths ForwardImmersive Learning That Works: Research Grounding and Paths Forward
Immersive Learning That Works: Research Grounding and Paths Forward
Leonel Morgado
 
Randomised Optimisation Algorithms in DAPHNE
Randomised Optimisation Algorithms in DAPHNERandomised Optimisation Algorithms in DAPHNE
Randomised Optimisation Algorithms in DAPHNE
University of Maribor
 
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige...
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige...Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige...
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige...
University of Maribor
 
Thornton ESPP slides UK WW Network 4_6_24.pdf
Thornton ESPP slides UK WW Network 4_6_24.pdfThornton ESPP slides UK WW Network 4_6_24.pdf
Thornton ESPP slides UK WW Network 4_6_24.pdf
European Sustainable Phosphorus Platform
 
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...
Travis Hills MN
 
20240520 Planning a Circuit Simulator in JavaScript.pptx
20240520 Planning a Circuit Simulator in JavaScript.pptx20240520 Planning a Circuit Simulator in JavaScript.pptx
20240520 Planning a Circuit Simulator in JavaScript.pptx
Sharon Liu
 
bordetella pertussis.................................ppt
bordetella pertussis.................................pptbordetella pertussis.................................ppt
bordetella pertussis.................................ppt
kejapriya1
 
Cytokines and their role in immune regulation.pptx
Cytokines and their role in immune regulation.pptxCytokines and their role in immune regulation.pptx
Cytokines and their role in immune regulation.pptx
Hitesh Sikarwar
 
如何办理(uvic毕业证书)维多利亚大学毕业证本科学位证书原版一模一样
如何办理(uvic毕业证书)维多利亚大学毕业证本科学位证书原版一模一样如何办理(uvic毕业证书)维多利亚大学毕业证本科学位证书原版一模一样
如何办理(uvic毕业证书)维多利亚大学毕业证本科学位证书原版一模一样
yqqaatn0
 

Recently uploaded (20)

Authoring a personal GPT for your research and practice: How we created the Q...
Authoring a personal GPT for your research and practice: How we created the Q...Authoring a personal GPT for your research and practice: How we created the Q...
Authoring a personal GPT for your research and practice: How we created the Q...
 
8.Isolation of pure cultures and preservation of cultures.pdf
8.Isolation of pure cultures and preservation of cultures.pdf8.Isolation of pure cultures and preservation of cultures.pdf
8.Isolation of pure cultures and preservation of cultures.pdf
 
Phenomics assisted breeding in crop improvement
Phenomics assisted breeding in crop improvementPhenomics assisted breeding in crop improvement
Phenomics assisted breeding in crop improvement
 
molar-distalization in orthodontics-seminar.pptx
molar-distalization in orthodontics-seminar.pptxmolar-distalization in orthodontics-seminar.pptx
molar-distalization in orthodontics-seminar.pptx
 
Deep Software Variability and Frictionless Reproducibility
Deep Software Variability and Frictionless ReproducibilityDeep Software Variability and Frictionless Reproducibility
Deep Software Variability and Frictionless Reproducibility
 
Topic: SICKLE CELL DISEASE IN CHILDREN-3.pdf
Topic: SICKLE CELL DISEASE IN CHILDREN-3.pdfTopic: SICKLE CELL DISEASE IN CHILDREN-3.pdf
Topic: SICKLE CELL DISEASE IN CHILDREN-3.pdf
 
Basics of crystallography, crystal systems, classes and different forms
Basics of crystallography, crystal systems, classes and different formsBasics of crystallography, crystal systems, classes and different forms
Basics of crystallography, crystal systems, classes and different forms
 
Oedema_types_causes_pathophysiology.pptx
Oedema_types_causes_pathophysiology.pptxOedema_types_causes_pathophysiology.pptx
Oedema_types_causes_pathophysiology.pptx
 
Medical Orthopedic PowerPoint Templates.pptx
Medical Orthopedic PowerPoint Templates.pptxMedical Orthopedic PowerPoint Templates.pptx
Medical Orthopedic PowerPoint Templates.pptx
 
Bob Reedy - Nitrate in Texas Groundwater.pdf
Bob Reedy - Nitrate in Texas Groundwater.pdfBob Reedy - Nitrate in Texas Groundwater.pdf
Bob Reedy - Nitrate in Texas Groundwater.pdf
 
aziz sancar nobel prize winner: from mardin to nobel
aziz sancar nobel prize winner: from mardin to nobelaziz sancar nobel prize winner: from mardin to nobel
aziz sancar nobel prize winner: from mardin to nobel
 
Immersive Learning That Works: Research Grounding and Paths Forward
Immersive Learning That Works: Research Grounding and Paths ForwardImmersive Learning That Works: Research Grounding and Paths Forward
Immersive Learning That Works: Research Grounding and Paths Forward
 
Randomised Optimisation Algorithms in DAPHNE
Randomised Optimisation Algorithms in DAPHNERandomised Optimisation Algorithms in DAPHNE
Randomised Optimisation Algorithms in DAPHNE
 
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige...
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige...Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige...
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige...
 
Thornton ESPP slides UK WW Network 4_6_24.pdf
Thornton ESPP slides UK WW Network 4_6_24.pdfThornton ESPP slides UK WW Network 4_6_24.pdf
Thornton ESPP slides UK WW Network 4_6_24.pdf
 
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...
 
20240520 Planning a Circuit Simulator in JavaScript.pptx
20240520 Planning a Circuit Simulator in JavaScript.pptx20240520 Planning a Circuit Simulator in JavaScript.pptx
20240520 Planning a Circuit Simulator in JavaScript.pptx
 
bordetella pertussis.................................ppt
bordetella pertussis.................................pptbordetella pertussis.................................ppt
bordetella pertussis.................................ppt
 
Cytokines and their role in immune regulation.pptx
Cytokines and their role in immune regulation.pptxCytokines and their role in immune regulation.pptx
Cytokines and their role in immune regulation.pptx
 
如何办理(uvic毕业证书)维多利亚大学毕业证本科学位证书原版一模一样
如何办理(uvic毕业证书)维多利亚大学毕业证本科学位证书原版一模一样如何办理(uvic毕业证书)维多利亚大学毕业证本科学位证书原版一模一样
如何办理(uvic毕业证书)维多利亚大学毕业证本科学位证书原版一模一样
 

Unit 6: Application of AI

  • 1. Unit 6 Application Of AI Presented By : Tekendra Nath Yogi Tekendranath@gmail.com College Of Applied Business And Technology
  • 2. Biological Neuron • A nerve cell (neuron) is a special biological cell that processes information. • According to an estimation, there are huge number of neurons, approximately 1011 with numerous interconnections, approximately 1015. 2Presented By: Tekendra Nath Yogi
  • 3. Biological Neuron 3Presented By: Tekendra Nath Yogi
  • 4. Biological Neuron • Working of a Biological Neuron – As shown in the above diagram, a typical neuron consists of the following four parts with the help of which we can explain its working: – Dendrites: They are tree-like branches, responsible for receiving the information from other neurons it is connected to. In other sense, we can say that they are like the ears of neuron. – Soma: It is the cell body of the neuron and is responsible for processing of information, they have received from dendrites. – Axon: It is just like a cable through which neurons send the information. – Synapses: It is the connection between the axon and other neuron dendrites. 4Presented By: Tekendra Nath Yogi
  • 5. Biological Neural Networks • Neuron is a cell in brain whose principle function is the collection, Processing, and dissemination of signals. • Networks of such neurons is called neural network . • Neural network is capable of information processing. • It is also known as: – Connectionism – Parallel and distributed processing and – neural computing. 5Presented By: Tekendra Nath Yogi
  • 6. What is a Artificial Neural Network? • An Artificial Neural Network (ANN) is an information processing paradigm(model) that is inspired by the way biological nervous systems process information. • The key elements of this paradigm are: – Nodes(units): Nodes represent a cell of neural network. – Links: Links are directed arrows that show propagation of information from one node to another node. – Activation: Activations are inputs to or outputs from a unit. – Weight: Each link has weight associated with it which determines strength of the connection. – Activation function: A function which is used to derive output activation from the input activations to a given node is called activation function. 6Presented By: Tekendra Nath Yogi
  • 8. Why use neural networks? • Ability to derive meaning from complicated or imprecise data. • Other reasons for using the ANNs are as follows: – Adaptive learning: An ability to learn how to do tasks based on the data given for training or initial experience. – Self-Organization: An ANN can create its own organization or representation of the information it receives during learning time. – Real Time Operation: ANN computations may be carried out in parallel – Fault Tolerance via Redundant Information Coding: Partial destruction of a network leads to the corresponding degradation of performance. However, some network capabilities may be retained even with major network damage 8Presented By: Tekendra Nath Yogi
  • 9. Neural networks versus conventional computers • Parallelism is NNs characteristics, because the computations of its components are largely independent of each other. Neural networks and conventional algorithmic computers are not in competition but complement each other. • Different approach used to problem solving is the main difference between neural networks and conventional computers. • Conventional computers use an algorithmic approach, for instance, the conventional computer follows a set of instructions in order to solve a problem. • Thus, the specific steps must be well-defined so that the computer can follow to solve the problem. • This rule restricts the problem solving ability of conventional computers. That means the computer must be given exactly how to solve a problem. 9Presented By: Tekendra Nath Yogi
• 10. Neural networks versus conventional computers • Neural networks, by contrast, process information in much the same way as the human brain does. • An NN consists of a large number of highly interconnected processing elements (neurons) that work in parallel to solve a specific problem. • Neural networks learn from examples; they cannot be programmed to perform a specific task the way conventional computers are. • Moreover, the examples must be selected carefully, otherwise the network may function incorrectly or waste training time. • This is a real concern because there is currently no general method for verifying whether a trained network is faulty.
• 11. Basic model of ANN • The processing of an ANN depends on the following three building blocks: – Network architecture (interconnections) – Adjustment of weights (learning rules) – Activation functions
• 12. First artificial neurons: The McCulloch-Pitts model • The McCulloch-Pitts model is an extremely simple artificial neuron, as shown in the figure below:
• 13. Contd.. • In the figure, the variables w1, w2, ..., wm are called "weights" and x1, x2, ..., xm represent the inputs. • The net input is calculated as the weighted sum of the inputs: Net input = x1w1 + x2w2 + ... + xmwm = Σ (i = 1..m) xiwi • Then the activation function is applied to the net input to derive the output: if (net input < threshold), the output is made zero; otherwise, it is made one. Note: The inputs are each either zero or one, and the output is zero or one.
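The behaviour of a single McCulloch-Pitts unit is easy to write out directly. The following is a minimal Python sketch of the rule described above; the function name and the example weights and threshold are illustrative, not taken from the slides.

```python
# A minimal sketch of one McCulloch-Pitts unit (names are illustrative).
def mp_neuron(inputs, weights, threshold):
    """Output 0 if the weighted sum is below the threshold, else 1."""
    net_input = sum(x * w for x, w in zip(inputs, weights))
    return 0 if net_input < threshold else 1

# Three binary inputs, unit weights, and a threshold of 2:
print(mp_neuron([1, 1, 0], [1, 1, 1], threshold=2))  # -> 1
print(mp_neuron([1, 0, 0], [1, 1, 1], threshold=2))  # -> 0
```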
• 14. Activation functions • An activation function is also known as a transfer function. • Activation functions typically fall into one of three main categories: – Linear activation functions • Piecewise, and • Ramp – Threshold activation functions • Step, and • Sign – Sigmoid activation functions • Binary, and • Bipolar
• 15. Linear activation • Also known as the identity activation function. For linear activation functions, the output activity is proportional to the total weighted sum (net input): – Output (Y) = mx + c, where m and c are constants. • Neurons with a linear activation function are often used for linear approximation.
• 16. Contd.. • Linear activation functions are of two types: – Piecewise, and – Ramp • Ramp linear activation function: – Also known as the rectifier (max) function. – Output (Y) = max(0, net input)
• 17. Contd… • Piecewise linear activation: – Choose some xmin and xmax, which define our "range". – Everything less than this range maps to 0, and everything greater than this range maps to 1. – Anything in between is linearly interpolated. – Here x is the net input, and f(x) is the output. Both linear variants are sketched in code below.
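A compact way to see the two linear variants is to write them down as functions. This is an illustrative Python sketch; the function names and the default range of -1 to 1 are assumptions for the example.

```python
# Illustrative sketches of the two linear activations described above.
def ramp(net_input):
    """Ramp (rectifier): max(0, net input)."""
    return max(0.0, net_input)

def piecewise_linear(net_input, x_min=-1.0, x_max=1.0):
    """0 below x_min, 1 above x_max, linear interpolation in between."""
    if net_input <= x_min:
        return 0.0
    if net_input >= x_max:
        return 1.0
    return (net_input - x_min) / (x_max - x_min)

print(ramp(-0.5), ramp(0.8))                         # -> 0.0 0.8
print(piecewise_linear(0.0), piecewise_linear(2.0))  # -> 0.5 1.0
```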
• 18. Threshold (unit) activation function • Also known as hard-limit functions. For threshold activation functions, the output is set to one of two levels, depending on whether the net input is greater than or less than some threshold value. • Threshold activation functions are of two types: – Step activation function, and – Sign activation function. • Hard-limit functions are often used in decision-making neurons for classification and pattern-recognition tasks.
• 19. Step (binary) activation • Output (y) = 1 if net input ≥ θ; otherwise y = 0.
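The two hard-limit variants take only a few lines each. A minimal sketch follows; note that the convention for sign at exactly zero varies between texts, and here zero is mapped to -1 as an assumption.

```python
# Sketches of the two hard-limit (threshold) activations.
def step(net_input, theta=0.0):
    """Step (binary): output 1 if net input >= theta, else 0."""
    return 1 if net_input >= theta else 0

def sign(net_input):
    """Sign (bipolar): output +1 for positive net input, -1 otherwise."""
    return 1 if net_input > 0 else -1

print(step(0.4, theta=1.0), step(1.2, theta=1.0))  # -> 0 1
print(sign(-0.3), sign(0.3))                       # -> -1 1
```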
• 21. Sigmoid activation function • For sigmoid activation functions, the output varies continuously, but not linearly, as the input changes.
• 22. Sigmoid activation function • It is of two types, as follows: – Binary sigmoid function: This activation function squashes the input into the range between 0 and 1, so it is positive in nature. It is always bounded: its output cannot be less than 0 or more than 1. It is also strictly increasing: the larger the input, the larger the output. It can be defined as: Output (Y) = f(x) = 1 / (1 + e^(-x))
• 23. Sigmoid activation function • Bipolar sigmoid function: – This activation function squashes the input into the range between -1 and 1, so its output can be positive or negative. It is always bounded: its output cannot be less than -1 or more than 1. Like the binary sigmoid, it is strictly increasing. It can be defined as: Output (Y) = f(x) = (1 - e^(-x)) / (1 + e^(-x))
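Both sigmoids can be checked numerically from the definitions above. A short Python sketch (the steepness parameter is taken as 1, the simplest common choice):

```python
import math

def binary_sigmoid(x):
    """Binary sigmoid: squashes x into the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def bipolar_sigmoid(x):
    """Bipolar sigmoid: squashes x into (-1, 1); equal to tanh(x/2)."""
    return (1.0 - math.exp(-x)) / (1.0 + math.exp(-x))

print(binary_sigmoid(0.0))   # -> 0.5 (midpoint of the 0..1 range)
print(bipolar_sigmoid(0.0))  # -> 0.0 (midpoint of the -1..1 range)
```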
  • 26. Logic Gates with MP Neurons • We can use McCulloch-Pitts neurons to implement the basic logic gates. • All we need to do is find the appropriate connection weights and neuron thresholds to produce the right outputs for each set of inputs.
• 27. Realization of AND gate • Looking at the logic table for A ∧ B, we can see that we only want the neuron to output a 1 when both inputs are activated.
• 28. Contd.. • To do this, we want the sum of both inputs to be greater than the threshold, but each input alone to be lower than the threshold. • Let's use a threshold of 1 (simple and convenient!). Now we need to choose weights that satisfy these constraints: – e.g., 0.6 and 0.6. • With these weights, individual activation of either input A or B will not exceed the threshold, while the sum of the two will be 1.2, which exceeds the threshold and causes the neuron to fire. Here is a diagram:
• 29. Contd… • Here, the letter theta (θ) denotes the threshold. • A and B are the two inputs. Each will be set to 1 or 0 depending on the truth of its proposition. • The red neuron is our decision neuron. If the sum of the weighted inputs is greater than the threshold, it outputs a 1; otherwise it outputs 0. • So, to test it by hand, we can set A and B to the different values in the truth table and check whether the decision neuron's output matches the A ∧ B column:
• 30. Contd.. • If A=0 & B=0 --> 0×0.6 + 0×0.6 = 0. This is not greater than the threshold of 1, so the output = 0. Good! • If A=0 & B=1 --> 0×0.6 + 1×0.6 = 0.6. This is not greater than the threshold, so the output = 0. Good. • If A=1 & B=0 --> exactly the same as above. Good. • If A=1 & B=1 --> 1×0.6 + 1×0.6 = 1.2. This exceeds the threshold, so the output = 1. Good.
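The hand check above can be automated. A minimal sketch of the same decision neuron in Python, using the strictly-greater-than firing rule the slides describe:

```python
# Checking the AND construction: weights 0.6 and 0.6, threshold 1.
def decision_neuron(a, b, w_a, w_b, theta):
    """Fires (outputs 1) only if the weighted sum strictly exceeds theta."""
    return 1 if a * w_a + b * w_b > theta else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, '->', decision_neuron(a, b, 0.6, 0.6, theta=1.0))
# 0 0 -> 0, 0 1 -> 0, 1 0 -> 0, 1 1 -> 1  (matches the A AND B column)
```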
• 31. Realization of OR gate • The logic table for the OR gate is as shown below: • In this case, we want the output to be 1 when either or both of the inputs, A and B, are active, but 0 when both inputs are 0.
• 32. Contd.. • This is simple enough. If we make each synaptic weight greater than the threshold, the neuron will fire whenever there is any activity in either or both of A and B, as shown in the figure below. Synaptic weights of 1.1 are sufficient to surpass the threshold of 1 whenever their respective input is active.
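The same decision neuron realizes OR once the weights are raised above the threshold, as a quick sketch confirms:

```python
# The same decision neuron with weights 1.1 realizes OR (threshold 1).
def decision_neuron(a, b, w_a, w_b, theta):
    return 1 if a * w_a + b * w_b > theta else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, '->', decision_neuron(a, b, 1.1, 1.1, theta=1.0))
# 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 1  (matches the A OR B column)
```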
• 33. Realization of NOT gate • The logic table for the NOT gate is as shown below: A | ¬A: 0 | 1, 1 | 0 • The NOT operator simply negates the input: when A is true (value 1), ¬A is false (value 0), and vice versa.
• 34. Contd... • This is a little tricky. We have to change a 1 to a 0, which is easy: just make sure that the input does not exceed the threshold. • However, we also have to change a 0 to a 1. How can we do this? • The answer is to think of our decision neuron as a tonic neuron, one whose natural state is active. • To achieve this, we set the threshold lower than 0, so that even when the neuron receives no input, its net input still exceeds the threshold. • Specifically, we set the synaptic weight to a negative number (here -1) and the threshold to some number between that and 0 (here -0.5).
• 35. Contd.. • The decision neuron shown below, with threshold -0.5 and synaptic weight -1, will reverse the input: A = 1 --> output = 0; A = 0 --> output = 1.
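The tonic-neuron trick is easy to verify in code, using exactly the weight of -1 and threshold of -0.5 given above:

```python
# NOT gate: a single input with weight -1 and threshold -0.5.
def not_neuron(a):
    """Tonic neuron: fires by default; an active input inhibits it."""
    return 1 if a * -1.0 > -0.5 else 0

print(not_neuron(0))  # 0 > -0.5, so the neuron fires  -> 1
print(not_neuron(1))  # -1 is below -0.5, inhibited    -> 0
```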
• 36. Realization of XOR gate • The XOR (exclusive OR) operator actually put a spanner in the works of neural network research for a long time, because it is not possible to create an XOR gate with a single neuron, or even a single layer of neurons: we need two layers. • The truth table for the XOR gate is as shown below:
• 37. Contd.. • Exclusive OR means that we want a truth value of 1 when exactly one of A and B is 1 (i.e., A and B have different truth values), but not when both are 1 or both are 0 (i.e., they have the same truth value). • The truth table above shows this. • To use a real-world example: in the question "would you like tea or coffee?", the "or" is actually an exclusive or, because the person is offering you one or the other, but not both. Contrast this with "would you like milk or sugar?", where you can have milk, or sugar, or both.
• 38. Contd.. • The only way to solve this problem is to have several neurons working together. Start by breaking the XOR operation down into simpler logical functions: • A XOR B = (A ∨ B) ∧ ¬(A ∧ B) • This expression contains three important operations: an OR in the brackets on the left, an AND in the brackets on the right, and another AND in the middle. • We can create a neuron for each of these operations and connect them together like this:
• 39. Contd.. • The upper of the two red neurons in the first layer has two inputs with synaptic weights of 0.6 each and a threshold of 1, exactly the same as the AND gate we built earlier. This is the AND in the brackets on the right of the formula. • Notice that it is connected to the output neuron with a negative synaptic weight (-2). This accounts for the NOT operator that precedes the brackets on the right-hand side. • The lower of the two red neurons in the first layer has two synaptic weights of 1.1 and a threshold of 1, just like the OR gate we built earlier. This neuron does the job of the OR in the brackets on the left of the formula. • The output neuron performs another AND operation, the one in the middle of the formula. In practice, this output neuron is active whenever one of the inputs, A or B, is on, but it is overpowered by the inhibition from the upper neuron when both A and B are on.
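The whole two-layer construction can be checked in a short sketch. The weights for the AND unit (0.6), the OR unit (1.1), and the inhibitory connection (-2) come from the slides; the weight of 1.1 from the OR unit to the output neuron and the output threshold of 1 are assumed here, since the slides do not state them explicitly.

```python
# Two-layer XOR network: OR and AND in the first layer, then an output
# neuron that fires on OR activity unless inhibited by the AND unit.
def fires(net_input, theta):
    return 1 if net_input > theta else 0

def xor(a, b):
    and_unit = fires(0.6 * a + 0.6 * b, theta=1.0)  # A AND B
    or_unit = fires(1.1 * a + 1.1 * b, theta=1.0)   # A OR B
    # Output neuron: OR excites it (+1.1), AND inhibits it (-2).
    return fires(1.1 * or_unit - 2.0 * and_unit, theta=1.0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, '->', xor(a, b))
# 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0  (matches A XOR B)
```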
• 40. Learning in Neural Networks • Learning: – Learning in neural networks is carried out by adjusting the connection weights among neurons. – There is no algorithm that determines in advance how the weights should be assigned to solve a specific problem; hence, the weights are determined by a learning process. • Learning may be classified into two categories: – Supervised learning – Unsupervised learning
• 41. Learning in Neural Networks • 1) Supervised learning: – In supervised learning, the network is presented with inputs together with the target outputs (the teacher signal). – The neural network then tries to produce an output as close as possible to the target output by adjusting the values of its internal weights. – The most common supervised learning method is the error-correction method. • Neural networks trained with this method adjust their weights to drive the error (the difference between the network's output and the desired output) towards zero.
• 42. Learning in Neural Networks • 2) Unsupervised learning: – In unsupervised learning, there is no teacher (no target signal/output) from outside; the network adjusts its weights in response to the input patterns only. – A typical example of unsupervised learning is Hebbian learning.
• 43. Network Architecture • 1) Feed-forward networks: – Feed-forward ANNs allow signals to travel one way only: from input to output. – There is no feedback (no loops); i.e., the output of any layer does not affect that same layer. – Feed-forward ANNs tend to be straightforward networks that associate inputs with outputs.
• 44. Types of feed-forward neural network • a) Single-layer neural networks: A neural network in which all the inputs are connected directly to the outputs is called a single-layer neural network.
• 45. Types of feed-forward neural network • a) Single-layer feed-forward neural networks come in two main types: – Perceptron, and – ADALINE
• 46. Perceptron • Developed by Frank Rosenblatt using the McCulloch-Pitts model, the perceptron is the basic operational unit of artificial neural networks. • It employs a supervised learning rule and is able to classify data into two classes. • Operational characteristics of the perceptron: – It consists of a single neuron with an arbitrary number of inputs and adjustable weights; the output of the neuron is 1 or -1, depending on the input. It also has a bias input, which is always 1. • The following figure gives a schematic representation of the perceptron.
• 47. Perceptron • The following figure gives a schematic representation of the perceptron:
• 48. Perceptron • The perceptron thus has the following three basic elements: – Links: A set of connection links, each carrying a weight; this includes a bias input that is always 1. – Adder: It sums the inputs after they have been multiplied by their respective weights, and adds in the bias. – Activation function: It limits the output of the neuron. The most basic choice is the sign function, which has two possible outputs: it returns 1 if the input is positive and -1 for any negative input.
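Putting the three elements together with the classical perceptron learning rule gives a small working classifier. The following is a minimal sketch, assuming bipolar targets, a learning rate of 0.1, and 20 training epochs; none of these specifics come from the slides. It also answers the OR-learning exercise at the end of this unit.

```python
# A minimal perceptron sketch: sign activation plus a bias input fixed
# at 1, trained with the perceptron (error-correction) learning rule.
def predict(x, w, b):
    s = sum(xi * wi for xi, wi in zip(x, w)) + b  # adder: weighted sum + bias
    return 1 if s > 0 else -1                     # sign activation

def train(samples, lr=0.1, epochs=20):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(x, w, b)     # 0 when the prediction is right
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Learning OR with bipolar targets (-1 for false, +1 for true).
data = [([0, 0], -1), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train(data)
print([predict(x, w, b) for x, _ in data])  # -> [-1, 1, 1, 1]
```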
• 49. Adaptive Linear Neuron (ADALINE) • ADALINE, which stands for Adaptive Linear Neuron, is a network with a single linear unit. • It was developed by Widrow and Hoff in 1960. Some important points about ADALINE: – It uses a bipolar activation function. – It uses the delta rule for training, to minimize the mean-squared error (MSE) between the actual output and the desired/target output. – The weights and the bias are adjustable.
• 50. Adaptive Linear Neuron (ADALINE) • Architecture: – The basic structure of ADALINE is similar to that of the perceptron, with an extra feedback loop through which the calculated output is compared with the desired/target output. – On the basis of this comparison, the training algorithm updates the weights and the bias.
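The key difference from the perceptron is that the delta (LMS) rule corrects in proportion to the error on the linear output, before any thresholding. A sketch under assumed settings (learning rate 0.05, 50 epochs, bipolar AND data):

```python
# An ADALINE-style sketch: the delta (LMS) rule adjusts the weights and
# bias in proportion to the error measured on the *linear* output.
def train_adaline(samples, lr=0.05, epochs=50):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = sum(xi * wi for xi, wi in zip(x, w)) + b  # linear unit
            error = target - y
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Bipolar AND data; the final class is the sign of the linear output.
data = [([-1, -1], -1), ([-1, 1], -1), ([1, -1], -1), ([1, 1], 1)]
w, b = train_adaline(data)
print(w, b)  # converges near the least-squares fit: w ~ [0.5, 0.5], b ~ -0.5
```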
• 51. Types of feed-forward neural network • b) Multilayer neural networks: A neural network that contains an input layer, an output layer, and one or more hidden layers is called a multilayer neural network.
• 52. Network Architectures • 2) Feedback networks (recurrent networks): • Feedback networks can have signals travelling in both directions, by introducing loops into the network. – They are very powerful, but can be extremely complicated. – They are dynamic: their 'state' changes continuously until they reach an equilibrium point. • Feedback networks are also known as interactive or recurrent networks.
• 53. Applications of Neural Networks • Speech recognition • Optical character recognition • Face recognition • Pronunciation (NETtalk) • Stock-market prediction • Navigation of a car • Signal processing/communication • Imaging/vision • …
• 54. Natural Language Processing • What is natural language? – Natural language refers to the way we humans communicate with each other. – English, Spanish, French, and Nepali are all examples of natural languages. – Natural communication takes place in the form of speech and text.
• 55. What is Natural Language Processing? • Natural language processing (NLP) is a technology for converting spoken or written human language into a form that can be processed by computers, and vice versa. • The goal of NLP is to make interactions between computers and humans feel like interactions between humans.
• 56. Components of NLP – Because computers operate on artificial languages, they cannot understand natural language directly. This is the problem that NLP solves. – With NLP, a computer is able to listen to natural language spoken by a person, understand its meaning, and then, if needed, respond by generating natural language to communicate back to the person. • There are two components of NLP: – Natural Language Understanding (NLU) – Natural Language Generation (NLG)
• 58. Natural Language Understanding (NLU) • The most difficult part of NLP is understanding, i.e., assigning meaning to the natural language that the computer receives. • To understand a language, a system needs knowledge of: – the structures of the language, including what the words are and how they combine into phrases and sentences; – the meanings of the words and how they contribute to the meaning of a sentence; and – the context within which they are being used.
• 59. Natural Language Understanding • Developing programs that understand a natural language is a difficult and challenging task in AI, for the following reasons: – Natural languages are large: they contain an infinity of different sentences. No matter how many sentences a person has heard or seen, new ones can always be produced. – There is much ambiguity in a natural language: • Many words have several meanings, such as "can", "bear", and "bank", and • sentences have different meanings in different contexts. – Example: "a can of juice" versus "I can do it".
• 60. Natural Language Generation (NLG) • NLG is much simpler to accomplish. NLG translates a computer's internal, artificial language into text, and can go a step further by translating that text into audible speech with text-to-speech.
• 61. NLG vs. NLU • NLG may be viewed as the opposite of NLU. The difference is that: – in natural language understanding, the system needs to disambiguate the input sentence to produce a machine representation of its meaning, whereas – in NLG, the system needs to make decisions about how to put a concept into words. – NLU is harder than NLG.
• 62. Steps in Natural Language Processing • There are, in general, five steps, as shown in the figure below:
• 63. Contd.. • Lexical analysis: – It involves identifying and analyzing the structure of words. – The lexicon of a language is the collection of words and phrases in that language. – Lexical analysis divides the whole chunk of text into paragraphs, sentences, and words.
• 64. Contd.. • Syntactic analysis (parsing): – It involves analyzing the words in a sentence for grammar, and arranging the words in a manner that shows the relationships among them. – A sentence such as "The school goes to boy" is rejected by an English syntactic analyzer.
• 65. Contd… • Semantic analysis: – It draws the exact meaning, or the dictionary meaning, from the text. – The text is checked for meaningfulness. – This is done by mapping syntactic structures onto objects in the task domain. – The semantic analyzer disregards sentences such as "hot ice-cream".
• 66. Contd… • Discourse integration: – The meaning of any sentence depends upon the meaning of the sentence just before it. – In addition, it also contributes to the meaning of the immediately succeeding sentence.
• 67. Contd.. • Pragmatic analysis: – In this step, what was said is re-interpreted in terms of what it actually meant. – It involves deriving those aspects of language which require real-world knowledge. A toy sketch of the first step, lexical analysis, is given below.
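To make the lexical-analysis step concrete, here is a toy Python sketch that divides a chunk of text into sentences and words using naive splitting rules. Real systems use proper tokenizers that handle abbreviations, punctuation, and so on; this is purely illustrative.

```python
# Toy lexical analysis: divide text into sentences, then each sentence
# into words, using naive period and whitespace splitting.
text = "The boy goes to school. He likes it."

sentences = [s.strip() for s in text.split('.') if s.strip()]
words = [sentence.split() for sentence in sentences]

print(sentences)  # ['The boy goes to school', 'He likes it']
print(words)      # [['The', 'boy', 'goes', 'to', 'school'], ['He', 'likes', 'it']]
```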
• 68. Applications of NLP • Text-to-speech & speech recognition • Information retrieval • Information extraction • Document classification • Automatic summarization • Machine translation • Story-understanding systems • Question answering • Spelling correction • Caption generation
• 69. Assignment • Differentiate between connectionist and symbolic AI. • What is the difference between human intelligence and machine intelligence? • Explain the McCulloch-Pitts model of a neuron. • Compare the performance of a computer and that of a biological neural network in terms of processing speed, size and complexity, storage, fault tolerance, and control mechanism. • How does an artificial neural network model the brain? Describe the two major classes of learning paradigms: supervised learning and unsupervised (self-organized) learning. What features distinguish these two paradigms from each other? • Construct a multilayer perceptron with an input layer of six neurons, a hidden layer of four neurons, and an output layer of two neurons. • Demonstrate perceptron learning of the binary logic function OR. • Why is natural language processing required in an AI system? Differentiate between NLU and NLG.
• 70. Thank You!