Unit 5: Learning LH 6
Presented By : Tekendra Nath Yogi
Tekendranath@gmail.com
College Of Applied Business And Technology
Contd…
• Outline:
– 5.1 Why learning?
– 5.2 Supervised (Error based) learning
• 5.2.1 Gradient descent learning: Least Mean Square, Back
Propagation algorithm
• 5.2.2 Stochastic learning
– 5.3 Unsupervised learning
• 5.3.1 Hebbian learning algorithm
• 5.3.2 Competitive learning
– 5.4 Reinforced learning (output based)
– 5.5 Genetic algorithms: operators
What is learning?
• Learning is the process of acquiring new or modifying
existing knowledge, behaviors, skills, values, or preferences.
• Evidence that learning has occurred may be seen in changes in
behavior from simple to complex.
What is Machine learning?
• Machine learning denotes changes in the system that are adaptive in
the sense that they enable the system to do the same task more effectively
the next time.
• Like human learning from past experiences, computer system learns from
data, which represent some “past experiences” of an application domain.
• Therefore, Machine learning gives computers the ability to learn without
being explicitly programmed.
Why machine learning?
• To learn a target function (a relation between input and output) that can be
used to predict the values of a discrete class attribute,
– e.g., male or female, and high-risk or low risk, etc.
• To model the underlying structure or distribution in the data in order to
learn more about the data.
• To learn behavior through trial-and-error interactions with a dynamic
environment.
Types of Machine Learning
• Based on the training set, machine learning algorithms are classified into the
following three categories:
Supervised learning
• Supervised learning is where you have input variables (x) and an output
variable (Y) and you use an algorithm to learn the mapping function from the
input to the output. Y = f(X)
• Learning stops when the algorithm achieves an acceptable level of
performance.
• The aim is that when you have new input data (x), you can predict the output
variable (Y) for that data.
Supervised learning
• It is called supervised learning because the process of an algorithm learning
from the training dataset can be thought of as a teacher supervising the learning
process.
• We know the correct answers; the algorithm iteratively makes predictions on
the training data and is corrected by the teacher.
Supervised Learning Example 1
Application Example of Supervised learning
• Suppose we have a dataset giving the living areas and prices of houses from KTM (Kathmandu):
Application Example of Supervised learning
• We can plot this data:
Application Example of Supervised learning
• Given data like this, we can learn to predict the price of other houses in KTM
as a function of their area; this type of learning is known as
supervised learning.
• Price of house = f(area of house)
i.e., y= f(x)
Unsupervised Learning
• Unsupervised learning is where you only have input data (X) and no
corresponding output variables(targets/ labels).
• The goal for unsupervised learning is to model the underlying structure
or distribution in the data in order to learn more about the data.
• It is called unsupervised learning because, unlike supervised
learning above, there are no correct answers and there is no teacher.
• Algorithms are left to their own devices to discover and present the
interesting structure in the data.
• E.g., Clustering
Unsupervised learning Example1
Unsupervised learning example 2
Reinforcement learning
• Is learning behavior through trial-and-error interactions with a dynamic environment.
• Is learning how to act in order to maximize a reward (Encouragements).
• Reinforcement learning emphasizes learning feedback that evaluates the
learner's performance without providing standards of correctness in the form of
behavioral targets.
• Example: Bicycle learning, game playing, etc.
Supervised learning Algorithm
• Classification:
– To predict the outcome of a given sample where the output variable is
in the form of categories (discrete). Examples include labels such as
male and female, or sick and healthy.
• Regression:
– To predict the outcome of a given sample where the output variable is
in the form of real values (continuous). Examples include real-valued
labels denoting the amount of rainfall or the height of a person.
Least Mean square Regression algorithm
• For given data, build the predicted output O close to the standard output y,
i.e., t (the target), as:
• f(x) = O = w0 + w1x1 + … + wnxn
• Train (adjust or choose) the wi's such that they minimize the squared error
– E[w1,…,wn] = 1/2 Σi=1 to m (ti - oi)²
• Determine the output for a new input based on the line that has the minimum
value of the squared error.
Gradient Descent Learning Rule
• Gradient descent algorithm starts with some initial guess for weights
(coefficients) Wi and repeatedly performs the update until convergence :
• Wi = Wi - l*∂E/∂Wi
• Where,
– l is the learning rate (step size, or rate of weight adjustment).
– Gradient:
∇E[w] = [∂E/∂w0, …, ∂E/∂wn]
∂E/∂wi = ∂/∂wi 1/2 Σd=1 to m (td - od)²
= ∂/∂wi 1/2 Σd=1 to m (td - Σi wi xid)²
= Σd=1 to m (td - od)(-xid)
Contd…
• For a single training example, the update rule is:
• Wi = Wi + l*(ti - oi)*xi
• This is called the LMS update rule.
Contd…
Gradient-Descent(training_examples, l)
Each training example is a pair of the form <(x1,…,xn), t>, where (x1,…,xn) is the
vector of input values, t is the target output value, and l is the learning rate (e.g., 0.1)
• Initialize each wi to some small random value
• Until the termination condition is met, Do
– Initialize each Δwi to zero
– For each <(x1,…,xn), t> in training_examples, Do
• Input the instance (x1,…,xn) to the linear function and compute the
output o
• Calculate the change in weight
– Δwi = l*(t - o)*xi
– For each weight wi, Do
• wi = wi + Δwi
Contd…
• If we have more than one training example:
• There are two ways to modify this gradient descent algorithm for a
training set of more than one example:
– Batch gradient descent
– Stochastic(Incremental ) gradient descent
Contd…
• Batch mode gradient descent
– This method looks at every example in the entire training set
on every step. So it is static.
– Algorithm:
Repeat until convergence
{
wi = wi + l*Σd=1 to m (td - od)*xid   (over the entire data D, for each weight)
}
Contd…
• Stochastic (incremental mode) gradient descent
– This algorithm repeatedly runs through the training set and updates the
weights each time it encounters a training example. So it is dynamic.
– Algorithm:
for (d = 1 to m)
{
wi = wi + l*(t - o)*xi   (for each weight)
}
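For contrast with the batch rule, a minimal Python sketch of one stochastic pass (names are illustrative):

    def sgd_epoch(w, training_examples, l=0.1):
        # Update the weights after *each* example (dynamic), instead of
        # summing the gradient over all m examples per step as in batch mode.
        for x, t in training_examples:
            o = sum(wi * xi for wi, xi in zip(w, x))
            w = [wi + l * (t - o) * xi for wi, xi in zip(w, x)]
        return w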
Contd…
• Comparison between batch gradient descent and stochastic gradient descent:
BGD:
– A single step uses the entire training set
– So each step is a costly operation
– Slower
– Not preferred when m is large
SGD:
– A single step uses only a single training example
– Step cost does not depend on how large m is
– Faster
– Preferred when m is large
Artificial Neural Network
• A neural network is composed of a number of nodes or units, connected by
links. Each link has a numeric weight associated with it.
• Artificial neural networks are programs designed to solve problems by
trying to mimic the structure and the function of our nervous system.
[Figure: a feed-forward network with input layer (nodes 1, 2, 3), hidden layer (nodes 4, 5), and output layer (node 6)]
Artificial neural network model:
• Inputs to the network are represented by the mathematical symbols x1, …, xn.
• Each of these inputs is multiplied by a connection weight wn.
• These products are simply summed and fed through the transfer function f()
to generate the output:
sum = w1*x1 + w2*x2 + … + wn*xn
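For concreteness, a small Python sketch of this neuron model; the sigmoid transfer function is an assumption, since the slide leaves f() unspecified:

    import math

    def neuron_output(x, w, f=lambda s: 1.0 / (1.0 + math.exp(-s))):
        s = sum(wi * xi for wi, xi in zip(w, x))   # sum = w1*x1 + w2*x2 + ... + wn*xn
        return f(s)                                # output = f(sum)

    print(neuron_output([1.0, 0.0, 1.0], [0.2, 0.4, -0.5]))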
Back propagation algorithm
• Back propagation is a neural network learning algorithm. It learns by
adjusting the weights so as to be able to predict the correct class label of
the input.
Input: D, a training data set and its associated class labels;
l = learning rate (normally 0.0-1.0)
Output: a trained neural network.
Method:
Step 1: Initialize all weights and biases in the network.
Step 2: While the termination condition is not satisfied:
For each training tuple X in D
Contd…
2.1 Calculate output:
For the input layer:
Oj = Ij
For the hidden and output layers:
Ij = Σk wkj*Ok + θj
Oj = 1 / (1 + e^(-Ij))
Contd…
2.2 Calculate error:
For the output layer:
errj = Oj*(1 - Oj)*(tj - Oj)
For the hidden layer:
errj = Oj*(1 - Oj)*Σk errk*wjk
2.3 Update weights:
wij(new) = wij(old) + l*errj*Oi
2.4 Update biases:
θj(new) = θj(old) + l*errj
Contd…
• Example: Sample calculations for learning by the back-propagation algorithm.
• A multilayer feed-forward neural network (the six-node network shown
earlier) is used. Let the learning rate be 0.9. The initial weight and bias
values of the network are given in the table below, along with the first
training tuple, X = (1, 0, 1), whose class label is 1.
Contd…
• Step 1: Initialization
– Initial input:
X1   X2   X3
1    0    1
– Bias values:
θ4     θ5    θ6
-0.4   0.2   0.1
– Initial weights:
W14   W15    W24   W25   W34    W35   W46    W56
0.2   -0.3   0.4   0.1   -0.5   0.2   -0.3   -0.2
Contd…
• 2. Termination condition: the weights of two successive iterations are nearly
equal, or a user-defined number of iterations is reached.
• 2.1 For each training tuple X in D, calculate the output at each node.
– For the input layer, Oj = Ij:
• O1 = I1 = 1
• O2 = I2 = 0
• O3 = I3 = 1
Contd…
• For the hidden and output layers:
1. I4 = W14*O1 + W24*O2 + W34*O3 + θ4 = 0.2*1 + 0.4*0 + (-0.5)*1 + (-0.4) = -0.7
   O4 = 1 / (1 + e^0.7) = 0.332
2. I5 = W15*O1 + W25*O2 + W35*O3 + θ5 = (-0.3)*1 + 0.1*0 + 0.2*1 + 0.2 = 0.1
   O5 = 1 / (1 + e^-0.1) = 0.525
3. I6 = W46*O4 + W56*O5 + θ6 = (-0.3)*0.332 + (-0.2)*0.525 + 0.1 = -0.105
   O6 = 1 / (1 + e^0.105) = 0.474
Contd…
• Calculation of the error at each node:
• For the output layer, errj = Oj*(1 - Oj)*(tj - Oj):
err6 = O6*(1 - O6)*(t - O6) = 0.474*(1 - 0.474)*(1 - 0.474) = 0.1311
• For the hidden layer, errj = Oj*(1 - Oj)*Σk errk*wjk:
err5 = O5*(1 - O5)*(err6*W56) = 0.525*(1 - 0.525)*(0.1311*(-0.2)) = -0.0065
err4 = O4*(1 - O4)*(err6*W46) = 0.332*(1 - 0.332)*(0.1311*(-0.3)) = -0.0087
Contd…
• Update the weights, Wij(new) = Wij(old) + l*errj*Oi:
1. W46(new) = -0.3 + 0.9*0.1311*0.332 = -0.261
2. W56(new) = -0.2 + 0.9*0.1311*0.525 = -0.138
3. W14(new) = 0.2 + 0.9*(-0.0087)*1 = 0.192
4. W24(new) = 0.4 + 0.9*(-0.0087)*0 = 0.4
5. W15(new) = -0.3 + 0.9*(-0.0065)*1 = -0.306
6. W34(new) = -0.5 + 0.9*(-0.0087)*1 = -0.508
7. W35(new) = 0.2 + 0.9*(-0.0065)*1 = 0.194
8. W25(new) = 0.1 + 0.9*(-0.0065)*0 = 0.1
Contd…
• Update the bias values, θj(new) = θj(old) + l*errj:
1. θ6(new) = 0.1 + 0.9*0.1311 = 0.218
2. θ5(new) = 0.2 + 0.9*(-0.0065) = 0.194
3. θ4(new) = -0.4 + 0.9*(-0.0087) = -0.408
• And so on until convergence!
Hebbian Learning Algorithm
1. Take the number of steps = the number of inputs (X).
2. In each step:
I. Calculate the net input = weight (W) * input (X)
II. Calculate the net output O = f(net input)
III. Calculate the change in weight = learning rate * O * X
IV. New weight = old weight + change in weight
3. Repeat step 2 until the terminating condition is satisfied.
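A minimal Python sketch of these steps, assuming a sign activation for f and illustrative sample data:

    def hebbian_train(samples, w, lr=1.0, f=lambda s: 1 if s >= 0 else -1):
        for x in samples:                                   # one step per input X
            net = sum(wi * xi for wi, xi in zip(w, x))      # net input = W * X
            o = f(net)                                      # net output O = f(net input)
            w = [wi + lr * o * xi for wi, xi in zip(w, x)]  # new W = old W + lr * O * X
        return w

    print(hebbian_train([(1, -1), (1, 1)], w=[0.0, 0.0]))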
Competitive Learning Rule (Winner-takes-all)
Basic Concept of Competitive Network:
• This network is just like a single-layer feed-forward network with feedback
connections between the outputs.
• The feed-forward connections are excitatory.
• The feedback connections between the outputs are inhibitory (shown by
dotted lines in the usual diagram), which means the competitors never support themselves.
Basic Concept of Competitive Learning Rule:
• It is an unsupervised learning algorithm in which the output
nodes compete with each other to represent the input
pattern.
• Hence, the main concept is that during training, the output unit
with the highest net input for a given input is declared
the winner.
• Then only the winning neuron's weights are updated and the
rest of the neurons are left unchanged.
• Therefore, this rule is also called Winner-takes-all.
Competitive learning algorithm
• For n inputs and m output nodes:
• Repeat until convergence:
– Calculate the net input of each output node.
– The outputs of the network are determined by a "winner-takes-all" competition such
that only the output node receiving the maximal net input will output 1 while all
others output 0.
– Calculate the change in weight (for the winning node only).
– Update the weight of the winning node.
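Since the slide's formulas are not reproduced above, here is a sketch using the standard winner-takes-all update Δw = lr*(x - w), applied to the winning unit only; the data and parameters are illustrative:

    import numpy as np

    def competitive_train(X, m, lr=0.1, epochs=10, seed=0):
        rng = np.random.default_rng(seed)
        W = rng.random((m, X.shape[1]))            # one weight vector per output node
        for _ in range(epochs):                    # "until convergence" (fixed epochs here)
            for x in X:
                winner = int(np.argmax(W @ x))     # node with maximal net input outputs 1
                W[winner] += lr * (x - W[winner])  # update only the winner's weights
        return W

    X = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
    print(competitive_train(X, m=2))               # each row drifts toward one input cluster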
Unit: 5 Genetic Algorithm
Copying ideas of Nature
Introduction to Genetic Algorithms
• Nature has always been a great source of inspiration to all mankind.
• A genetic algorithm is a search heuristic that is inspired by Charles Darwin’s
theory of natural evolution.
“Select The Best, Discard The Rest”
• i.e., the fittest individuals are selected for reproduction in order to produce
offspring of the next generation.
• GAs were developed by John Holland and his students and colleagues at the
University of Michigan.
Notion of Natural Selection
• The process of natural selection starts with the selection of fittest
individuals from a population. They produce offspring which inherit the
characteristics of the parents and will be added to the next generation.
• If parents have better fitness, their offspring will be better than parents
and have a better chance at surviving.
• This process keeps on iterating and at the end, a generation with the
fittest individuals will be found.
• This notion can be applied for a search problem. We consider a set of
solutions for a problem and select the set of best ones out of them.
Basic Terminology
• Population: set of individuals each representing a possible solution to a
given problem.
• Gene: a solution to the problem is represented as a set of parameters; these
parameters are known as genes.
• Chromosome: genes joined together to form a string of values called
chromosome.
• Fitness Function: measures the quality of solution. It is problem dependent.
Nature to Computer Mapping
Nature         Computer
Population     Set of solutions
Individual     Solution to a problem
Fitness        Quality of a solution
Chromosome     Encoding for a solution
Gene           Part of the encoding of a solution
Reproduction   Crossover
Five phases are considered in a genetic algorithm.
Simple Genetic Algorithm
Simple_Genetic_Algorithm()
{
    Initialize the Population;
    Calculate Fitness Function;
    While (Fitness Value != Optimal Value)
    {
        Selection;   // Natural selection, survival of the fittest
        Crossover;   // Reproduction, propagate favorable characteristics
        Mutation;    // Mutation
        Calculate Fitness Function;
    }
}
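A runnable Python sketch of Simple_Genetic_Algorithm() on the toy "one-max" problem (fitness = number of 1 bits in the chromosome); every parameter value here is an assumption for illustration:

    import random

    def simple_genetic_algorithm(pop_size=20, length=16, generations=100, p_mut=0.01):
        fitness = lambda ind: sum(ind)                        # Calculate Fitness Function
        pop = [[random.randint(0, 1) for _ in range(length)]  # Initialize the Population
               for _ in range(pop_size)]
        for _ in range(generations):                          # While fitness != optimal
            scores = [fitness(ind) for ind in pop]
            if max(scores) == length:
                break
            new_pop = []
            for _ in range(pop_size):
                p1, p2 = random.choices(pop, weights=scores, k=2)  # Selection
                c = random.randint(1, length - 1)                  # Crossover point
                child = p1[:c] + p2[c:]
                child = [b ^ 1 if random.random() < p_mut else b   # Mutation
                         for b in child]
                new_pop.append(child)
            pop = new_pop
        return max(pop, key=fitness)

    print(simple_genetic_algorithm())   # tends toward the all-ones chromosome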
Initialization of Population
• There are two primary methods to initialize a population in a GA:
– Random initialization
– Heuristic initialization
• There are several things to be kept in mind when dealing with a GA
population:
– The diversity of the population should be maintained otherwise it might
lead to premature convergence.
– The population size should not be kept very large as it can cause a GA
to slow down, while a smaller population might not be enough for a
good mating pool. Therefore, an optimal population size needs to be
decided by trial and error.
Fitness Function
• The fitness function takes a candidate solution to the
problem as input and produces as output how fit the individual is
(the ability of an individual to compete with other individuals).
• A fitness function should possess the following characteristics:
– The fitness function should be sufficiently fast to compute.
– It must quantitatively measure how fit a given solution is or how fit
individuals can be produced from the given solution.
Selection
• The idea of selection phase is to select the fittest individuals and let
them pass their genes to the next generation.
• Two pairs of individuals (parents) are selected based on their fitness
scores. Individuals with high fitness have a higher chance of being selected for
reproduction.
• Selection of the parents can be done by using any one of the following
strategies:
– Fitness Proportionate Selection(Roulette Wheel Selection)
– Tournament Selection
– Rank Selection
Contd..
• Fitness Proportionate Selection(Roulette Wheel Selection):
– The chance of selection is directly proportional to the fitness value. Since selection
is probabilistic rather than fixed, any individual can become a parent.
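A minimal sketch of one roulette-wheel spin (the fitness values in the usage line are illustrative):

    import random

    def roulette_select(population, fitnesses):
        total = sum(fitnesses)               # circumference of the wheel
        r = random.uniform(0, total)         # where the wheel stops
        running = 0.0
        for ind, fit in zip(population, fitnesses):
            running += fit                   # each slice is proportional to fitness
            if running >= r:
                return ind
        return population[-1]

    print(roulette_select(["A", "B", "C"], [8.2, 1.4, 0.4]))   # "A" is picked most often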
Contd..
• Tournament Selection: randomly take k individuals and select the fittest
one among them to become a parent of the next generation.
Contd..
• Rank Selection is mostly used when the individuals in the population have very
close fitness values. Close fitness values lead to each individual having an almost
equal share of the roulette pie chart, so each individual, no matter how fit relative
to the others, has approximately the same probability of being selected as a parent.
This in turn leads to a loss of selection pressure towards fitter individuals, making
the GA make poor parent selections in such situations.
Contd..
• Rank Selection: every individual in the population is ranked
according to its fitness. The higher-ranked individuals are
preferred over the lower-ranked ones.
Crossover
• Crossover is the most significant phase in a genetic algorithm. For
each pair of parents to be mated, a crossover point is chosen at
random from within the genes.
• For example, consider the crossover point to be 3 as shown below.
Contd..
• Offspring are created by exchanging the genes of parents among
themselves until the crossover point is reached.
• The new offspring are added to the population.
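A sketch of single-point crossover as described above (the parent chromosomes shown are illustrative):

    import random

    def single_point_crossover(p1, p2):
        c = random.randint(1, len(p1) - 1)          # crossover point, e.g. c = 3
        return p1[:c] + p2[c:], p2[:c] + p1[c:]     # exchange genes past the point

    print(single_point_crossover([1, 1, 1, 1, 1], [0, 0, 0, 0, 0]))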
Mutation
• Now if you think in the biological sense, do the children produced have the
same traits as their parents? The answer is NO. During their growth, there is
some change in the genes of the children which makes them different from their
parents.
• This process is known as mutation, which may be defined as a random tweak in
the chromosome.
• This implies that some of the bits in the bit string can be flipped.
• Mutation occurs to maintain diversity within the population and to prevent
premature convergence.
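A sketch of bit-flip mutation (the mutation probability is an assumed parameter):

    import random

    def bit_flip_mutation(chromosome, p_mut=0.05):
        # Each bit flips independently with small probability p_mut.
        return [bit ^ 1 if random.random() < p_mut else bit for bit in chromosome]

    print(bit_flip_mutation([1, 0, 1, 1, 0, 0, 1, 0], p_mut=0.25))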
Termination
• The algorithm terminates:
– If the population has converged (does not produce offspring which are
significantly different from the previous generation). Then it is said that the
genetic algorithm has provided a set of solutions to our problem.
– OR if the predefined number of generations is reached.
Issues for GA Practitioners
• Choosing basic implementation issues:
– representation
– population size, mutation rate, ...
– selection, deletion policies
– crossover, mutation operators
• Termination Criteria
Benefits of Genetic Algorithms
• Concept is easy to understand
• Modular, separate from application
• Supports multi-objective optimization
• Good for “noisy” environments
• Always an answer; answer gets better with time
• Easy to exploit previous or alternate solutions
• Flexible building blocks for hybrid applications
• Substantial history and range of use
When to Use a GA
• Alternate solutions are too slow or overly complicated
• Need an exploratory tool to examine new approaches
• Problem is similar to one that has already been successfully solved by using
a GA
• Want to hybridize with an existing solution
• Benefits of the GA technology meet key problem requirements
Thank You !
Presented By: Tekendra Nath Yogi 682/2/2019

More Related Content

What's hot

Problem Formulation in Artificial Inteligence Projects
Problem Formulation in Artificial Inteligence ProjectsProblem Formulation in Artificial Inteligence Projects
Problem Formulation in Artificial Inteligence ProjectsDr. C.V. Suresh Babu
 
From_seq2seq_to_BERT
From_seq2seq_to_BERTFrom_seq2seq_to_BERT
From_seq2seq_to_BERTHuali Zhao
 
Machine learning Algorithms with a Sagemaker demo
Machine learning Algorithms with a Sagemaker demoMachine learning Algorithms with a Sagemaker demo
Machine learning Algorithms with a Sagemaker demoHridyesh Bisht
 
ANP-GP Approach for Selection of Software Architecture Styles
ANP-GP Approach for Selection of Software Architecture StylesANP-GP Approach for Selection of Software Architecture Styles
ANP-GP Approach for Selection of Software Architecture StylesWaqas Tariq
 
Machine learning
Machine learningMachine learning
Machine learningRohit Kumar
 
Evaluation of subjective answers using glsa enhanced with contextual synonymy
Evaluation of subjective answers using glsa enhanced with contextual synonymyEvaluation of subjective answers using glsa enhanced with contextual synonymy
Evaluation of subjective answers using glsa enhanced with contextual synonymyijnlc
 
CSA 3702 machine learning module 1
CSA 3702 machine learning module 1CSA 3702 machine learning module 1
CSA 3702 machine learning module 1Nandhini S
 
Particle Swarm Optimization in the fine-tuning of Fuzzy Software Cost Estimat...
Particle Swarm Optimization in the fine-tuning of Fuzzy Software Cost Estimat...Particle Swarm Optimization in the fine-tuning of Fuzzy Software Cost Estimat...
Particle Swarm Optimization in the fine-tuning of Fuzzy Software Cost Estimat...Waqas Tariq
 
a deep reinforced model for abstractive summarization
a deep reinforced model for abstractive summarizationa deep reinforced model for abstractive summarization
a deep reinforced model for abstractive summarizationJEE HYUN PARK
 
Internship project presentation_final_upload
Internship project presentation_final_uploadInternship project presentation_final_upload
Internship project presentation_final_uploadSuraj Rathore
 
Machine Learning basics
Machine Learning basicsMachine Learning basics
Machine Learning basicsNeeleEilers
 
acai01-updated.ppt
acai01-updated.pptacai01-updated.ppt
acai01-updated.pptbutest
 
DagdelenSiriwardaneY..
DagdelenSiriwardaneY..DagdelenSiriwardaneY..
DagdelenSiriwardaneY..butest
 
Deep learning Introduction and Basics
Deep learning  Introduction and BasicsDeep learning  Introduction and Basics
Deep learning Introduction and BasicsNitin Mishra
 
An exhaustive survey of reinforcement learning with hierarchical structure
An exhaustive survey of reinforcement learning with hierarchical structureAn exhaustive survey of reinforcement learning with hierarchical structure
An exhaustive survey of reinforcement learning with hierarchical structureeSAT Journals
 

What's hot (19)

Problem Formulation in Artificial Inteligence Projects
Problem Formulation in Artificial Inteligence ProjectsProblem Formulation in Artificial Inteligence Projects
Problem Formulation in Artificial Inteligence Projects
 
From_seq2seq_to_BERT
From_seq2seq_to_BERTFrom_seq2seq_to_BERT
From_seq2seq_to_BERT
 
Machine learning Algorithms with a Sagemaker demo
Machine learning Algorithms with a Sagemaker demoMachine learning Algorithms with a Sagemaker demo
Machine learning Algorithms with a Sagemaker demo
 
ANP-GP Approach for Selection of Software Architecture Styles
ANP-GP Approach for Selection of Software Architecture StylesANP-GP Approach for Selection of Software Architecture Styles
ANP-GP Approach for Selection of Software Architecture Styles
 
Machine learning
Machine learningMachine learning
Machine learning
 
Evaluation of subjective answers using glsa enhanced with contextual synonymy
Evaluation of subjective answers using glsa enhanced with contextual synonymyEvaluation of subjective answers using glsa enhanced with contextual synonymy
Evaluation of subjective answers using glsa enhanced with contextual synonymy
 
CSA 3702 machine learning module 1
CSA 3702 machine learning module 1CSA 3702 machine learning module 1
CSA 3702 machine learning module 1
 
Ga
GaGa
Ga
 
Particle Swarm Optimization in the fine-tuning of Fuzzy Software Cost Estimat...
Particle Swarm Optimization in the fine-tuning of Fuzzy Software Cost Estimat...Particle Swarm Optimization in the fine-tuning of Fuzzy Software Cost Estimat...
Particle Swarm Optimization in the fine-tuning of Fuzzy Software Cost Estimat...
 
a deep reinforced model for abstractive summarization
a deep reinforced model for abstractive summarizationa deep reinforced model for abstractive summarization
a deep reinforced model for abstractive summarization
 
Internship project presentation_final_upload
Internship project presentation_final_uploadInternship project presentation_final_upload
Internship project presentation_final_upload
 
Machine Learning
Machine LearningMachine Learning
Machine Learning
 
Machine Learning basics
Machine Learning basicsMachine Learning basics
Machine Learning basics
 
acai01-updated.ppt
acai01-updated.pptacai01-updated.ppt
acai01-updated.ppt
 
DagdelenSiriwardaneY..
DagdelenSiriwardaneY..DagdelenSiriwardaneY..
DagdelenSiriwardaneY..
 
Deep learning Introduction and Basics
Deep learning  Introduction and BasicsDeep learning  Introduction and Basics
Deep learning Introduction and Basics
 
Chapter 5 of 1
Chapter 5 of 1Chapter 5 of 1
Chapter 5 of 1
 
An exhaustive survey of reinforcement learning with hierarchical structure
An exhaustive survey of reinforcement learning with hierarchical structureAn exhaustive survey of reinforcement learning with hierarchical structure
An exhaustive survey of reinforcement learning with hierarchical structure
 
Daa chapter4
Daa chapter4Daa chapter4
Daa chapter4
 

Similar to Unit5: Learning

BIM Data Mining Unit3 by Tekendra Nath Yogi
 BIM Data Mining Unit3 by Tekendra Nath Yogi BIM Data Mining Unit3 by Tekendra Nath Yogi
BIM Data Mining Unit3 by Tekendra Nath YogiTekendra Nath Yogi
 
ML_basics_lecture1_linear_regression.pdf
ML_basics_lecture1_linear_regression.pdfML_basics_lecture1_linear_regression.pdf
ML_basics_lecture1_linear_regression.pdfTigabu Yaya
 
Machine Learning Using Python
Machine Learning Using PythonMachine Learning Using Python
Machine Learning Using PythonSavitaHanchinal
 
master defense hyun-wong choi_2019_05_14_rev19
master defense hyun-wong choi_2019_05_14_rev19master defense hyun-wong choi_2019_05_14_rev19
master defense hyun-wong choi_2019_05_14_rev19Hyun Wong Choi
 
defense hyun-wong choi_2019_05_14_rev18
defense hyun-wong choi_2019_05_14_rev18defense hyun-wong choi_2019_05_14_rev18
defense hyun-wong choi_2019_05_14_rev18Hyun Wong Choi
 
master defense hyun-wong choi_2019_05_14_rev19
master defense hyun-wong choi_2019_05_14_rev19master defense hyun-wong choi_2019_05_14_rev19
master defense hyun-wong choi_2019_05_14_rev19Hyun Wong Choi
 
master defense hyun-wong choi_2019_05_14_rev19
master defense hyun-wong choi_2019_05_14_rev19master defense hyun-wong choi_2019_05_14_rev19
master defense hyun-wong choi_2019_05_14_rev19Hyun Wong Choi
 
Reinforcement learning
Reinforcement learningReinforcement learning
Reinforcement learningDongHyun Kwak
 
Final edited master defense-hyun_wong choi_2019_05_23_rev21
Final edited master defense-hyun_wong choi_2019_05_23_rev21Final edited master defense-hyun_wong choi_2019_05_23_rev21
Final edited master defense-hyun_wong choi_2019_05_23_rev21Hyun Wong Choi
 
An introduction to reinforcement learning
An introduction to  reinforcement learningAn introduction to  reinforcement learning
An introduction to reinforcement learningJie-Han Chen
 
Learning Strategy with Groups on Page Based Students' Profiles
Learning Strategy with Groups on Page Based Students' ProfilesLearning Strategy with Groups on Page Based Students' Profiles
Learning Strategy with Groups on Page Based Students' Profilesaciijournal
 
Neural Learning to Rank
Neural Learning to RankNeural Learning to Rank
Neural Learning to RankBhaskar Mitra
 
Learning strategy with groups on page based students' profiles
Learning strategy with groups on page based students' profilesLearning strategy with groups on page based students' profiles
Learning strategy with groups on page based students' profilesaciijournal
 
Multilayer Perceptron (DLAI D1L2 2017 UPC Deep Learning for Artificial Intell...
Multilayer Perceptron (DLAI D1L2 2017 UPC Deep Learning for Artificial Intell...Multilayer Perceptron (DLAI D1L2 2017 UPC Deep Learning for Artificial Intell...
Multilayer Perceptron (DLAI D1L2 2017 UPC Deep Learning for Artificial Intell...Universitat Politècnica de Catalunya
 

Similar to Unit5: Learning (20)

BIM Data Mining Unit3 by Tekendra Nath Yogi
 BIM Data Mining Unit3 by Tekendra Nath Yogi BIM Data Mining Unit3 by Tekendra Nath Yogi
BIM Data Mining Unit3 by Tekendra Nath Yogi
 
Lecture 1.pptx
Lecture 1.pptxLecture 1.pptx
Lecture 1.pptx
 
ML_basics_lecture1_linear_regression.pdf
ML_basics_lecture1_linear_regression.pdfML_basics_lecture1_linear_regression.pdf
ML_basics_lecture1_linear_regression.pdf
 
Week 9: Programming for Data Analysis
Week 9: Programming for Data AnalysisWeek 9: Programming for Data Analysis
Week 9: Programming for Data Analysis
 
Review_Cibe Sridharan
Review_Cibe SridharanReview_Cibe Sridharan
Review_Cibe Sridharan
 
Machine Learning Using Python
Machine Learning Using PythonMachine Learning Using Python
Machine Learning Using Python
 
master defense hyun-wong choi_2019_05_14_rev19
master defense hyun-wong choi_2019_05_14_rev19master defense hyun-wong choi_2019_05_14_rev19
master defense hyun-wong choi_2019_05_14_rev19
 
defense hyun-wong choi_2019_05_14_rev18
defense hyun-wong choi_2019_05_14_rev18defense hyun-wong choi_2019_05_14_rev18
defense hyun-wong choi_2019_05_14_rev18
 
master defense hyun-wong choi_2019_05_14_rev19
master defense hyun-wong choi_2019_05_14_rev19master defense hyun-wong choi_2019_05_14_rev19
master defense hyun-wong choi_2019_05_14_rev19
 
master defense hyun-wong choi_2019_05_14_rev19
master defense hyun-wong choi_2019_05_14_rev19master defense hyun-wong choi_2019_05_14_rev19
master defense hyun-wong choi_2019_05_14_rev19
 
Reinforcement learning
Reinforcement learningReinforcement learning
Reinforcement learning
 
Final edited master defense-hyun_wong choi_2019_05_23_rev21
Final edited master defense-hyun_wong choi_2019_05_23_rev21Final edited master defense-hyun_wong choi_2019_05_23_rev21
Final edited master defense-hyun_wong choi_2019_05_23_rev21
 
An introduction to reinforcement learning
An introduction to  reinforcement learningAn introduction to  reinforcement learning
An introduction to reinforcement learning
 
IC2IT 2013 Presentation
IC2IT 2013 PresentationIC2IT 2013 Presentation
IC2IT 2013 Presentation
 
IC2IT 2013 Presentation
IC2IT 2013 PresentationIC2IT 2013 Presentation
IC2IT 2013 Presentation
 
Learning Strategy with Groups on Page Based Students' Profiles
Learning Strategy with Groups on Page Based Students' ProfilesLearning Strategy with Groups on Page Based Students' Profiles
Learning Strategy with Groups on Page Based Students' Profiles
 
Neural Learning to Rank
Neural Learning to RankNeural Learning to Rank
Neural Learning to Rank
 
Learning
LearningLearning
Learning
 
Learning strategy with groups on page based students' profiles
Learning strategy with groups on page based students' profilesLearning strategy with groups on page based students' profiles
Learning strategy with groups on page based students' profiles
 
Multilayer Perceptron (DLAI D1L2 2017 UPC Deep Learning for Artificial Intell...
Multilayer Perceptron (DLAI D1L2 2017 UPC Deep Learning for Artificial Intell...Multilayer Perceptron (DLAI D1L2 2017 UPC Deep Learning for Artificial Intell...
Multilayer Perceptron (DLAI D1L2 2017 UPC Deep Learning for Artificial Intell...
 

More from Tekendra Nath Yogi

Unit4: Knowledge Representation
Unit4: Knowledge RepresentationUnit4: Knowledge Representation
Unit4: Knowledge RepresentationTekendra Nath Yogi
 
BIM Data Mining Unit5 by Tekendra Nath Yogi
 BIM Data Mining Unit5 by Tekendra Nath Yogi BIM Data Mining Unit5 by Tekendra Nath Yogi
BIM Data Mining Unit5 by Tekendra Nath YogiTekendra Nath Yogi
 
BIM Data Mining Unit4 by Tekendra Nath Yogi
 BIM Data Mining Unit4 by Tekendra Nath Yogi BIM Data Mining Unit4 by Tekendra Nath Yogi
BIM Data Mining Unit4 by Tekendra Nath YogiTekendra Nath Yogi
 
BIM Data Mining Unit2 by Tekendra Nath Yogi
 BIM Data Mining Unit2 by Tekendra Nath Yogi BIM Data Mining Unit2 by Tekendra Nath Yogi
BIM Data Mining Unit2 by Tekendra Nath YogiTekendra Nath Yogi
 
BIM Data Mining Unit1 by Tekendra Nath Yogi
 BIM Data Mining Unit1 by Tekendra Nath Yogi BIM Data Mining Unit1 by Tekendra Nath Yogi
BIM Data Mining Unit1 by Tekendra Nath YogiTekendra Nath Yogi
 
B. SC CSIT Computer Graphics Unit 5 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 5 By Tekendra Nath YogiB. SC CSIT Computer Graphics Unit 5 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 5 By Tekendra Nath YogiTekendra Nath Yogi
 
B. SC CSIT Computer Graphics Lab By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Lab By Tekendra Nath YogiB. SC CSIT Computer Graphics Lab By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Lab By Tekendra Nath YogiTekendra Nath Yogi
 
B. SC CSIT Computer Graphics Unit 4 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 4 By Tekendra Nath YogiB. SC CSIT Computer Graphics Unit 4 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 4 By Tekendra Nath YogiTekendra Nath Yogi
 
B. SC CSIT Computer Graphics Unit 3 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 3 By Tekendra Nath YogiB. SC CSIT Computer Graphics Unit 3 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 3 By Tekendra Nath YogiTekendra Nath Yogi
 
B. SC CSIT Computer Graphics Unit 2 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 2 By Tekendra Nath YogiB. SC CSIT Computer Graphics Unit 2 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 2 By Tekendra Nath YogiTekendra Nath Yogi
 
B. SC CSIT Computer Graphics Unit 1.3 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 1.3 By Tekendra Nath YogiB. SC CSIT Computer Graphics Unit 1.3 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 1.3 By Tekendra Nath YogiTekendra Nath Yogi
 
B. SC CSIT Computer Graphics Unit1.2 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit1.2 By Tekendra Nath YogiB. SC CSIT Computer Graphics Unit1.2 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit1.2 By Tekendra Nath YogiTekendra Nath Yogi
 
B. SC CSIT Computer Graphics Unit1.1 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit1.1 By Tekendra Nath YogiB. SC CSIT Computer Graphics Unit1.1 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit1.1 By Tekendra Nath YogiTekendra Nath Yogi
 
B.sc CSIT 2nd semester C++ Unit7
B.sc CSIT  2nd semester C++ Unit7B.sc CSIT  2nd semester C++ Unit7
B.sc CSIT 2nd semester C++ Unit7Tekendra Nath Yogi
 

More from Tekendra Nath Yogi (20)

Unit4: Knowledge Representation
Unit4: Knowledge RepresentationUnit4: Knowledge Representation
Unit4: Knowledge Representation
 
Unit2: Agents and Environment
Unit2: Agents and EnvironmentUnit2: Agents and Environment
Unit2: Agents and Environment
 
Unit10
Unit10Unit10
Unit10
 
Unit9
Unit9Unit9
Unit9
 
Unit8
Unit8Unit8
Unit8
 
Unit7
Unit7Unit7
Unit7
 
BIM Data Mining Unit5 by Tekendra Nath Yogi
 BIM Data Mining Unit5 by Tekendra Nath Yogi BIM Data Mining Unit5 by Tekendra Nath Yogi
BIM Data Mining Unit5 by Tekendra Nath Yogi
 
BIM Data Mining Unit4 by Tekendra Nath Yogi
 BIM Data Mining Unit4 by Tekendra Nath Yogi BIM Data Mining Unit4 by Tekendra Nath Yogi
BIM Data Mining Unit4 by Tekendra Nath Yogi
 
BIM Data Mining Unit2 by Tekendra Nath Yogi
 BIM Data Mining Unit2 by Tekendra Nath Yogi BIM Data Mining Unit2 by Tekendra Nath Yogi
BIM Data Mining Unit2 by Tekendra Nath Yogi
 
BIM Data Mining Unit1 by Tekendra Nath Yogi
 BIM Data Mining Unit1 by Tekendra Nath Yogi BIM Data Mining Unit1 by Tekendra Nath Yogi
BIM Data Mining Unit1 by Tekendra Nath Yogi
 
Unit6
Unit6Unit6
Unit6
 
B. SC CSIT Computer Graphics Unit 5 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 5 By Tekendra Nath YogiB. SC CSIT Computer Graphics Unit 5 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 5 By Tekendra Nath Yogi
 
B. SC CSIT Computer Graphics Lab By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Lab By Tekendra Nath YogiB. SC CSIT Computer Graphics Lab By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Lab By Tekendra Nath Yogi
 
B. SC CSIT Computer Graphics Unit 4 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 4 By Tekendra Nath YogiB. SC CSIT Computer Graphics Unit 4 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 4 By Tekendra Nath Yogi
 
B. SC CSIT Computer Graphics Unit 3 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 3 By Tekendra Nath YogiB. SC CSIT Computer Graphics Unit 3 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 3 By Tekendra Nath Yogi
 
B. SC CSIT Computer Graphics Unit 2 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 2 By Tekendra Nath YogiB. SC CSIT Computer Graphics Unit 2 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 2 By Tekendra Nath Yogi
 
B. SC CSIT Computer Graphics Unit 1.3 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 1.3 By Tekendra Nath YogiB. SC CSIT Computer Graphics Unit 1.3 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit 1.3 By Tekendra Nath Yogi
 
B. SC CSIT Computer Graphics Unit1.2 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit1.2 By Tekendra Nath YogiB. SC CSIT Computer Graphics Unit1.2 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit1.2 By Tekendra Nath Yogi
 
B. SC CSIT Computer Graphics Unit1.1 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit1.1 By Tekendra Nath YogiB. SC CSIT Computer Graphics Unit1.1 By Tekendra Nath Yogi
B. SC CSIT Computer Graphics Unit1.1 By Tekendra Nath Yogi
 
B.sc CSIT 2nd semester C++ Unit7
B.sc CSIT  2nd semester C++ Unit7B.sc CSIT  2nd semester C++ Unit7
B.sc CSIT 2nd semester C++ Unit7
 

Recently uploaded

Botany 4th semester file By Sumit Kumar yadav.pdf
Botany 4th semester file By Sumit Kumar yadav.pdfBotany 4th semester file By Sumit Kumar yadav.pdf
Botany 4th semester file By Sumit Kumar yadav.pdfSumit Kumar yadav
 
Stunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCR
Stunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCRStunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCR
Stunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCRDelhi Call girls
 
GBSN - Microbiology (Unit 2)
GBSN - Microbiology (Unit 2)GBSN - Microbiology (Unit 2)
GBSN - Microbiology (Unit 2)Areesha Ahmad
 
Recombination DNA Technology (Nucleic Acid Hybridization )
Recombination DNA Technology (Nucleic Acid Hybridization )Recombination DNA Technology (Nucleic Acid Hybridization )
Recombination DNA Technology (Nucleic Acid Hybridization )aarthirajkumar25
 
Animal Communication- Auditory and Visual.pptx
Animal Communication- Auditory and Visual.pptxAnimal Communication- Auditory and Visual.pptx
Animal Communication- Auditory and Visual.pptxUmerFayaz5
 
Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...
Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...
Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...Sérgio Sacani
 
VIRUSES structure and classification ppt by Dr.Prince C P
VIRUSES structure and classification ppt by Dr.Prince C PVIRUSES structure and classification ppt by Dr.Prince C P
VIRUSES structure and classification ppt by Dr.Prince C PPRINCE C P
 
Botany 4th semester series (krishna).pdf
Botany 4th semester series (krishna).pdfBotany 4th semester series (krishna).pdf
Botany 4th semester series (krishna).pdfSumit Kumar yadav
 
Formation of low mass protostars and their circumstellar disks
Formation of low mass protostars and their circumstellar disksFormation of low mass protostars and their circumstellar disks
Formation of low mass protostars and their circumstellar disksSérgio Sacani
 
TEST BANK For Radiologic Science for Technologists, 12th Edition by Stewart C...
TEST BANK For Radiologic Science for Technologists, 12th Edition by Stewart C...TEST BANK For Radiologic Science for Technologists, 12th Edition by Stewart C...
TEST BANK For Radiologic Science for Technologists, 12th Edition by Stewart C...ssifa0344
 
Nanoparticles synthesis and characterization​ ​
Nanoparticles synthesis and characterization​  ​Nanoparticles synthesis and characterization​  ​
Nanoparticles synthesis and characterization​ ​kaibalyasahoo82800
 
Zoology 4th semester series (krishna).pdf
Zoology 4th semester series (krishna).pdfZoology 4th semester series (krishna).pdf
Zoology 4th semester series (krishna).pdfSumit Kumar yadav
 
Isotopic evidence of long-lived volcanism on Io
Isotopic evidence of long-lived volcanism on IoIsotopic evidence of long-lived volcanism on Io
Isotopic evidence of long-lived volcanism on IoSérgio Sacani
 
STERILITY TESTING OF PHARMACEUTICALS ppt by DR.C.P.PRINCE
STERILITY TESTING OF PHARMACEUTICALS ppt by DR.C.P.PRINCESTERILITY TESTING OF PHARMACEUTICALS ppt by DR.C.P.PRINCE
STERILITY TESTING OF PHARMACEUTICALS ppt by DR.C.P.PRINCEPRINCE C P
 
Biological Classification BioHack (3).pdf
Biological Classification BioHack (3).pdfBiological Classification BioHack (3).pdf
Biological Classification BioHack (3).pdfmuntazimhurra
 
Green chemistry and Sustainable development.pptx
Green chemistry  and Sustainable development.pptxGreen chemistry  and Sustainable development.pptx
Green chemistry and Sustainable development.pptxRajatChauhan518211
 
DIFFERENCE IN BACK CROSS AND TEST CROSS
DIFFERENCE IN  BACK CROSS AND TEST CROSSDIFFERENCE IN  BACK CROSS AND TEST CROSS
DIFFERENCE IN BACK CROSS AND TEST CROSSLeenakshiTyagi
 

Recently uploaded (20)

Botany 4th semester file By Sumit Kumar yadav.pdf
Botany 4th semester file By Sumit Kumar yadav.pdfBotany 4th semester file By Sumit Kumar yadav.pdf
Botany 4th semester file By Sumit Kumar yadav.pdf
 
Stunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCR
Stunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCRStunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCR
Stunning ➥8448380779▻ Call Girls In Panchshil Enclave Delhi NCR
 
The Philosophy of Science
The Philosophy of ScienceThe Philosophy of Science
The Philosophy of Science
 
GBSN - Microbiology (Unit 2)
GBSN - Microbiology (Unit 2)GBSN - Microbiology (Unit 2)
GBSN - Microbiology (Unit 2)
 
Recombination DNA Technology (Nucleic Acid Hybridization )
Recombination DNA Technology (Nucleic Acid Hybridization )Recombination DNA Technology (Nucleic Acid Hybridization )
Recombination DNA Technology (Nucleic Acid Hybridization )
 
Animal Communication- Auditory and Visual.pptx
Animal Communication- Auditory and Visual.pptxAnimal Communication- Auditory and Visual.pptx
Animal Communication- Auditory and Visual.pptx
 
Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...
Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...
Discovery of an Accretion Streamer and a Slow Wide-angle Outflow around FUOri...
 
9953056974 Young Call Girls In Mahavir enclave Indian Quality Escort service
9953056974 Young Call Girls In Mahavir enclave Indian Quality Escort service9953056974 Young Call Girls In Mahavir enclave Indian Quality Escort service
9953056974 Young Call Girls In Mahavir enclave Indian Quality Escort service
 
CELL -Structural and Functional unit of life.pdf
CELL -Structural and Functional unit of life.pdfCELL -Structural and Functional unit of life.pdf
CELL -Structural and Functional unit of life.pdf
 
VIRUSES structure and classification ppt by Dr.Prince C P
VIRUSES structure and classification ppt by Dr.Prince C PVIRUSES structure and classification ppt by Dr.Prince C P
VIRUSES structure and classification ppt by Dr.Prince C P
 
Botany 4th semester series (krishna).pdf
Botany 4th semester series (krishna).pdfBotany 4th semester series (krishna).pdf
Botany 4th semester series (krishna).pdf
 
Formation of low mass protostars and their circumstellar disks
Formation of low mass protostars and their circumstellar disksFormation of low mass protostars and their circumstellar disks
Formation of low mass protostars and their circumstellar disks
 
TEST BANK For Radiologic Science for Technologists, 12th Edition by Stewart C...
TEST BANK For Radiologic Science for Technologists, 12th Edition by Stewart C...TEST BANK For Radiologic Science for Technologists, 12th Edition by Stewart C...
TEST BANK For Radiologic Science for Technologists, 12th Edition by Stewart C...
 
Nanoparticles synthesis and characterization​ ​
Nanoparticles synthesis and characterization​  ​Nanoparticles synthesis and characterization​  ​
Nanoparticles synthesis and characterization​ ​
 
Zoology 4th semester series (krishna).pdf
Zoology 4th semester series (krishna).pdfZoology 4th semester series (krishna).pdf
Zoology 4th semester series (krishna).pdf
 
Isotopic evidence of long-lived volcanism on Io
Isotopic evidence of long-lived volcanism on IoIsotopic evidence of long-lived volcanism on Io
Isotopic evidence of long-lived volcanism on Io
 
STERILITY TESTING OF PHARMACEUTICALS ppt by DR.C.P.PRINCE
STERILITY TESTING OF PHARMACEUTICALS ppt by DR.C.P.PRINCESTERILITY TESTING OF PHARMACEUTICALS ppt by DR.C.P.PRINCE
STERILITY TESTING OF PHARMACEUTICALS ppt by DR.C.P.PRINCE
 
Biological Classification BioHack (3).pdf
Biological Classification BioHack (3).pdfBiological Classification BioHack (3).pdf
Biological Classification BioHack (3).pdf
 
Green chemistry and Sustainable development.pptx
Green chemistry  and Sustainable development.pptxGreen chemistry  and Sustainable development.pptx
Green chemistry and Sustainable development.pptx
 
DIFFERENCE IN BACK CROSS AND TEST CROSS
DIFFERENCE IN  BACK CROSS AND TEST CROSSDIFFERENCE IN  BACK CROSS AND TEST CROSS
DIFFERENCE IN BACK CROSS AND TEST CROSS
 

Unit5: Learning

  • 1. Unit 5: Learning LH 6 Presented By : Tekendra Nath Yogi Tekendranath@gmail.com College Of Applied Business And Technology 2/2/2019 1Presented By: Tekendra Nath Yogi
  • 2. Contd… • Outline: – 5.1 Why learning? – 5.2 Supervised (Error based) learning • 5.2.1 Gradient descent learning: Least Mean Square, Back Propagation algorithm • 5.2.2 Stochastic learning – 5.3 Unsupervised learning • 5.3.1 Hebbian learning algorithm • 5.3.2 Competitive learning – 5.4 Reinforced learning (output based) – 5.5 Genetic algorithms: operators 22/2/2019 Presented By: Tekendra Nath Yogi
  • 3. What is learning • Learning is the process of acquiring new or modifying existing knowledge, behaviors, skills, values, or preferences. • Evidence that learning has occurred may be seen in changes in behavior from simple to complex. 32/2/2019 Presented By: Tekendra Nath Yogi
  • 4. What is Machine learning? • The machine Learning denotes changes in the system that are adaptive in the sense that they enable the system to do the same task more effectively the next time. • Like human learning from past experiences, computer system learns from data, which represent some “past experiences” of an application domain. • Therefore, Machine learning gives computers the ability to learn without being explicitly programmed. 42/2/2019 Presented By: Tekendra Nath Yogi
  • 5. Why machine learning? • To learn a target function (relation between input and output)that can be used to predict the values of a discrete class attribute, – e.g., male or female, and high-risk or low risk, etc. • To model the underlying structure or distribution in the data in order to learn more about the data. • To learns behavior through trial-and-error interactions with a dynamic environment. 52/2/2019 Presented By: Tekendra Nath Yogi
  • 6. Types of Machine Learning • Based on training set machine learning algorithms are classified into the following three categories: 62/2/2019 Presented By: Tekendra Nath Yogi
  • 7. Supervised learning • Supervised learning is where you have input variables (x) and an output variable (Y) and you use an algorithm to learn the mapping function from the input to the output. Y = f(X) • Learning stops when the algorithm achieves an acceptable level of performance. • when you have new input data (x) that you can predict the output variables (Y) for that data. f 2/2/2019 7Presented By: Tekendra Nath Yogi
  • 8. Supervised learning • It is called supervised learning because the process of an algorithm learning from the training dataset can be thought of as a teacher supervising the learning process. • We know the correct answers, the algorithm iteratively makes predictions on the training data and is corrected by the teacher. 2/2/2019 8Presented By: Tekendra Nath Yogi
  • 9. Supervised Learning Example 1 2/2/2019 9Presented By: Tekendra Nath Yogi
  • 10. Application Example of Supervised learning • Suppose we have a dataset giving the living areas and prices of house from KTM: 2/2/2019 10Presented By: Tekendra Nath Yogi
  • 11. Application Example of Supervised learning • We can plot this data: 2/2/2019 11Presented By: Tekendra Nath Yogi
  • 12. Application Example of Supervised learning • Given a data like this , we can learn to predict the price of other houses in KTM as a function of the size of their area, such types of learning is known as supervised learning. • Price of house = f(area of house) i.e., y= f(x) 2/2/2019 12Presented By: Tekendra Nath Yogi
  • 13. Unsupervised Learning • Unsupervised learning is where you only have input data (X) and no corresponding output variables(targets/ labels). • The goal for unsupervised learning is to model the underlying structure or distribution in the data in order to learn more about the data. • These are called unsupervised learning because unlike supervised learning above there is no correct answers and there is no teacher. • Algorithms are left to their own devises to discover and present the interesting structure in the data. • E.g., Clustering 2/2/2019 13Presented By: Tekendra Nath Yogi
  • 14. Unsupervised learning Example1 2/2/2019 14Presented By: Tekendra Nath Yogi
  • 15. Unsupervised learning example 2 2/2/2019 15Presented By: Tekendra Nath Yogi
  • 16. Reinforcement learning • Is learning behavior through trial-and-error interactions with a environment. • Is learning how to act in order to maximize a reward (Encouragements). • Reinforcement learning emphasizes learning feedback that evaluates the learner's performance without providing standards of correctness in the form of behavioral targets. • Example: Bicycle learning, game playing, etc. 2/2/2019 16Presented By: Tekendra Nath Yogi
  • 17. Contd… 172/2/2019 Presented By: Tekendra Nath Yogi
  • 18. Supervised learning Algorithm • Classification: – To predict the outcome of a given sample where the output variable is in the form of categories(discrete). Examples include labels such as male and female, sick and healthy. • Regression: – To predict the outcome of a given sample where the output variable is in the form of real values(continuous). Examples include real-valued labels denoting the amount of rainfall, the height of a person. 182/2/2019 Presented By: Tekendra Nath Yogi
  • 19. Contd… 192/2/2019 Presented By: Tekendra Nath Yogi
  • 20. Least Mean square Regression algorithm • For a given data build the predicted output O close to the standard output y i.e., t (target) as: • f(x)= O =w0 + w1 x1 + … + wn xn • Train (adjust or choose) the wi’s such that they minimize the squared error – E[w1,…,wn] = ½ i=1 to m (ti-oi)2 • Determine the output for new input based on the lien which has minimum value of initial seared error. 202/2/2019 Presented By: Tekendra Nath Yogi
  • 21. Gradient Descent Learning Rule • Gradient descent algorithm starts with some initial guess for weights (coefficients) Wi and repeatedly performs the update until convergence : • Wi= Wi - l* E[w] • Where, – l is learning rate or step size or rate of weight adjustment. – Gradient: E[w]=[E/w0,… E/wn] E[wi]=/wi 1/2i=1 to m(ti-oi)2 = /wi 1/2i=1 to m(ti-i wi xi)2 = i=1 to m(ti- oi)(-xi) 212/2/2019 Presented By: Tekendra Nath Yogi
  • 22. Contd… • For a single training example up date rule is: • Wi= Wi +l*(ti- oi)(xi) • This is called LMS update rule 222/2/2019 Presented By: Tekendra Nath Yogi
  • 23. Contd… Gradient-Descent(training_examples, l) Each training example is a pair of the form <(x1,…xn),t> where (x1,…,xn) is input values, and t is the target output value, l is the learning rate (e.g. 0.1) • Initialize each wi to some small random value • Until the termination condition is met, Do – Initialize each wi to zero – For each <(x1,…xn),t> in training_examples Do • Input the instance (x1,…,xn) to the linear function and compute the output o • Calculate change in weight – wi= l* (t-o) xi – For each weight wi Do • wi=wi+wi 232/2/2019 Presented By: Tekendra Nath Yogi
  • 24. Contd… • If we have more than one training examples: • Two way to modify this gradient descent algorithm for a training set of more than one example. – Batch gradient descent – Stochastic(Incremental ) gradient descent 242/2/2019 Presented By: Tekendra Nath Yogi
  • 25. Contd… • Batch mode : gradient descent – This method looks at every example in the entire training set on every step. So it is static. – Algorithm Repeat until convergene { wi=wi + l* i=1 to m (ti- oi)xi over the entire data D and for each weight } 252/2/2019 Presented By: Tekendra Nath Yogi
  • 26. Contd… • Stochastic (Incremental mode: )gradient descent – This algorithm repeatedly run through the training set and each time encounter a training example. So it is dynamic. – Algorithm: for(i=1 to m) { wi=wi +l* (t-o) xi for each weight } 262/2/2019 Presented By: Tekendra Nath Yogi
  • 27. Contd… • Comparison between batch gradient descent and stochastic gradient descent: 272/2/2019 Presented By: Tekendra Nath Yogi BGD SGD For a single step take entire training set For a single step take only single training example So a costly operation It does not matter how large m is Slower Faster Not preferred Used when m is large
  • 28. Artificial Neural Network • A neural network is composed of number of nodes or units , connected by links. Each link has a numeric weight associated with it. • Artificial neural networks are programs design to solve any problem by trying to mimic the structure and the function of our nervous system. 282/2/2019 Presented By: Tekendra Nath Yogi 2 1 3 4 5 6 Input Layer Hidden Layer Output Layer
Artificial neural network model:
• Inputs to the network are represented by the mathematical symbols xn.
• Each of these inputs is multiplied by a connection weight wn.
• These products are summed and fed through the transfer function f() to generate the output:
  sum = w1·x1 + w2·x2 + … + wn·xn,   output = f(sum)
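As a quick illustration, a single neuron's forward pass can be sketched as follows; the sigmoid transfer function and the sample numbers are assumptions, chosen to match the back-propagation example later in this unit.

    import math

    def neuron(inputs, weights, bias=0.0):
        # Weighted sum: w1*x1 + ... + wn*xn + bias
        s = sum(w * x for w, x in zip(weights, inputs)) + bias
        # Sigmoid transfer function f(s) = 1 / (1 + e^(-s))
        return 1.0 / (1.0 + math.exp(-s))

    print(neuron([1, 0, 1], [0.2, 0.4, -0.5], bias=-0.4))  # ≈ 0.332, node 4 of the later example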
Back propagation algorithm
• Back propagation is a neural network learning algorithm. It learns by adjusting the weights so as to be able to predict the correct class label of the input.
Input: D, a training data set with the associated class labels; l, the learning rate (normally 0.0–1.0).
Output: a trained neural network.
Method:
Step 1: Initialize all weights and biases in the network.
Step 2: While the termination condition is not satisfied, for each training tuple X in D:
Contd…
2.1 Calculate the output:
  For the input layer: Oj = Ij
  For the hidden layer and output layer:
    Ij = Σ_i Wij·Oi + θj
    Oj = 1 / (1 + e^(−Ij))
Contd…
2.2 Calculate the error:
  For the output layer: errj = Oj·(1 − Oj)·(Tj − Oj)
  For the hidden layer: errj = Oj·(1 − Oj)·Σ_k errk·Wjk
2.3 Update the weights: Wij(new) = Wij(old) + l·errj·Oi
2.4 Update the biases: θj(new) = θj(old) + l·errj
Contd…
• Example: sample calculations for learning by the back-propagation algorithm.
• Consider the multilayer feed-forward neural network shown earlier (input nodes 1–3, hidden nodes 4–5, output node 6). Let the learning rate be 0.9. The initial weight and bias values of the network are given below, along with the first training tuple, X = (1, 0, 1), whose class label is 1.
Contd…
• Step 1: Initialization
  – Initial input: x1 = 1, x2 = 0, x3 = 1
  – Bias values: θ4 = −0.4, θ5 = 0.2, θ6 = 0.1
  – Initial weights:
    W14 = 0.2, W15 = −0.3, W24 = 0.4, W25 = 0.1,
    W34 = −0.5, W35 = 0.2, W46 = −0.3, W56 = −0.2
Contd…
• Step 2: Termination condition: the weights of two successive iterations are nearly equal, or the user-defined number of iterations is reached.
• 2.1 For each training tuple X in D, calculate the output at each node. For the input layer (Oj = Ij):
  O1 = I1 = 1
  O2 = I2 = 0
  O3 = I3 = 1
Contd…
• For the hidden layer and output layer:
1. I4 = W14·O1 + W24·O2 + W34·O3 + θ4 = 0.2·1 + 0.4·0 + (−0.5)·1 + (−0.4) = −0.7
   O4 = 1 / (1 + e^0.7) = 0.332
2. I5 = W15·O1 + W25·O2 + W35·O3 + θ5 = (−0.3)·1 + 0.1·0 + 0.2·1 + 0.2 = 0.1
   O5 = 1 / (1 + e^−0.1) = 0.525
Contd…
3. I6 = W46·O4 + W56·O5 + θ6 = (−0.3)·0.332 + (−0.2)·0.525 + 0.1 = −0.105
   O6 = 1 / (1 + e^0.105) = 0.474
Contd…
• Calculation of the error at each node:
• For the output layer: errj = Oj·(1 − Oj)·(Tj − Oj)
  err6 = 0.474·(1 − 0.474)·(1 − 0.474) = 0.1311
• For the hidden layer: errj = Oj·(1 − Oj)·Σ_k errk·Wjk
  err5 = 0.525·(1 − 0.525)·0.1311·(−0.2) = −0.0065
  err4 = 0.332·(1 − 0.332)·0.1311·(−0.3) = −0.0087
Contd…
• Update the weights: Wij(new) = Wij(old) + l·errj·Oi
1. W46(new) = −0.3 + 0.9·0.1311·0.332 = −0.261
2. W56(new) = −0.2 + 0.9·0.1311·0.525 = −0.138
3. W14(new) = 0.2 + 0.9·(−0.0087)·1 = 0.192
Contd…
4. W24(new) = 0.4 + 0.9·(−0.0087)·0 = 0.4
5. W15(new) = −0.3 + 0.9·(−0.0065)·1 = −0.306
6. W34(new) = −0.5 + 0.9·(−0.0087)·1 = −0.508
Contd…
7. W35(new) = 0.2 + 0.9·(−0.0065)·1 = 0.194
8. W25(new) = 0.1 + 0.9·(−0.0065)·0 = 0.1
Contd…
• Update the bias values: θj(new) = θj(old) + l·errj
1. θ6(new) = 0.1 + 0.9·0.1311 = 0.218
2. θ5(new) = 0.2 + 0.9·(−0.0065) = 0.194
3. θ4(new) = −0.4 + 0.9·(−0.0087) = −0.408
• And so on until convergence!
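The whole first iteration above can be reproduced with a short script; this is a sketch that follows the slide equations directly (the dictionary-based representation and the variable names are illustrative choices, not the author's code).

    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    # Network of the worked example: input nodes 1-3, hidden nodes 4-5, output node 6.
    x = {1: 1.0, 2: 0.0, 3: 1.0}   # training tuple X = (1, 0, 1)
    t, l = 1.0, 0.9                # target class label and learning rate
    w = {(1, 4): 0.2, (1, 5): -0.3, (2, 4): 0.4, (2, 5): 0.1,
         (3, 4): -0.5, (3, 5): 0.2, (4, 6): -0.3, (5, 6): -0.2}
    theta = {4: -0.4, 5: 0.2, 6: 0.1}

    # Forward pass: Ij = sum_i Wij*Oi + theta_j, then Oj = sigmoid(Ij).
    O = dict(x)
    for j in (4, 5, 6):
        I_j = sum(w[(i, j)] * O[i] for i in list(O) if (i, j) in w) + theta[j]
        O[j] = sigmoid(I_j)
    print(round(O[4], 3), round(O[5], 3), round(O[6], 3))  # 0.332 0.525 0.474

    # Backward pass: node errors, then weight and bias updates.
    err = {6: O[6] * (1 - O[6]) * (t - O[6])}
    for j in (4, 5):
        err[j] = O[j] * (1 - O[j]) * err[6] * w[(j, 6)]
    for (i, j) in w:
        w[(i, j)] += l * err[j] * O[i]
    for j in theta:
        theta[j] += l * err[j]
    print(round(w[(4, 6)], 3), round(theta[6], 3))  # -0.261 0.218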
Hebbian Learning Algorithm
1. Take the number of steps = the number of inputs (X).
2. In each step:
   I. Calculate the net input = weight (W) × input (X)
   II. Calculate the net output O = f(net input)
   III. Calculate the change in weight = learning rate × O × X
   IV. New weight = old weight + change in weight
3. Repeat step 2 until the terminating condition is satisfied.
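A minimal sketch of these steps in Python, assuming the identity transfer function for f; the input sequence, initial weight, and learning rate are illustrative.

    def hebbian(inputs, w=0.2, lr=0.1):
        for x in inputs:        # one step per input, as in the algorithm above
            net = w * x         # I.   net input = weight * input
            o = net             # II.  net output = f(net input), with f = identity
            w += lr * o * x     # III/IV. change in weight, then new weight
        return w

    print(hebbian([1.0, 0.5, 1.0]))  # the weight grows when input and output are correlated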
Competitive Learning Rule (Winner-takes-all)
Basic concept of a competitive network:
• This network is just like a single-layer feed-forward network, with feedback connections between the outputs.
• The feed-forward connections are excitatory.
• The feedback connections between outputs are inhibitory (drawn as dotted lines in the usual diagram), which means the competitors never support themselves.
Basic concept of the Competitive Learning Rule:
• It is an unsupervised learning algorithm in which the output nodes compete with each other to represent the input pattern.
• Hence, the main concept is that during training, the output unit with the highest net input for a given input is declared the winner.
• Then only the winning neuron's weights are updated, and the rest of the neurons are left unchanged.
• Therefore, this rule is also called Winner-takes-all.
Competitive learning algorithm
• For n inputs and m outputs, repeat until convergence:
  – Calculate the net input of each output unit j: net_j = Σ_i w_ij·x_i
  – The outputs of the network are determined by a winner-takes-all competition, such that only the output node receiving the maximal net input will output 1 while all others output 0.
  – Calculate the change in weight for the winning unit j only: Δw_ij = l·(x_i − w_ij)
  – Update the weights: w_ij = w_ij + Δw_ij
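The formula images on this slide did not survive extraction, so the sketch below assumes the standard winner-takes-all update Δw_ij = l·(x_i − w_ij), applied to the winning unit only; the weights, input, and learning rate are illustrative.

    def competitive_step(x, W, lr=0.5):
        # Net input of each output unit: dot product of its weight row with the input x.
        nets = [sum(wi * xi for wi, xi in zip(row, x)) for row in W]
        winner = nets.index(max(nets))  # only the unit with maximal net input outputs 1
        # Move ONLY the winner's weights toward the input; all other units stay unchanged.
        W[winner] = [wi + lr * (xi - wi) for wi, xi in zip(W[winner], x)]
        return winner

    W = [[0.2, 0.8], [0.9, 0.1]]                 # two output units, two inputs
    print(competitive_step([1.0, 0.0], W), W)    # unit 1 wins and moves toward the input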
Unit 5: Genetic Algorithms (Copying ideas of Nature)
Introduction to Genetic Algorithms
• Nature has always been a great source of inspiration to all mankind.
• A genetic algorithm is a search heuristic inspired by Charles Darwin's theory of natural evolution: "Select the best, discard the rest."
• That is, the fittest individuals are selected for reproduction in order to produce the offspring of the next generation.
• GAs were developed by John Holland and his students and colleagues at the University of Michigan.
Notion of Natural Selection
• The process of natural selection starts with the selection of the fittest individuals from a population. They produce offspring which inherit the characteristics of the parents and are added to the next generation.
• If the parents have better fitness, their offspring will be better than the parents and have a better chance at surviving.
• This process keeps iterating, and at the end a generation with the fittest individuals is found.
• This notion can be applied to a search problem: we consider a set of solutions to a problem and select the set of best ones out of them.
Basic Terminology
• Population: a set of individuals, each representing a possible solution to the given problem.
• Gene: a solution to the problem is represented as a set of parameters; each of these parameters is known as a gene.
• Chromosome: the genes joined together form a string of values called a chromosome.
• Fitness function: measures the quality of a solution. It is problem dependent.
Nature to Computer Mapping
  Nature        Computer
  Population    Set of solutions
  Individual    Solution to a problem
  Fitness       Quality of a solution
  Chromosome    Encoding for a solution
  Gene          Part of the encoding of a solution
  Reproduction  Crossover
• Five phases are considered in a genetic algorithm: initial population, fitness function, selection, crossover, and mutation.
Simple Genetic Algorithm
Simple_Genetic_Algorithm()
{
    Initialize the Population;
    Calculate Fitness Function;
    While (Fitness Value != Optimal Value)
    {
        Selection;   // natural selection, survival of the fittest
        Crossover;   // reproduction, propagate favorable characteristics
        Mutation;    // random tweaks
        Calculate Fitness Function;
    }
}
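To ground the pseudocode, here is a minimal runnable sketch in Python (an illustrative toy problem, not the author's code): it maximizes the number of 1-bits in a chromosome, using roulette-wheel selection, single-point crossover, and bit-flip mutation, i.e., the phases described on the following slides.

    import random

    CHROMOSOME_LEN, POP_SIZE, MUTATION_RATE = 10, 20, 0.01

    def fitness(chromosome):
        # Toy fitness function: the count of 1-bits; the optimum is all ones.
        return sum(chromosome)

    def roulette_select(population):
        # Fitness-proportionate (roulette-wheel) selection.
        return random.choices(population, weights=[fitness(c) + 1e-9 for c in population])[0]

    def crossover(p1, p2):
        # Single-point crossover: take p1's genes up to a random point, then p2's genes.
        point = random.randint(1, CHROMOSOME_LEN - 1)
        return p1[:point] + p2[point:]

    def mutate(chromosome):
        # Bit-flip mutation: maintains diversity and prevents premature convergence.
        return [1 - g if random.random() < MUTATION_RATE else g for g in chromosome]

    population = [[random.randint(0, 1) for _ in range(CHROMOSOME_LEN)] for _ in range(POP_SIZE)]
    generation = 0
    while generation < 500 and max(fitness(c) for c in population) < CHROMOSOME_LEN:
        population = [mutate(crossover(roulette_select(population), roulette_select(population)))
                      for _ in range(POP_SIZE)]
        generation += 1
    best = max(population, key=fitness)
    print(f"generation {generation}: best fitness = {fitness(best)}")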
Initialization of Population
• There are two primary methods to initialize a population in a GA:
  – Random initialization
  – Heuristic initialization
• There are several things to keep in mind when dealing with a GA population:
  – The diversity of the population should be maintained, otherwise it might lead to premature convergence.
  – The population size should not be kept very large, as it can cause the GA to slow down, while a smaller population might not be enough for a good mating pool. Therefore, an optimal population size is decided by trial and error.
Fitness Function
• The fitness function takes a candidate solution to the problem as input and produces as output how fit the individual is (its ability to compete with other individuals).
• A fitness function should possess the following characteristics:
  – It should be sufficiently fast to compute.
  – It must quantitatively measure how fit a given solution is, or how fit the individuals that can be produced from the given solution are.
Selection
• The idea of the selection phase is to select the fittest individuals and let them pass their genes on to the next generation.
• Two pairs of individuals (parents) are selected based on their fitness scores. Individuals with high fitness have more chance of being selected for reproduction.
• Selection of the parents can be done using any one of the following strategies:
  – Fitness-proportionate selection (roulette-wheel selection)
  – Tournament selection
  – Rank selection
Contd..
• Fitness-proportionate selection (roulette-wheel selection):
  – The chance of selection is directly proportional to the fitness value. This is a chance, not a certainty, so any individual can become a parent.
Contd..
• Tournament selection: randomly take k individuals and select only the best one among the k to become a parent of the next generation.
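A sketch of this strategy, reusing the fitness function from the GA sketch above; the tournament size k = 3 is an illustrative default.

    import random

    def tournament_select(population, k=3):
        # Pick k individuals at random and keep the fittest one as a parent.
        contenders = random.sample(population, k)
        return max(contenders, key=fitness)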
Contd..
• Rank selection is mostly used when the individuals in the population have very close fitness values. Close fitness values lead to each individual having an almost equal share of the roulette pie chart, so each individual, no matter how fit relative to the others, has approximately the same probability of getting selected as a parent. This in turn leads to a loss of selection pressure towards fitter individuals, making the GA make poor parent selections in such situations.
Contd..
• Rank selection: every individual in the population is ranked according to its fitness, and the higher-ranked individuals are preferred over the lower-ranked ones.
Crossover
• Crossover is the most significant phase in a genetic algorithm. For each pair of parents to be mated, a crossover point is chosen at random from within the genes.
• For example, consider the crossover point to be 3.
Contd..
• Offspring are created by exchanging the genes of the parents among themselves until the crossover point is reached.
• The new offspring are added to the population.
Mutation
• Thinking in the biological sense, do the children produced have exactly the same traits as their parents? The answer is no: during their growth, there is some change in the genes of children which makes them different from their parents.
• This process is known as mutation, which may be defined as a random tweak in the chromosome.
• This implies that some of the bits in the bit string can be flipped.
• Mutation occurs to maintain diversity within the population and to prevent premature convergence.
Termination
• The algorithm terminates:
  – If the population has converged (does not produce offspring that are significantly different from the previous generation). Then it is said that the genetic algorithm has provided a set of solutions to our problem.
  – Or if the predefined number of generations is reached.
Issues for GA Practitioners
• Choosing basic implementation issues:
  – representation
  – population size, mutation rate, ...
  – selection, deletion policies
  – crossover, mutation operators
• Termination criteria
Benefits of Genetic Algorithms
• The concept is easy to understand
• Modular, separate from the application
• Supports multi-objective optimization
• Good for "noisy" environments
• Always gives an answer, and the answer gets better with time
• Easy to exploit previous or alternate solutions
• Flexible building blocks for hybrid applications
• Substantial history and range of use
When to Use a GA
• Alternate solutions are too slow or overly complicated
• You need an exploratory tool to examine new approaches
• The problem is similar to one that has already been solved successfully by using a GA
• You want to hybridize with an existing solution
• The benefits of GA technology meet the key problem requirements
Thank You!