Machine Learning Techniques
M. Lilly Florence
Adhiyamaan College of Engineering
Hosur
Content
• Learning
• Types of Machine Learning
• Supervised Learning
• The Brain and the Neuron
• Design a Learning System
• Perspectives and Issues in Machine Learning
• Concept Learning as Task
• Concept Learning as Search
• Finding a Maximally Specific Hypothesis
• Version Spaces and the Candidate Elimination Algorithm
• Linear Discriminants
• Perceptron
• Linear Separability
• Linear Regression
Learning
• The term machine learning is said to have been coined by Arthur Lee
Samuel, a pioneer in the AI field, in 1959.
• "Machine learning is the field of study that gives computers the ability
to learn without being explicitly programmed." — Arthur L. Samuel,
AI pioneer, 1959.
• A computer program is said to learn from experience E with respect to
some class of tasks T and performance measure P, if its performance at
tasks in T, as measured by P, improves with experience E. — Tom
Mitchell, Machine Learning Professor at Carnegie Mellon University
• To illustrate this quote with an example, consider the problem of
recognizing handwritten digits:
• Task T: classifying handwritten digits from images
• Performance measure P : percentage of digits classified correctly
• Training experience E: a dataset of handwritten digits with given classifications (labels)
Why “Learn” ?
• Machine learning is programming computers to optimize a
performance criterion using example data or past experience.
• There is no need to “learn” to calculate payroll
• Learning is used when:
• Human expertise does not exist (navigating on Mars),
• Humans are unable to explain their expertise (speech recognition)
• Solution changes in time (routing on a computer network)
• Solution needs to be adapted to particular cases (user biometrics)
Basic components of learning process
• Four components, namely, data storage, abstraction, generalization and
evaluation.
• 1. Data storage - Facilities for storing and retrieving huge amounts of data
are an important component of the learning process
• 2. Abstraction - Abstraction is the process of extracting knowledge about
stored data. This involves creating general concepts about the data as a
whole. The creation of knowledge involves application of known models
and creation of new models. The process of fitting a model to a dataset is
known as training. When the model has been trained, the data is
transformed into an abstract form that summarizes the original
information.
• 3. Generalization - The term generalization describes the process of turning
the knowledge about stored data into a form that can be utilized for future
action.
• 4. Evaluation - Evaluation is the process of giving feedback to the user to measure
the utility of the learned knowledge.
Learning Model
• Learning models are broadly divided into three categories:
• Using a logical expression (logical models)
• Using the geometry of the instance space (geometric models)
• Using probability to classify the instance space (probabilistic models)
Applications of Machine Learning
• Email spam detection
• Face detection and matching (e.g., iPhone X)
• Web search (e.g., DuckDuckGo, Bing, Google)
• Sports predictions
• Post office (e.g., sorting letters by zip codes)
• ATMs (e.g., reading checks)
• Credit card fraud
• Stock predictions
• Smart assistants (Apple Siri, Amazon Alexa, . . . )
• Product recommendations (e.g., Netflix, Amazon)
• Self-driving cars (e.g., Uber, Tesla)
• Language translation (Google translate)
• Sentiment analysis
• Drug design
• Medical diagnosis
Types of Machine Learning
• The four broad categories of machine learning are summarized in
the following figure:
• Supervised learning
• Unsupervised learning
• Reinforcement learning
• Evolutionary learning
Types of Machine Learning
Supervised learning
• Supervised learning is the subcategory of machine learning that
focuses on learning a classification or regression model, that is,
learning from labeled training data.
• Classification
• Regression
The Brain and the Neuron
• Brain
• Nerve Cell-Neuron
• Each neuron is typically connected to thousands of other neurons, so it is
estimated that there are about 100 trillion (10^14) synapses within the brain.
After firing, a neuron must wait for some time to recover its energy (the
refractory period) before it can fire again.
• Hebb’s Rule - rule says that the changes in the strength of synaptic connections
are proportional to the correlation in the firing of the two connecting neurons. So
if two neurons consistently fire simultaneously, then any connection between
them will change in strength, becoming stronger.
• This idea, that synaptic connections between neurons and assemblies of
neurons are formed when they fire together and can become stronger, goes
by other names as well: it is also known as long-term potentiation and
neural plasticity, and it does appear to have correlates in real brains.
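Hebb's rule can be written as a weight update Δw = η · x_pre · x_post. A minimal sketch, in which the learning rate `eta` and the firing values are illustrative choices, not values from the slides:

```python
# Sketch of Hebb's rule: the connection between two neurons strengthens
# in proportion to the correlation of their activity.
# eta, pre, and post are illustrative names, not from the slides.

def hebbian_update(w, pre, post, eta=0.1):
    """Return the updated weight: w + eta * pre * post."""
    return w + eta * pre * post

w = 0.0
# Two neurons that consistently fire together: the weight grows.
for _ in range(5):
    w = hebbian_update(w, pre=1.0, post=1.0)
print(round(w, 2))  # 0.5
```

If either neuron is silent (`pre` or `post` is 0), the update is zero, so only correlated firing changes the connection.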
The Brain and the Neuron
• McCulloch and Pitts Neurons
• Studying neurons isn’t actually that easy, able to extract the neuron from the
brain, and then keep it alive so that you can see how it reacts in controlled
circumstances.
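The McCulloch and Pitts model reduces the neuron to a weighted sum of binary inputs followed by a hard threshold. A minimal sketch, with weights and threshold chosen here (as an illustrative assumption, not from the slides) so the neuron computes a logical AND:

```python
# A McCulloch-Pitts neuron: binary inputs, fixed weights, and a hard
# threshold. The neuron fires (outputs 1) when the weighted sum of its
# inputs reaches the threshold.

def mcp_neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With weights [1, 1] and threshold 2 the neuron implements AND:
# it fires only when both inputs are active.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, mcp_neuron([a, b], weights=[1, 1], threshold=2))
```

Lowering the threshold to 1 would turn the same neuron into an OR gate, which is why the threshold is as much a part of the model as the weights.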
Designing a Learning System
• The design involves the following key components:
1. Type of training experience – Direct/Indirect,
Supervised/Unsupervised
2. Choosing the Target Function
3. Choosing a representation for the Target Function
4. Choosing an approximation algorithm for the Target Function
5. The final Design
Designing a Learning System
Real-world examples of machine learning problems include
“Is this cancer?”,
“What is the market value of this house?”,
“Which of these people are good friends with each other?”,
“Will this rocket engine explode on take off?”,
“Will this person like this movie?”,
“Who is this?”, “What did you say?”, and
“How do you fly this thing?” All of these problems are excellent targets for
an ML project; in fact ML has been applied to each of them with great
success.
PERSPECTIVES AND ISSUES IN MACHINE
LEARNING
Issues in Machine Learning
• What algorithms exist for learning general target functions from specific training examples? In what
settings will particular algorithms converge to the desired function, given sufficient training data? Which
algorithms perform best for which types of problems and representations?
• How much training data is sufficient? What general bounds can be found to relate the confidence in
learned hypotheses to the amount of training experience and the character of the learner's hypothesis space?
• When and how can prior knowledge held by the learner guide the process of generalizing from examples?
Can prior knowledge be helpful even when it is only approximately correct?
• What is the best strategy for choosing a useful next training experience, and how does the choice of this
strategy alter the complexity of the learning problem?
• What is the best way to reduce the learning task to one or more function approximation problems? Put
another way, what specific functions should the system attempt to learn? Can this process itself be automated?
• How can the learner automatically alter its representation to improve its ability to represent and learn the
target function?
EnjoySport examples
Concept Learning as Search
• The goal of this search is to find the hypothesis that best fits the
training examples.
• By selecting a hypothesis representation, the designer of the learning
algorithm implicitly defines the space of all hypotheses that the
program can ever represent and therefore can ever learn.
• Consider, for example, the instances X and hypotheses H in the
EnjoySport learning task. Viewing learning as a search problem, it is
natural that our study of learning algorithms will examine the different
strategies for searching the hypothesis space.
Concept Learning as Search
• General-to-Specific Ordering of Hypotheses
• To illustrate the general-to-specific ordering, consider the two
hypotheses
• h1 = (Sunny, ?, ?, Strong, ?, ?)
• h2=(Sunny,?,?,?,?,?)
• Now consider the sets of instances that are classified positive by h1
and by h2. Because h2 imposes fewer constraints on the instance, it
classifies more instances as positive. In fact, any instance classified
positive by h1 will also be classified positive by h2. Therefore, we say
that h2 is more general than h1.
• First, for any instance x in X and hypothesis h in H, we say that x
satisfies h if and only if h(x) = 1
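The "satisfies" and more-general-than relations above can be sketched directly on EnjoySport-style attribute tuples (the instance `x` below is illustrative):

```python
# Hypotheses are attribute tuples in which '?' matches any value.

def satisfies(x, h):
    """True iff h(x) = 1, i.e. instance x satisfies hypothesis h."""
    return all(hv == '?' or hv == xv for xv, hv in zip(x, h))

def more_general_or_equal(h2, h1):
    """True iff every instance satisfying h1 also satisfies h2."""
    return all(a == '?' or a == b for a, b in zip(h2, h1))

h1 = ('Sunny', '?', '?', 'Strong', '?', '?')
h2 = ('Sunny', '?', '?', '?', '?', '?')
x = ('Sunny', 'Warm', 'Normal', 'Strong', 'Warm', 'Same')

print(satisfies(x, h1))                 # True
print(more_general_or_equal(h2, h1))    # True: h2 drops the Strong constraint
print(more_general_or_equal(h1, h2))    # False: the ordering is one-way
```

The one-way result in the last line is exactly the partial (general-to-specific) ordering of the hypothesis space.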
Concept Learning as Search
Finding a Maximally Specific Hypothesis
Three main concepts:
• Concept Learning
• General Hypothesis
• Specific Hypothesis
• A hypothesis, h, is a most specific hypothesis if it covers none of the
negative examples and there is no other hypothesis h′ that covers no
negative examples, such that h is strictly more general than h′.
Finding a Maximally Specific Hypothesis
• Find-S algorithm finds the most specific hypothesis that fits all the positive
examples.
• The Find-S algorithm moves from the most specific hypothesis toward the
most general hypothesis, generalizing only as far as the positive examples require.
Important Representation :
• ? indicates that any value is acceptable for the attribute.
• A single required value (e.g., Cold) specifies that the attribute must take exactly that value.
• ϕ indicates that no value is acceptable.
• The most general hypothesis is represented by: {?, ?, ?, ?, ?, ?}
• The most specific hypothesis is represented by : {ϕ, ϕ, ϕ, ϕ, ϕ, ϕ}
Find-S Algorithm
Steps Involved In Find-S :
• Start with the most specific hypothesis.
• h = {ϕ, ϕ, ϕ, ϕ, ϕ, ϕ}
• Take the next example and if it is negative, then no changes occur to the
hypothesis.
• If the example is positive and we find that our initial hypothesis is too
specific then we update our current hypothesis to general condition.
• Keep repeating the above steps till all the training examples are complete.
• After we have completed all the training examples we will have the final
hypothesis which can used to classify the new examples.
• First, we take the hypothesis to be the most specific hypothesis. Hence, our hypothesis
would be :
h = {ϕ, ϕ, ϕ, ϕ, ϕ, ϕ}
• Consider example 1:
• The data in example 1 is { GREEN, HARD, NO, WRINKLED }. Our initial
hypothesis is more specific than this example, so we generalize it. Hence, the
hypothesis becomes:
h = { GREEN, HARD, NO, WRINKLED }
• Consider example 2 :
Here we see that this example has a negative outcome. Hence we neglect this example
and our hypothesis remains the same.
h = { GREEN, HARD, NO, WRINKLED }
• Consider example 3 :
Here we see that this example has a negative outcome. Hence we neglect this
example and our hypothesis remains the same.
h = { GREEN, HARD, NO, WRINKLED }
• Consider example 4:
The data in example 4 is { ORANGE, HARD, NO, WRINKLED }. We compare
every attribute with the current hypothesis, and wherever a mismatch is found we
replace that attribute with the general case ("?"). The hypothesis then becomes:
h = { ?, HARD, NO, WRINKLED }
• Consider example 5:
The data in example 5 is { GREEN, SOFT, YES, SMOOTH }. We compare
every attribute with the current hypothesis, and wherever a mismatch is found we
replace that attribute with the general case ("?"). The hypothesis then becomes:
h = { ?, ?, ?, ? }
Since all the attributes in our hypothesis are now general, examples 6 and 7
result in the same hypothesis with all general attributes.
h = { ?, ?, ?, ? }
• Hence, for the given data the final hypothesis is:
Final Hypothesis: h = { ?, ?, ?, ? }
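The worked example above can be sketched in code. The two negative rows below carry placeholder attribute values (Find-S ignores negatives, so their attributes do not matter), and examples 6 and 7 are omitted because the hypothesis is already fully general after example 5:

```python
# Find-S on the fruit data from the worked example.
# Attribute order: (Color, Surface, Bruised, Pattern); label True = positive.
# The attribute values of the two negative rows are placeholders.
examples = [
    (('GREEN', 'HARD', 'NO', 'WRINKLED'), True),    # example 1
    (('GREEN', 'HARD', 'YES', 'SMOOTH'), False),    # example 2 (negative)
    (('BROWN', 'SOFT', 'NO', 'WRINKLED'), False),   # example 3 (negative)
    (('ORANGE', 'HARD', 'NO', 'WRINKLED'), True),   # example 4
    (('GREEN', 'SOFT', 'YES', 'SMOOTH'), True),     # example 5
]

def find_s(examples, n_attrs=4):
    h = ['ϕ'] * n_attrs              # start with the most specific hypothesis
    for x, positive in examples:
        if not positive:
            continue                 # negative examples are ignored
        for i, v in enumerate(x):
            if h[i] == 'ϕ':
                h[i] = v             # first positive: copy the example
            elif h[i] != v:
                h[i] = '?'           # mismatch: generalize the attribute
    return h

print(find_s(examples))  # ['?', '?', '?', '?']
```

The trace matches the slides: after example 1 the hypothesis is { GREEN, HARD, NO, WRINKLED }, after example 4 it is { ?, HARD, NO, WRINKLED }, and example 5 generalizes it fully.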
Version Space
• A version space is a hierarchical representation of knowledge that enables
you to keep track of all the useful information supplied by a sequence of
learning examples without remembering any of the examples.
• The version space method is a concept learning process accomplished by
managing multiple models within a version space.
• Definition (Version space). A concept is complete if it covers all positive
examples.
• A concept is consistent if it covers none of the negative examples. The
version space is the set of all complete and consistent concepts. This set is
convex and is fully defined by its least and most general elements.
Version Space
One way to represent the version space is simply to list all of its members. This leads to a simple learning
algorithm, which we might call the LIST-THEN-ELIMINATE algorithm.
• The LIST-THEN-ELIMINATE algorithm first initializes the version space to contain all hypotheses in H,
then eliminates any hypothesis found inconsistent with any training example.
• The version space of candidate hypotheses thus shrinks as more examples are observed, until ideally just
one hypothesis remains that is consistent with all the observed examples.
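LIST-THEN-ELIMINATE can be sketched on a toy two-attribute task. The attribute values and training examples below are illustrative, and ϕ hypotheses are omitted for brevity:

```python
from itertools import product

# Toy two-attribute task; each hypothesis attribute is either a specific
# value or '?' (the all-excluding ϕ is omitted for brevity).
domains = [['Sunny', 'Rainy'], ['Warm', 'Cold']]
H = list(product(*[d + ['?'] for d in domains]))  # all 3 x 3 = 9 hypotheses

def predicts(h, x):
    return all(hv in ('?', xv) for hv, xv in zip(h, x))

# Training examples: (instance, label).
data = [(('Sunny', 'Warm'), True), (('Rainy', 'Cold'), False)]

# Keep every hypothesis consistent with every training example.
version_space = [h for h in H
                 if all(predicts(h, x) == y for x, y in data)]
print(version_space)
# [('Sunny', 'Warm'), ('Sunny', '?'), ('?', 'Warm')]
```

Even on this tiny task the version space shrinks from 9 hypotheses to 3; the obvious drawback is that listing all of H is infeasible for realistic hypothesis spaces, which motivates the candidate elimination algorithm below.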
Version Space
Candidate Elimination Learning Algorithm
Origin Manufacturer Color Decade Type Example Type
Japan Honda Blue 1980 Economy Positive
Japan Toyota Green 1970 Sports Negative
Japan Toyota Blue 1990 Economy Positive
USA Chrysler Red 1980 Economy Negative
Japan Honda White 1980 Economy Positive
Problem 1:
Learning the concept of "Japanese Economy Car"
Features: ( Country of Origin, Manufacturer, Color, Decade, Type )
• Solution:
• 1. Positive Example: (Japan, Honda, Blue, 1980, Economy)
• Initialize G to a singleton set that includes everything.
G = { (?, ?, ?, ?, ?) }
• Initialize S to a singleton set that includes the first positive example.
S = { (Japan, Honda, Blue, 1980, Economy) }
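The update loop, continued over all five examples, can be sketched as follows. This is a compact sketch of the standard algorithm rather than the slides' exact notation; for a conjunctive hypothesis space the S boundary stays a single hypothesis, so it is kept as one tuple here:

```python
# Candidate elimination on the car data above.

def covers(h, x):
    return all(a == '?' or a == v for a, v in zip(h, x))

def more_general(h1, h2):
    """True if h1 is at least as general as h2."""
    return all(a == '?' or a == b for a, b in zip(h1, h2))

def candidate_elimination(examples):
    n = len(examples[0][0])
    domains = [sorted({x[i] for x, _ in examples}) for i in range(n)]
    S = None                    # most specific boundary (singleton here)
    G = [tuple(['?'] * n)]      # most general boundary
    for x, positive in examples:
        if positive:
            # minimally generalize S; drop G members that miss the positive
            S = tuple(x) if S is None else tuple(
                s if s == v else '?' for s, v in zip(S, x))
            G = [g for g in G if covers(g, x)]
        else:
            new_G = []
            for g in G:
                if not covers(g, x):
                    new_G.append(g)
                    continue
                # minimally specialize g so it excludes the negative x,
                # keeping only specializations consistent with S
                for i in range(n):
                    if g[i] != '?':
                        continue
                    for v in domains[i]:
                        if v != x[i] and S is not None and v == S[i]:
                            new_G.append(g[:i] + (v,) + g[i + 1:])
            # keep only the maximally general members
            G = [g for g in dict.fromkeys(new_G)
                 if not any(h != g and more_general(h, g) for h in new_G)]
    return S, G

examples = [
    (('Japan', 'Honda', 'Blue', '1980', 'Economy'), True),
    (('Japan', 'Toyota', 'Green', '1970', 'Sports'), False),
    (('Japan', 'Toyota', 'Blue', '1990', 'Economy'), True),
    (('USA', 'Chrysler', 'Red', '1980', 'Economy'), False),
    (('Japan', 'Honda', 'White', '1980', 'Economy'), True),
]
S, G = candidate_elimination(examples)
print(S)  # ('Japan', '?', '?', '?', 'Economy')
print(G)  # [('Japan', '?', '?', '?', 'Economy')]
```

S and G converge to the same hypothesis, (Japan, ?, ?, ?, Economy), i.e. the target concept "Japanese Economy Car".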
Linear Discriminant Analysis
• In 1936, Ronald A. Fisher formulated the linear discriminant for the first
time and showed some practical uses as a classifier. It was described for a
2-class problem, and later generalized as 'Multi-class Linear Discriminant
Analysis' or 'Multiple Discriminant Analysis' by C. R. Rao in 1948.
• Linear Discriminant Analysis is the most commonly used dimensionality
reduction technique in supervised learning. Basically, it is a preprocessing
step for pattern classification and machine learning applications.
• It projects the dataset onto a lower-dimensional space with good
separability between classes, which helps reduce overfitting and
computational cost.
Working of Linear Discriminant Analysis - Assumptions
• Every feature (variable, dimension, or attribute) in the dataset
has a Gaussian distribution, i.e., features have a bell-shaped curve.
• Each feature has the same variance: its values vary around
the mean by the same amount on average.
• Each feature is assumed to be sampled randomly.
• The independent features should lack multicollinearity: as correlation
between independent features increases, predictive power decreases.
LDA achieves this via a three-step process:
• First step: compute the separability between the various classes, i.e.,
the distance between the means of the different classes, also
known as the between-class variance.
• Second step: compute the distance between the mean and the samples
of each class, also known as the within-class variance.
• Third step: construct the lower-dimensional projection P that maximizes
the between-class variance and minimizes the within-class variance;
the ratio maximized by P is known as Fisher's criterion.
• For example, LDA can be used as a classifier for speech
recognition, microarray data classification, face recognition, image
retrieval, bioinformatics, biometrics, chemistry, etc.
• https://people.revoledu.com/kardi/tutorial/LDA/Numerical%20Example.html
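The three steps can be sketched with NumPy on two made-up 2-D classes (the data is illustrative; for two classes the optimal projection reduces to Fisher's discriminant, w proportional to Sw^-1 (m1 - m2)):

```python
import numpy as np

# Two illustrative 2-D classes.
X1 = np.array([[4.0, 2.0], [2.0, 4.0], [2.0, 3.0], [3.0, 6.0], [4.0, 4.0]])
X2 = np.array([[9.0, 10.0], [6.0, 8.0], [9.0, 5.0], [8.0, 7.0], [10.0, 8.0]])

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)

# Step 1: between-class scatter (distance between the class means).
d = (m1 - m2).reshape(-1, 1)
Sb = d @ d.T

# Step 2: within-class scatter (spread of each class around its mean).
Sw = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)

# Step 3: direction maximizing the between/within variance ratio
# (Fisher's criterion); for two classes, w = Sw^-1 (m1 - m2).
w = np.linalg.solve(Sw, m1 - m2)

# Projected onto w, the two classes are well separated.
p1, p2 = X1 @ w, X2 @ w
print(p1.max() < p2.min() or p2.max() < p1.min())  # True
```

In the multi-class case, step 3 instead takes the leading eigenvectors of Sw^-1 Sb, giving up to (number of classes - 1) discriminant directions.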
Perceptron
• The perceptron is a single-layer neural network; a multi-layer
perceptron is called a neural network.
• The perceptron is a linear (binary) classifier, and it is used in
supervised learning.
The perceptron consists of 4 parts.
• Input values or One input layer
• Weights and Bias
• Net sum
• Activation Function
• The perceptron works in these simple steps:
a. All the inputs x are multiplied by their weights w.
b. Add all the multiplied values together; call the result the weighted sum.
c. Apply an activation function to the weighted sum.
Why do we need Weights and Bias?
• Weights show the strength of the particular input.
• A bias value allows you to shift the activation function curve up or down.
Why do we need Activation Function?
• In short, activation functions map the weighted sum into the
required range of values, such as (0, 1) or (-1, 1).
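Steps (a)-(c), together with a simple error-driven update of the weights and bias, can be sketched as follows; the AND training data and the learning rate are illustrative choices:

```python
# Minimal perceptron sketch: weighted sum plus bias, then a step
# activation, trained with the perceptron learning rule.

def predict(x, w, b):
    net = sum(xi * wi for xi, wi in zip(x, w)) + b   # steps (a) and (b)
    return 1 if net >= 0 else 0                      # step (c): step activation

def train(data, n_inputs, lr=1.0, epochs=10):
    w, b = [0.0] * n_inputs, 0.0
    for _ in range(epochs):
        for x, target in data:
            err = target - predict(x, w, b)
            # nudge weights toward the target; bias shifts the boundary
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Logical AND: linearly separable, so the perceptron converges.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(data, n_inputs=2)
print([predict(x, w, b) for x, _ in data])  # [0, 0, 0, 1]
```

Replacing the labels with XOR would never converge, which is exactly the linear separability limitation discussed for the perceptron.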
ML Techniques for Classifying Sports Activities
The Most Attractive Pune Call Girls Budhwar Peth 8250192130 Will You Miss Thi...
 
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(ANJALI) Dange Chowk Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
(PRIYA) Rajgurunagar Call Girls Just Call 7001035870 [ Cash on Delivery ] Pun...
 
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSAPPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
 
Coefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptxCoefficient of Thermal Expansion and their Importance.pptx
Coefficient of Thermal Expansion and their Importance.pptx
 
result management system report for college project
result management system report for college projectresult management system report for college project
result management system report for college project
 
Processing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptxProcessing & Properties of Floor and Wall Tiles.pptx
Processing & Properties of Floor and Wall Tiles.pptx
 
AKTU Computer Networks notes --- Unit 3.pdf
AKTU Computer Networks notes ---  Unit 3.pdfAKTU Computer Networks notes ---  Unit 3.pdf
AKTU Computer Networks notes --- Unit 3.pdf
 
Introduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptxIntroduction and different types of Ethernet.pptx
Introduction and different types of Ethernet.pptx
 
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
Call for Papers - Educational Administration: Theory and Practice, E-ISSN: 21...
 
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
Call Girls Pimpri Chinchwad Call Me 7737669865 Budget Friendly No Advance Boo...
 
Roadmap to Membership of RICS - Pathways and Routes
Roadmap to Membership of RICS - Pathways and RoutesRoadmap to Membership of RICS - Pathways and Routes
Roadmap to Membership of RICS - Pathways and Routes
 

ML Techniques for Classifying Sports Activities

  • 1. Machine Learning Techniques M. Lilly Florence Adhiyamaan College of Engineering Hosur
  • 2. Content
    • Learning
    • Types of Machine Learning
    • Supervised Learning
    • The Brain and the Neuron
    • Design a Learning System
    • Perspectives and Issues in Machine Learning
    • Concept Learning as Task
    • Concept Learning as Search
    • Finding a Maximally Specific Hypothesis
    • Version Spaces and the Candidate Elimination Algorithm
    • Linear Discriminants
    • Perceptron
    • Linear Separability
    • Linear Regression
  • 3. Learning
    • The term machine learning is said to have been coined by Arthur Lee Samuel, a pioneer of the AI field, in 1959.
    • “Machine learning is the field of study that gives computers the ability to learn without being explicitly programmed.” — Arthur L. Samuel, AI pioneer, 1959
    • “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.” — Tom Mitchell, Machine Learning Professor at Carnegie Mellon University
    • To illustrate this definition with an example, consider the problem of recognizing handwritten digits:
    • Task T: classifying handwritten digits from images
    • Performance measure P: percentage of digits classified correctly
    • Training experience E: a dataset of digit images with given classifications (labels)
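Mitchell's (T, P, E) framing can be sketched in a few lines of code. The classifier outputs and labels below are toy stand-ins: any model that maps an image to a digit label fits the definition, and the `performance_P` helper is a hypothetical name for the measure described above.

```python
def performance_P(predictions, true_labels):
    """Performance measure P: percentage of digits classified correctly."""
    correct = sum(p == t for p, t in zip(predictions, true_labels))
    return 100.0 * correct / len(true_labels)

# Experience E: a toy labelled dataset; Task T: predict each digit's label.
true_labels = [3, 1, 4, 1, 5]
predictions = [3, 1, 4, 0, 5]   # output of some hypothetical classifier

print(performance_P(predictions, true_labels))  # 80.0
```

As experience E grows (more labelled digits), a learner improves at T exactly when this number P goes up on unseen data.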
  • 4. Why “Learn”?
    • Machine learning is programming computers to optimize a performance criterion using example data or past experience.
    • There is no need to “learn” to calculate payroll.
    • Learning is used when:
    • Human expertise does not exist (navigating on Mars)
    • Humans are unable to explain their expertise (speech recognition)
    • The solution changes over time (routing on a computer network)
    • The solution needs to be adapted to particular cases (user biometrics)
  • 5. Basic components of the learning process
    • Four components, namely: data storage, abstraction, generalization and evaluation.
    • 1. Data storage - Facilities for storing and retrieving huge amounts of data are an important component of the learning process.
    • 2. Abstraction - Abstraction is the process of extracting knowledge about stored data. This involves creating general concepts about the data as a whole. The creation of knowledge involves the application of known models and the creation of new models. The process of fitting a model to a dataset is known as training. Once the model has been trained, the data is transformed into an abstract form that summarizes the original information.
    • 3. Generalization - The term generalization describes the process of turning the knowledge about stored data into a form that can be utilized for future action.
    • 4. Evaluation - The process of giving feedback to the user in order to measure the utility of what has been learned.
  • 6. Learning Model
    • Learning models are divided into three categories:
    • Using a logical expression (logical models)
    • Using the geometry of the instance space (geometric models)
    • Using probability to classify the instance space (probabilistic models)
  • 7. Applications of Machine Learning
    • Email spam detection
    • Face detection and matching (e.g., iPhone X)
    • Web search (e.g., DuckDuckGo, Bing, Google)
    • Sports predictions
    • Post office (e.g., sorting letters by zip codes)
    • ATMs (e.g., reading checks)
    • Credit card fraud
    • Stock predictions
    • Smart assistants (Apple Siri, Amazon Alexa, …)
    • Product recommendations (e.g., Netflix, Amazon)
    • Self-driving cars (e.g., Uber, Tesla)
    • Language translation (Google Translate)
    • Sentiment analysis
    • Drug design
    • Medical diagnosis
  • 8. Types of Machine Learning
    • The broad categories of machine learning, summarized in the following figure, are:
    • Supervised learning
    • Unsupervised learning
    • Reinforcement learning
    • Evolutionary learning
  • 9. Types of Machine Learning
  • 10. Supervised learning • Supervised learning is the subcategory of machine learning that focuses on learning a classification or regression model, that is, learning from labeled training data. • Classification • Regression
  • 11.
  • 12. The Brain and the Neuron
    • Brain
    • Nerve cell: the neuron
    • Each neuron is typically connected to thousands of other neurons, so it is estimated that there are about 100 trillion (= 10^14) synapses within the brain. After firing, the neuron must wait for some time to recover its energy (the refractory period) before it can fire again.
    • Hebb’s Rule: the changes in the strength of synaptic connections are proportional to the correlation in the firing of the two connecting neurons. So if two neurons consistently fire simultaneously, then any connection between them will change in strength, becoming stronger.
    • There are other names for this idea that synaptic connections between neurons and assemblies of neurons can be formed when they fire together and can become stronger. It is also known as long-term potentiation and neural plasticity, and it does appear to have correlates in real brains.
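Hebb's rule can be sketched as a one-line weight update: the change in the connection weight is proportional to the product of the two neurons' activations. The learning rate `eta` and the activation values here are illustrative assumptions, not values from the slides.

```python
def hebb_update(w, x_pre, x_post, eta=0.1):
    """Hebb's rule: the weight change is proportional to the
    correlation of the two neurons' firing (their activation product)."""
    return w + eta * x_pre * x_post

w = 0.5
# The two neurons fire together three times -> the connection strengthens.
for _ in range(3):
    w = hebb_update(w, x_pre=1.0, x_post=1.0)
print(round(w, 2))  # 0.8
```

If either neuron is silent (activation 0), the product is 0 and the weight is unchanged, which matches the "fire together, wire together" intuition.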
  • 13. The Brain and the Neuron
    • McCulloch and Pitts neurons
    • Studying real neurons isn’t easy: you have to extract a neuron from the brain and then keep it alive so that you can see how it reacts in controlled circumstances.
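A McCulloch-Pitts neuron is simple enough to sketch directly: it sums its weighted inputs and fires (outputs 1) only if the sum reaches a threshold. The weights and threshold below are fixed by hand, not learned; the values are chosen so the unit computes logical AND.

```python
def mcculloch_pitts(inputs, weights, threshold):
    """A McCulloch-Pitts unit: weighted sum followed by a hard threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With unit weights and threshold 2, this neuron computes logical AND.
print(mcculloch_pitts([1, 1], [1, 1], threshold=2))  # 1
print(mcculloch_pitts([1, 0], [1, 1], threshold=2))  # 0
```

Lowering the threshold to 1 would turn the same unit into logical OR, which is the historical appeal of the model: small networks of such units can compute Boolean functions.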
  • 14. Designing a Learning System
    • The design choices have the following key components:
    • 1. Type of training experience: direct/indirect, supervised/unsupervised
    • 2. Choosing the Target Function
    • 3. Choosing a representation for the Target Function
    • 4. Choosing a function approximation algorithm for the Target Function
    • 5. The final design
  • 15. Designing a Learning System Real-world examples of machine learning problems include “Is this cancer?”, “What is the market value of this house?”, “Which of these people are good friends with each other?”, “Will this rocket engine explode on take off?”, “Will this person like this movie?”, “Who is this?”, “What did you say?”, and “How do you fly this thing?” All of these problems are excellent targets for an ML project; in fact ML has been applied to each of them with great success.
  • 16.
  • 17. PERSPECTIVES AND ISSUES IN MACHINE LEARNING
    Issues in Machine Learning:
    • What algorithms exist for learning general target functions from specific training examples? In what settings will particular algorithms converge to the desired function, given sufficient training data? Which algorithms perform best for which types of problems and representations?
    • How much training data is sufficient? What general bounds can be found to relate the confidence in learned hypotheses to the amount of training experience and the character of the learner's hypothesis space?
    • When and how can prior knowledge held by the learner guide the process of generalizing from examples? Can prior knowledge be helpful even when it is only approximately correct?
    • What is the best strategy for choosing a useful next training experience, and how does the choice of this strategy alter the complexity of the learning problem?
    • What is the best way to reduce the learning task to one or more function approximation problems? Put another way, what specific functions should the system attempt to learn? Can this process itself be automated?
    • How can the learner automatically alter its representation to improve its ability to represent and learn the target function?
  • 19. Concept Learning as Search
    • The goal of this search is to find the hypothesis that best fits the training examples.
    • By selecting a hypothesis representation, the designer of the learning algorithm implicitly defines the space of all hypotheses that the program can ever represent and therefore can ever learn.
    • Consider, for example, the instances X and hypotheses H in the EnjoySport learning task. Viewing learning as a search problem, it is natural that our study of learning algorithms will examine the different strategies for searching the hypothesis space.
  • 20. Concept Learning as Search
    • General-to-specific ordering of hypotheses
    • To illustrate the general-to-specific ordering, consider the two hypotheses:
    • h1 = (Sunny, ?, ?, Strong, ?, ?)
    • h2 = (Sunny, ?, ?, ?, ?, ?)
    • Now consider the sets of instances that are classified positive by h1 and by h2. Because h2 imposes fewer constraints on the instance, it classifies more instances as positive. In fact, any instance classified positive by h1 will also be classified positive by h2. Therefore, we say that h2 is more general than h1.
    • First, for any instance x in X and hypothesis h in H, we say that x satisfies h if and only if h(x) = 1.
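For conjunctive hypotheses like these, the "more general than or equal to" relation can be sketched as a per-attribute check: since '?' matches any value, h2 = (Sunny, ?, ?, ?, ?, ?) is more general than h1 = (Sunny, ?, ?, Strong, ?, ?). The function name below is an illustrative choice.

```python
def more_general_or_equal(hg, hs):
    """True if hypothesis hg is at least as general as hs: every
    constraint in hg is either '?' or agrees with the one in hs."""
    return all(g == '?' or g == s for g, s in zip(hg, hs))

h1 = ('Sunny', '?', '?', 'Strong', '?', '?')
h2 = ('Sunny', '?', '?', '?', '?', '?')
print(more_general_or_equal(h2, h1))  # True:  h2 accepts everything h1 does
print(more_general_or_equal(h1, h2))  # False: h1 adds the Strong constraint
```

This relation is a partial order on the hypothesis space, which is what lets Find-S and candidate elimination search it from specific to general.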
  • 21. Concept Learning as Search
  • 22. Finding a Maximally Specific Hypothesis
    Three main concepts:
    • Concept learning
    • General hypothesis
    • Specific hypothesis
    • A hypothesis h is a most specific hypothesis if it covers none of the negative examples and there is no other hypothesis h′ that covers no negative examples such that h is strictly more general than h′.
  • 23. Finding a Maximally Specific Hypothesis
    • The Find-S algorithm finds the most specific hypothesis that fits all the positive examples.
    • The Find-S algorithm moves from the most specific hypothesis towards the most general hypothesis.
    Important representation:
    • ? indicates that any value is acceptable for the attribute.
    • A single required value (e.g., Cold) specifies that exact value for the attribute.
    • ϕ indicates that no value is acceptable.
    • The most general hypothesis is represented by {?, ?, ?, ?, ?, ?}.
    • The most specific hypothesis is represented by {ϕ, ϕ, ϕ, ϕ, ϕ, ϕ}.
  • 24. Find-S Algorithm
    Steps involved in Find-S:
    • Start with the most specific hypothesis: h = {ϕ, ϕ, ϕ, ϕ, ϕ, ϕ}
    • Take the next example; if it is negative, no changes occur to the hypothesis.
    • If the example is positive and we find that our current hypothesis is too specific, then we update the hypothesis to a more general condition.
    • Keep repeating the above steps until all the training examples have been processed.
    • After we have processed all the training examples, we have the final hypothesis, which can be used to classify new examples.
  • 25.
  • 26.
    • First we take the hypothesis to be the most specific hypothesis: h = {ϕ, ϕ, ϕ, ϕ, ϕ, ϕ}
    • Consider example 1: the data is { GREEN, HARD, NO, WRINKLED }. Our initial hypothesis is more specific, so we generalize it for this example. The hypothesis becomes: h = { GREEN, HARD, NO, WRINKLED }
    • Consider example 2: this example has a negative outcome, so we ignore it and the hypothesis remains the same: h = { GREEN, HARD, NO, WRINKLED }
  • 27.
    • Consider example 3: this example has a negative outcome, so we ignore it and the hypothesis remains the same: h = { GREEN, HARD, NO, WRINKLED }
    • Consider example 4: the data is { ORANGE, HARD, NO, WRINKLED }. We compare every attribute with the current hypothesis, and wherever there is a mismatch we replace that attribute with the general case (“?”). The hypothesis becomes: h = { ?, HARD, NO, WRINKLED }
    • Consider example 5: the data is { GREEN, SOFT, YES, SMOOTH }. Again we compare every attribute with the current hypothesis and replace any mismatched attribute with “?”. The hypothesis becomes: h = { ?, ?, ?, ? }. Since all the attributes in the hypothesis now have the general condition, examples 6 and 7 leave the hypothesis unchanged: h = { ?, ?, ?, ? }
    • Hence, for the given data the final hypothesis is: h = { ?, ?, ?, ? }
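The trace above can be reproduced by a short Find-S implementation. The two negative rows below use made-up attribute values, since the slides do not list them; that is harmless here because Find-S ignores negative examples entirely.

```python
def find_s(examples):
    """Find-S: generalize the most specific hypothesis just enough
    to cover each positive example; skip negatives."""
    h = None                                # most specific: (phi, phi, phi, phi)
    for attrs, label in examples:
        if label != 'positive':
            continue                        # negatives leave h unchanged
        if h is None:
            h = list(attrs)                 # first positive: copy it verbatim
        else:                               # keep matching values, else '?'
            h = [hi if hi == a else '?' for hi, a in zip(h, attrs)]
    return h

examples = [
    (('GREEN',  'HARD', 'NO',  'WRINKLED'), 'positive'),   # example 1
    (('RED',    'SOFT', 'NO',  'SMOOTH'),   'negative'),   # values assumed
    (('RED',    'HARD', 'YES', 'SMOOTH'),   'negative'),   # values assumed
    (('ORANGE', 'HARD', 'NO',  'WRINKLED'), 'positive'),   # example 4
    (('GREEN',  'SOFT', 'YES', 'SMOOTH'),   'positive'),   # example 5
]
print(find_s(examples))  # ['?', '?', '?', '?']
```

The intermediate hypotheses match the trace: { GREEN, HARD, NO, WRINKLED } after example 1, { ?, HARD, NO, WRINKLED } after example 4, and all-general after example 5.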
  • 28. Version Space • A version space is a hierarchical representation of knowledge that enables you to keep track of all the useful information supplied by a sequence of learning examples without remembering any of the examples. • The version space method is a concept learning process accomplished by managing multiple models within a version space. • Definition (Version space). A concept is complete if it covers all positive examples. • A concept is consistent if it covers none of the negative examples. The version space is the set of all complete and consistent concepts. This set is convex and is fully defined by its least and most general elements.
  • 29. Version Space
    • To represent the version space, we can simply list all of its members. This leads to a simple learning algorithm, which we might call the LIST-THEN-ELIMINATE algorithm.
    • The LIST-THEN-ELIMINATE algorithm first initializes the version space to contain all hypotheses in H, then eliminates any hypothesis found inconsistent with any training example.
    • The version space of candidate hypotheses thus shrinks as more examples are observed, until ideally just one hypothesis remains that is consistent with all the observed examples.
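LIST-THEN-ELIMINATE can be sketched on a deliberately tiny, made-up task so that H is small enough to enumerate: two attributes, Sky in {Sunny, Rainy} and Wind in {Strong, Weak}, where each hypothesis slot is either a specific value or '?'.

```python
from itertools import product

def matches(h, x):
    """A hypothesis covers an instance if every slot is '?' or equal."""
    return all(c == '?' or c == v for c, v in zip(h, x))

# Enumerate the whole hypothesis space H (9 hypotheses).
H = list(product(['Sunny', 'Rainy', '?'], ['Strong', 'Weak', '?']))

# Training examples: (instance, label) with 1 = positive, 0 = negative.
examples = [(('Sunny', 'Strong'), 1), (('Rainy', 'Strong'), 0)]

# Eliminate every hypothesis inconsistent with some training example.
version_space = [h for h in H
                 if all(matches(h, x) == y for x, y in examples)]
print(version_space)  # [('Sunny', 'Strong'), ('Sunny', '?')]
```

After the two examples, only hypotheses requiring Sunny survive: the algorithm is conceptually clean but impractical whenever H is too large to list, which motivates candidate elimination.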
  • 32.
  • 33. Problem 1: Learning the concept of "Japanese Economy Car"
    Features: (Country of Origin, Manufacturer, Color, Decade, Type)

    Origin | Manufacturer | Color | Decade | Type    | Example Type
    Japan  | Honda        | Blue  | 1980   | Economy | Positive
    Japan  | Toyota       | Green | 1970   | Sports  | Negative
    Japan  | Toyota       | Blue  | 1990   | Economy | Positive
    USA    | Chrysler     | Red   | 1980   | Economy | Negative
    Japan  | Honda        | White | 1980   | Economy | Positive
  • 34. Solution:
    • 1. Positive example: (Japan, Honda, Blue, 1980, Economy)
    • Initialize G to a singleton set that includes everything: G = { (?, ?, ?, ?, ?) }
    • Initialize S to a singleton set that includes the first positive example: S = { (Japan, Honda, Blue, 1980, Economy) }
  • 35. Linear Discriminant Analysis
    • In 1936, Ronald A. Fisher formulated the linear discriminant for the first time and showed some practical uses as a classifier. It was described for a 2-class problem, and later generalized as ‘Multi-class Linear Discriminant Analysis’ or ‘Multiple Discriminant Analysis’ by C. R. Rao in 1948.
    • Linear Discriminant Analysis is the most commonly used dimensionality reduction technique in supervised learning. Basically, it is a preprocessing step for pattern classification and machine learning applications.
    • It projects the dataset onto a lower-dimensional space with good class separability, which helps reduce overfitting and computational costs.
  • 36.
  • 37. Working of Linear Discriminant Analysis: Assumptions
    • Every feature (variable, dimension, or attribute) in the dataset has a Gaussian distribution, i.e., features have a bell-shaped curve.
    • Each feature holds the same variance, with values varying around the mean by the same amount on average.
    • Each feature is assumed to be sampled randomly.
    • There should be no multicollinearity among the independent features: as the correlation between independent features increases, the predictive power decreases.
  • 38. LDA achieves this via a three-step process:
    • First step: compute the separability between the various classes, i.e., the distance between the means of different classes; this is also known as the between-class variance.
  • 39. Second step: compute the distance between the mean and the samples of each class; this is also known as the within-class variance.
  • 40. Third step: construct the lower-dimensional space that maximizes the between-class variance and minimizes the within-class variance.
    • Taking P as the lower-dimensional-space projection, this objective is known as Fisher’s criterion.
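The three steps can be sketched numerically with NumPy on a tiny two-class, 2-D dataset invented for illustration (the sample points are not from the slides). For two classes, the direction that maximizes Fisher's criterion is proportional to S_W^{-1} (m1 - m2).

```python
import numpy as np

X1 = np.array([[4.0, 2.0], [2.0, 4.0], [2.0, 3.0]])   # class 1 samples
X2 = np.array([[9.0, 10.0], [6.0, 8.0], [9.0, 5.0]])  # class 2 samples
m1, m2 = X1.mean(axis=0), X2.mean(axis=0)

# Step 1: between-class scatter S_B = (m1 - m2)(m1 - m2)^T
d = (m1 - m2).reshape(-1, 1)
S_B = d @ d.T

# Step 2: within-class scatter S_W = sum of the per-class scatters
S_W = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)

# Step 3: the projection w maximizing (w^T S_B w) / (w^T S_W w);
# for two classes, w is proportional to S_W^{-1} (m1 - m2).
w = np.linalg.solve(S_W, m1 - m2)

# Projecting onto w separates the two classes along a single axis.
print(X1 @ w)
print(X2 @ w)
```

On this toy data the projected class-1 values all lie above the projected class-2 values, which is exactly what maximizing between-class and minimizing within-class variance buys.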
  • 41. • For example, LDA can be used as a classification task for speech recognition, microarray data classification, face recognition, image retrieval, bioinformatics, biometrics, chemistry, etc. • https://people.revoledu.com/kardi/tutorial/LDA/Numerical%20Exam ple.html
  • 42. Perceptron
    • A perceptron is a single-layer neural network; a multi-layer perceptron is called a neural network.
    • The perceptron is a linear (binary) classifier. It is used in supervised learning.
  • 43. The perceptron consists of 4 parts:
    • Input values, or one input layer
    • Weights and bias
    • Net sum
    • Activation function
  • 44.
  • 45. The perceptron works in these simple steps: a. All the inputs x are multiplied by their weights w. Let’s call the products k.
  • 46. b. Add all the multiplied values and call the result the weighted sum.
  • 47. c. Apply that weighted sum to the correct activation function.
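The three steps above can be sketched as code: multiply, sum (plus bias), then apply a unit-step activation. The weight and bias values are illustrative, chosen so the unit computes logical AND.

```python
def step(z):
    """Unit-step activation function: fire (1) when z is non-negative."""
    return 1 if z >= 0 else 0

def perceptron(x, w, b):
    k = [xi * wi for xi, wi in zip(x, w)]   # step a: multiply inputs by weights
    weighted_sum = sum(k) + b               # step b: add them up, plus the bias
    return step(weighted_sum)               # step c: apply the activation

# With w = (1, 1) and b = -1.5, the perceptron computes logical AND.
print(perceptron([1, 1], [1, 1], b=-1.5))  # 1
print(perceptron([1, 0], [1, 1], b=-1.5))  # 0
```

Changing the bias shifts the decision boundary (the point where the weighted sum crosses zero), which is exactly the role described on the next slide.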
  • 48. Why do we need weights and bias?
    • Weights show the strength of the particular node.
    • A bias value allows you to shift the activation function curve up or down.
  • 49. Why do we need an activation function?
    • In short, activation functions are used to map the input to the required range of values, such as (0, 1) or (-1, 1).