INTRODUCTION TO MACHINE
LEARNING & ITS APPLICATIONS
Dr. R. Sudarmani,
Professor/ECE,
School of Engineering,
Avinashilingam Institute for Home
Science and Higher Education for
Women, Coimbatore.
 To understand the fundamentals of machine learning terms and algorithms
 To apply these techniques to develop solutions for problems related to
engineering applications
What is Machine Learning?
 “A computer program is said to learn from experience E with respect to
some class of tasks T and performance measure P, if its performance at
tasks in T, as measured by P, improves with experience E.”
Tom Mitchell, 1997
 “Field of study that gives computers the ability to learn without being
explicitly programmed.”
Arthur Samuel, 1959
ML Paradigms
• Supervised Learning
• Learn an input and output map
• Classification: categorical output
• Regression: continuous output
• Unsupervised Learning
• Discover patterns in the data
• Clustering: cohesive grouping
• Association: frequent co-occurrence
• Reinforcement Learning
• Learning control
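A minimal sketch of the supervised and unsupervised paradigms listed above, assuming scikit-learn is available; the toy arrays are invented for illustration (reinforcement learning is omitted because it needs an interactive environment rather than a fixed dataset).

```python
# Supervised (classification, regression) vs. unsupervised (clustering)
# on tiny invented data.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.cluster import KMeans

X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])
y_class = np.array([0, 0, 0, 1, 1, 1])            # categorical output -> classification
y_reg = np.array([1.1, 1.9, 3.2, 3.9, 5.1, 6.2])  # continuous output -> regression

clf = LogisticRegression().fit(X, y_class)        # supervised: input -> class label
reg = LinearRegression().fit(X, y_reg)            # supervised: input -> real value
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)  # unsupervised: grouping only

print(clf.predict([[2.5]]), reg.predict([[2.5]]), km.labels_)
```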
Challenges
• How good is a model?
• How do I choose a model?
• Do I have enough data?
• Is the data of sufficient quality?
• Errors in data, e.g., Age = 225; noise in low-resolution images (see the data-quality sketch after this list)
• Missing values
• How confident can I be of the results?
• Am I describing the data correctly?
• Are age and income enough? Should I include gender also?
• How should I represent age? As a number, or as young, middle-aged, or old?
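A minimal sketch of such data-quality checks, assuming pandas; the column names, thresholds, and age bins are illustrative assumptions, not part of the original slides.

```python
# Spot impossible values (e.g. Age = 225), count missing values, and show
# two possible representations of age: numeric vs. binned categories.
import pandas as pd

df = pd.DataFrame({"age": [25, 47, 225, None, 63],
                   "income": [30000, 52000, 41000, 38000, None]})

print(df[(df["age"] < 0) | (df["age"] > 120)])   # rows with impossible ages
print(df.isna().sum())                           # missing values per column

df["age_group"] = pd.cut(df["age"], bins=[0, 35, 60, 120],
                         labels=["young", "middle-aged", "old"])
print(df)
```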
Traditional Programming vs Machine
Learning
Hyperparameter tuning – number of layers, maximum depth allowed for a
decision tree, learning rate, etc.
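A minimal sketch of hyperparameter tuning via grid search over a decision tree's maximum depth, assuming scikit-learn; the dataset and candidate values are assumptions for illustration.

```python
# Try several values of max_depth and keep the one with the best
# cross-validated score.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
grid = GridSearchCV(DecisionTreeClassifier(random_state=0),
                    param_grid={"max_depth": [2, 3, 4, 5, None]},
                    cv=5)                       # 5-fold cross-validation per setting
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```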
ML in Real Life Scenario
CHALLENGES IN SUPERVISED
LEARNING
• Inaccurate results due to irrelevant input features.
• Can suffer from low accuracy.
• The approach becomes brute force when domain experts are not available.
Merits & Demerits of Supervised Learning
Advantages
• Produces output based on previous experience.
• Helps to optimize performance.
• Solves various types of real-world computational problems.
Disadvantages
• The decision boundary may be overtrained (overfitted).
• Many labelled examples are needed to train the classifier.
• Handling big data is a real challenge.
• High computational time.
Dimensionality Reduction
Why Unsupervised Learning?
• Finds unknown patterns in data.
• Finds features useful for categorization.
• Learning can take place in real time.
• Unlabelled data are easy to obtain from a computer, unlike labelled data.
Disadvantages of Unsupervised
Learning
• No precise information regarding how the data are sorted.
• Less accurate results.
• Clusters do not necessarily correspond to informational classes.
• More time-consuming.
• Spectral properties can change over time.
Overview of Algorithms in Supervised
and Unsupervised Learning
Few Machine Learning
Algorithms
NAIVE BAYES CLASSIFIER
K MEANS CLUSTERING
ALGORITHM
SUPPORT VECTOR MACHINE
LINEAR REGRESSION
RANDOM FOREST
• Random Forest is an ensemble learning
method that combines multiple decision
trees to improve accuracy and prevent
overfitting, commonly used for
classification and regression tasks.
• How it works:
• Ensemble of Decision Trees: Creates multiple
decision trees during training; each tree is
trained on a random subset of the data and
features.
• Voting/Averaging: For classification, each
tree votes on the class, and the majority vote
is chosen. For regression, predictions are
averaged across trees.
• Key Concepts:
• Bootstrap Aggregation (Bagging): Randomly
samples data with replacement to create
diverse training subsets for each tree.
• Feature Randomness: Randomly selects a
subset of features for each split in a tree,
reducing correlation between trees and
improving generalization.
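A minimal sketch of the above, assuming scikit-learn's RandomForestClassifier; the dataset and parameter values are assumptions for illustration.

```python
# Bagged trees with feature randomness at each split; the final class is
# the majority vote across trees.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# n_estimators = number of bootstrapped trees; max_features controls
# the feature randomness used at each split.
rf = RandomForestClassifier(n_estimators=100, max_features="sqrt", random_state=0)
rf.fit(X_tr, y_tr)                      # each tree sees a bootstrap sample
print(rf.score(X_te, y_te))             # accuracy of the ensemble vote
```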
LOGISTIC REGRESSION
ARTIFICIAL NEURAL NETWORK
• Artificial Neural Networks (ANN) are
computational models inspired by the
human brain, composed of interconnected
layers of nodes (neurons) used for complex
tasks in classification, regression, and more.
• Structure:
• Layers: Typically organized into three types:
• Input Layer: Receives the features from the
dataset.
• Hidden Layers: Layers between input and output
layers; perform computations on inputs using
activation functions.
• Output Layer: Provides the final prediction or
classification.
• Neurons and Weights: Each neuron receives
weighted inputs, sums them, and applies an
activation function.
• Key Concepts:
• Activation Functions: Non-linear functions
(e.g., ReLU, Sigmoid, Tanh) that introduce
non-linearity, allowing networks to learn
complex patterns.
• Forward and Backpropagation: Forward pass
calculates the output, and backpropagation
adjusts weights to minimize the error using
gradient descent.
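A minimal sketch of a small feed-forward network matching the structure above (input layer, one hidden layer with a non-linear activation, output layer, trained by backpropagation), assuming scikit-learn's MLPClassifier; the dataset and layer size are assumptions.

```python
# One hidden layer of 32 neurons with ReLU activation, trained by
# forward passes plus backpropagation (gradient-based weight updates).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

ann = MLPClassifier(hidden_layer_sizes=(32,),   # single hidden layer
                    activation="relu",          # non-linear activation function
                    max_iter=500, random_state=0)
ann.fit(X_tr, y_tr)
print(ann.score(X_te, y_te))
```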
K NEAREST NEIGHBOR
• K-Nearest Neighbors is a simple, instance-
based learning algorithm used for classification
and regression, which makes predictions based
on the closest data points in the feature space.
• How It Works:
• Distance Calculation: Computes the distance between
the query point and all points in the training set
(commonly using Euclidean distance).
• K Neighbors: Identifies the K closest data points (neighbors) to the query point.
• Voting/Averaging:
• Classification: Assigns the class most common among the K neighbors.
• Regression: Predicts the average of the values of the K neighbors.
• Key Concepts:
• Choice of K: A small K may capture noise, while a large K can smooth out
details, so K is typically optimized using cross-validation.
• Distance Metrics: Common choices include Euclidean,
Manhattan, and Minkowski distances.
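A minimal sketch of choosing K by cross-validation, as suggested above, assuming scikit-learn; the dataset and candidate K values are assumptions.

```python
# Score KNN for several values of K and keep the best one.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
for k in [1, 3, 5, 7, 9]:
    knn = KNeighborsClassifier(n_neighbors=k, metric="euclidean")
    score = cross_val_score(knn, X, y, cv=5).mean()   # mean accuracy over 5 folds
    print(k, round(score, 3))
# Pick the K with the best cross-validated score, then refit on all data.
```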
Machine Learning - Reinforcement
Learning
FEW MORE APPLICATIONS…
Upcoming Applications of Machine
Learning
• AI chips – playing high-end games
• IoT and AI – capturing data from cars using sensors and using the
collected data to decide on an insurance amount
• Automated ML – applying ML to ML, automating repeated tasks
• Personalized medicine
• ML-based assistants like Alexa
• ML-based industrial equipment and machines – sensors are fitted to these
machines and the collected data are fed to ML models, giving better
performance and more efficient servicing schedules
• Surveillance
• Social credit systems
Intelligence - Traditional vs
ML.
How would you write a spam
filter?
Intelligence - Spam Filter - Traditional
Approach
Intelligence - Spam Filter - Traditional
Approach
Problems?
●Problem is not trivial
○ Program will likely become a long list of complex rules
○ Pretty hard to maintain
●If spammers notice that
○ All their emails containing “4U” are blocked
○ They might start writing “For U” instead
○ If spammers keep working around the spam filter, we will need to
keep writing new rules forever
Intelligence - Spam Filter - ML
Approach
Intelligence - Spam Filter - ML
Approach
● A spam filter based on Machine Learning techniques
automatically learns
○ Which words and phrases are good predictors of spam
○ By detecting unusually frequent patterns of words
● The program will be
○ Much shorter
○ Easier to maintain
○ Most likely more accurate than traditional approach
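A minimal sketch of this idea with a bag-of-words model and a Naive Bayes classifier, assuming scikit-learn; the toy emails and labels are invented for illustration.

```python
# Learn which words predict spam from labelled examples, instead of
# hand-writing rules.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = ["win money now 4U", "cheap pills 4U offer",
          "meeting agenda for tomorrow", "project report attached"]
labels = [1, 1, 0, 0]                       # 1 = spam, 0 = ham

spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(emails, labels)             # learns word -> spam associations
print(spam_filter.predict(["free offer 4U", "agenda for the project meeting"]))
```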
Intelligence - Spam Filter - ML
Approach
● Unlike the traditional approach, ML techniques automatically notice
that
○ “For U” has become unusually frequent in spam flagged by users, and
○ The filter starts flagging such emails without our intervention
Intelligence - Spam Filter - ML
Approach
Can help humans learn
●ML algorithms can be inspected to see what they have learned
●Spam filter after enough training
○ Reveals combinations of words that it believes are best
predictors of spam
○ May reveal unsuspected correlations or new trends, and
○ Lead to a better understanding of the problem for humans
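A minimal sketch of such inspection, assuming scikit-learn's MultinomialNB; the toy emails are invented, and feature_log_prob_ holds the per-class word log-probabilities, which can be compared to find the words the filter treats as most indicative of spam.

```python
# Inspect a trained spam filter: rank words by how much more likely they
# are under the spam class than under the ham class.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = ["win money now 4U", "cheap pills 4U offer",
          "meeting agenda for tomorrow", "project report attached"]
labels = [1, 1, 0, 0]                        # 1 = spam, 0 = ham

vec = CountVectorizer()
X = vec.fit_transform(emails)
nb = MultinomialNB().fit(X, labels)

words = vec.get_feature_names_out()
spam_minus_ham = nb.feature_log_prob_[1] - nb.feature_log_prob_[0]
top = np.argsort(spam_minus_ham)[::-1][:5]
print([words[i] for i in top])               # words most indicative of spam
```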
Intelligence - Spam Filter - ML
Approach
Can help humans learn
Machine Learning
Weather Forecasting
Self-driving cars on the roads
Netflix movie recommendations
Amazon product
recommendations
Accurate results in Google Search
Machine Learning In Election Polls
Fraud detection using machine
learning
Traffic management using machine
learning
Overview Of Machine Learning &
Applications
Machine Learning Libraries
THANK YOU
