1. LEARNING FROM EXAMPLES
Learning from examples is a method in AI where the machine learns by studying example data
instead of being programmed manually.
Key Idea
The AI system observes many example inputs and their correct outputs, then learns a pattern
that connects them.
Simple Example
If you show a machine:
Picture Label
Cat image Cat
Dog image Dog
The system learns what features define a cat or dog.
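The idea above can be sketched as a toy "learner": it memorizes labeled examples and labels a new input by the closest stored example (1-nearest-neighbour). The feature vectors here are made-up stand-ins for real image features, purely for illustration.

```python
# Toy "learning from examples": store labeled feature vectors,
# then label a new input by its closest stored example (1-nearest-neighbour).

def distance(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(examples, features):
    # Pick the label of the stored example closest to the new input.
    return min(examples, key=lambda ex: distance(ex[0], features))[1]

# Made-up features: (ear pointiness, snout length) — illustrative only.
examples = [
    ((0.9, 0.2), "Cat"),
    ((0.8, 0.3), "Cat"),
    ((0.3, 0.8), "Dog"),
    ((0.2, 0.9), "Dog"),
]

print(predict(examples, (0.85, 0.25)))  # a cat-like input -> Cat
print(predict(examples, (0.25, 0.85)))  # a dog-like input -> Dog
```

The "learning" here is trivial (memorization), but it captures the core idea: the pattern connecting inputs to outputs comes from the examples, not from hand-written rules.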
3. WHERE IS IT USED?
Face recognition (learns from example faces)
Spam email detection (learns from example emails)
Voice assistants (learns from example voice commands)
4. SUPERVISED LEARNING
Supervised learning is one of the most important types of machine learning.
It means the machine learns from labeled examples — exactly like a student learning
with a teacher’s help.
What Does “Supervised” Mean?
“Supervised” means that the model is trained using input data along with the correct
answers (labels).
A human gives the machine:
• Example input
• Correct output
So the machine learns what the relationship is between them.
5. STEPS OF SUPERVISED LEARNING
Step 1: Collect Labeled Data
Data must have both:
Features (input)
Labels (output)
Example dataset:
Hours Studied Marks Scored
2 50
5 80
7 90
Step 2: Split Data
Two parts:
Training Set (70–80%) → used to teach the model
Testing Set (20–30%) → used to check accuracy
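A minimal sketch of this split, coded by hand (the 80/20 ratio follows the convention above; the dataset and the fixed shuffle seed are illustrative assumptions):

```python
import random

def train_test_split(data, train_fraction=0.8, seed=42):
    # Shuffle a copy so the split is not biased by the original order,
    # then cut it into a training part and a testing part.
    rows = data[:]
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * train_fraction)
    return rows[:cut], rows[cut:]

# Illustrative (hours studied, marks scored) pairs.
data = [(1, 40), (2, 50), (3, 60), (4, 70), (5, 80),
        (6, 85), (7, 90), (8, 92), (9, 95), (10, 97)]

train, test = train_test_split(data)
print(len(train), len(test))  # 8 2
```

The model never sees the testing rows during training, which is what makes the later accuracy check honest.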
Step 3: Train the Model
The model learns the relationship between input and output.
Example:
More hours studied → more marks
Machine learns this trend.
Step 4: Model Testing
Model predicts output for new input.
Example:
If a student studies 6 hours → model predicts maybe 85 marks.
Step 5: Evaluation
We check:
How accurate is the model?
How close are predictions to actual results?
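One simple way to quantify "how close are predictions to actual results" for a regression model is the mean absolute error; the predicted numbers below are hypothetical model outputs, used only to show the calculation.

```python
def mean_absolute_error(actual, predicted):
    # Average of the absolute differences between truth and prediction.
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

actual = [50, 80, 90]      # true marks, from the table above
predicted = [52, 76, 93]   # hypothetical model predictions
print(mean_absolute_error(actual, predicted))  # 3.0
```

A lower error means the predictions sit closer to the actual results; for classification, the analogous check is accuracy (fraction of labels predicted correctly).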
Step 6: Use the Model
Now the model can predict real-life outcomes.
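Putting the steps together: a least-squares straight-line fit (simple linear regression, coded by hand) to the tiny hours/marks table above, then used to predict marks for a new input of 6 hours — landing close to the "maybe 85" mentioned earlier.

```python
def fit_line(xs, ys):
    # Ordinary least squares for y = slope * x + intercept.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Step 1: labeled data (feature = hours studied, label = marks scored).
hours = [2, 5, 7]
marks = [50, 80, 90]

# Step 3: train — the model learns the trend "more hours -> more marks".
slope, intercept = fit_line(hours, marks)

# Steps 4/6: predict the output for a new input.
predicted = slope * 6 + intercept
print(round(predicted, 1))  # 84.2 — close to the "maybe 85" above
```

With only three points the dataset is tiny, but the workflow (collect, train, predict) is exactly the one described in the steps.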
8. TYPES OF SUPERVISED LEARNING
A) Classification (output = category)
Example tasks:
Email → Spam / Not Spam
Image → Cat / Dog
Credit card → Fraud / Not Fraud
Output is a label.
B) Regression (output = number)
Example tasks:
Predict house price
Predict temperature
Predict marks
Output is a continuous value.
9. EXAMPLES OF SUPERVISED LEARNING
✔️1. Gmail Spam Detection
Input → Email text
Output → Spam / Not Spam
Used in classification.
✔️2. Credit Card Fraud Detection
Input → Transaction data
Output → Fraud / Not Fraud
✔️3. YouTube Recommendations
Input → User history
Output → Video prediction
10. Advantages
1. Accurate and Reliable
Since training is done with correct answers, the model learns very well.
2. Easy to Understand
The Input → Output mapping is simple.
3. Useful for Many Real-World Applications
Spam filtering, medical diagnosis, speech recognition, etc.
4. Works with Large Datasets
The bigger the dataset, the smarter the model.
Disadvantages
1. Needs Labeled Data
Labeling data is expensive and time-consuming.
2. Overfitting Risk
The model may “memorize” the training data instead of actually learning.
3. Cannot Discover Hidden Patterns on Its Own
It only learns what it is taught.
11. DECISION TREES (SUPERVISED LEARNING ALGORITHM)
A decision tree is a simple and powerful AI model that makes decisions using a tree-like
flowchart.
It works by asking a series of yes/no or simple questions until it reaches an answer.
✔️Example (Simple)
Suppose we want to decide: Will a person play sports today?
A decision tree might start by asking “Is it raining?” and then follow the yes/no branches down to a decision.
12. HOW A DECISION TREE WORKS
1. Training Data
We give it examples like:
Weather Temperature Play?
Sunny Mild Yes
Rainy Cold No
Cloudy Warm Yes
The model studies these labeled examples.
2. Find the Best Attribute to Split
The algorithm chooses the best question to divide the data.
Example:
“Is it raining?” may separate the data most effectively.
3. Create Branches
Each answer (“yes”, “no”) creates a branch.
4. Repeat Splitting
It keeps breaking the data into smaller groups until each group contains similar outputs.
5. Form a Tree
The final model looks like a flowchart:
Root → Question 1
Branches → Next questions
Leaf → Final decision/output
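The tiny weather table above can be captured by a hand-written tree. One root question is enough to split it perfectly; a real algorithm such as ID3 or CART would choose these splits automatically from the data.

```python
def will_play(weather, temperature):
    # Root question: "Is it raining?" — this one split already
    # separates the three training rows above perfectly, so the
    # temperature attribute is never needed in this tiny tree.
    if weather == "Rainy":
        return "No"   # leaf: Rainy/Cold -> No
    return "Yes"      # leaf: Sunny/Mild and Cloudy/Warm -> Yes

# Check the tree against the training examples from the table.
training = [("Sunny", "Mild", "Yes"),
            ("Rainy", "Cold", "No"),
            ("Cloudy", "Warm", "Yes")]
for weather, temp, label in training:
    assert will_play(weather, temp) == label

print(will_play("Rainy", "Warm"))  # No — follows the Rainy branch
```

This also hints at the overfitting risk listed below: with so few examples, a deeper tree could memorize noise instead of a real pattern.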
14. Advantages
Easy to understand and visualize
Works for classification and regression
No need for data normalization
Can handle numerical + categorical data
Disadvantages
Can overfit if the tree becomes too deep
Sensitive to small changes in data
May produce complex trees