This document is an introduction to a machine learning lecture that covers supervised learning and linear regression. The lecture aims to help students understand:
- The supervised learning setting where the data consists of input-output pairs
- Linear regression and how to apply linear regression models to make predictions
- How to derive the least squares estimate for linear regression
The document outlines that linear models can approximate many real processes and are often used as modules in larger systems. It also provides examples of regression problems and demonstrates linear prediction on sample data.
ML LEC 01 Intro to Supervised Learning
1. Machine Learning
LEC 01: Welcome
Instructor: Mohammad Akbari
Welcome!
2. Outline
• This lecture introduces the topic of supervised learning, where the
data consists of input-output pairs. Inputs are also often referred
to as covariates, predictors, and features, while outputs are known as
variates, targets, and labels. The goal of the lecture is for you to:
- Understand the supervised learning setting.
- Understand linear regression (also known as least squares).
- Understand how to apply linear regression models to make predictions.
- Learn to derive the least squares estimate.
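The input-output pairs described above can be sketched in code. This is a minimal illustration with made-up numbers, not the lecture's actual dataset:

```python
import numpy as np

# A toy supervised dataset: each row of X is one input (its entries are the
# features/covariates), and each entry of y is the corresponding scalar
# output (the target/label). The values here are illustrative only.
X = np.array([[1.0, 2.0],
              [2.0, 0.5],
              [3.0, 1.5]])          # n = 3 examples, d = 2 features
y = np.array([2.0, 2.25, 3.75])    # one target per example

n, d = X.shape
pairs = list(zip(X, y))  # the (input, output) pairs of supervised learning
print(n, d)  # 3 2
```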
3. Linear supervised learning
• Many real processes can be approximated with linear models.
• Linear regression often appears as a module of larger systems.
• Linear problems can be solved analytically.
• Linear prediction provides an introduction to many of the core
concepts of machine learning.
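The claim that linear problems can be solved analytically can be illustrated with the closed-form least squares solution, θ̂ = (XᵀX)⁻¹Xᵀy. Below is a minimal sketch using a stable solver on synthetic data (the weights [1, 0, 0.5] and the data are invented for illustration):

```python
import numpy as np

# Sketch of the analytic least-squares fit. With noiseless synthetic data,
# the solver should recover the generating weights exactly.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(50),                      # bias column of 1s
                     rng.uniform(0, 10, size=(50, 2))])  # two features
true_theta = np.array([1.0, 0.0, 0.5])  # assumed weights, for illustration
y = X @ true_theta                      # noiseless targets

# lstsq solves the least squares problem without forming (X^T X)^{-1} explicitly.
theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(theta_hat)  # ≈ [1. 0. 0.5]
```

Using `np.linalg.lstsq` rather than inverting XᵀX directly is the standard numerically stable choice.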
4. Regression: Output a scalar
• Stock Market Forecast: f(input) = Dow Jones Industrial Average tomorrow
• Self-driving Car: f(input) = control wheel angle
• Product Recommendation (e.g., Amazon): f(User A, Product B) = possibility to buy
11. Linear prediction on test data
Likewise, for a point that we have never seen before, say x = [50 20],
we generate the following prediction:
ŷ(x) = [1 50 20] θ̂ = 1(1) + 50(0) + 20(0.5) = 1 + 0 + 10 = 11.
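The prediction above can be reproduced in a couple of lines. Note that the weight vector θ̂ = [1, 0, 0.5] is inferred here from the arithmetic 1 + 0 + 10 on the slide; it is an assumption, not stated explicitly in this excerpt:

```python
import numpy as np

# Weights inferred from the slide's arithmetic (1 + 0 + 10 = 11); assumed.
theta_hat = np.array([1.0, 0.0, 0.5])
x_test = np.array([1.0, 50.0, 20.0])  # prepend 1 for the bias term

y_pred = x_test @ theta_hat  # inner product = linear prediction
print(y_pred)  # 11.0
```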
17. Next lecture
• In the next lecture, we will learn to derive the linear regression estimates
by maximum likelihood with multivariate Gaussian distributions.
• Please go to the Wikipedia page for the multivariate Normal
distribution beforehand or, even better, read Chapter 4 of Kevin
Murphy’s book and attempt the first three questions of the exercise
sheet.