1. 5 ESSENTIAL IDEAS
IN MACHINE LEARNING
CARL DAWSON 19 MARCH 2019
DEVELOP YOUR ML INTUITION
BY MASTERING THESE FIVE IDEAS
MachineLearningPhD.com
2. GRADIENT DESCENT
Solving for the optimal parameter values
directly is often computationally intractable.
By iteratively minimising a loss function
instead, we avoid most of that
computational work but still arrive at
good solutions (in most cases).
Gradient descent is used in gradient
boosting and neural networks (via
backpropagation).
To learn more: Check out Andrew
Ng’s Machine Learning course on
Coursera.
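As a minimal sketch (a toy example, not from the slides): gradient descent on a one-parameter loss L(w) = (w - 3)^2, whose gradient is 2(w - 3), steps repeatedly against the gradient until it reaches the minimiser.

```python
def gradient_descent(grad, w0, lr=0.1, steps=100):
    """Repeatedly step against the gradient of the loss."""
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return w

# Minimise L(w) = (w - 3)^2, whose gradient is 2 * (w - 3).
w_star = gradient_descent(lambda w: 2 * (w - 3), w0=0.0)
print(round(w_star, 4))  # prints 3.0
```

The same update rule, applied to every weight at once, is what backpropagation drives inside a neural network.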
3. THE KERNEL TRICK
If your data isn’t separable in the space
it’s in, map it into a higher-dimensional
space (for example, by adding higher-order
terms) until a separating surface exists.
The ‘trick’ is that the kernel function
computes inner products in that higher-
dimensional space directly from the original
data, so the mapping never has to be
carried out explicitly.
To learn more: Try Christopher Bishop’s
book Pattern Recognition and Machine
Learning.
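A toy sketch of the trick for the degree-2 polynomial kernel on 2-D inputs (an illustrative example, not from the slides): the kernel value (x . z)^2, computed in the original space, equals the inner product of the explicitly mapped 3-D features.

```python
import math

def phi(x):
    # Explicit feature map into 3-D for the degree-2 polynomial kernel.
    x1, x2 = x
    return (x1 * x1, math.sqrt(2) * x1 * x2, x2 * x2)

def kernel(x, z):
    # The kernel trick: the same inner product, computed directly
    # from the original 2-D vectors, without ever building phi(x).
    return (x[0] * z[0] + x[1] * z[1]) ** 2

x, z = (1.0, 2.0), (3.0, 0.5)
explicit = sum(a * b for a, b in zip(phi(x), phi(z)))
assert abs(explicit - kernel(x, z)) < 1e-9
```

For higher degrees or the RBF kernel the mapped space grows huge (or infinite-dimensional), which is exactly why computing the inner product directly matters.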
4. DIMENSIONALITY REDUCTION
Includes Principal Component Analysis and
Linear Discriminant Analysis.
A family of statistical procedures that reduce
the dimensionality of the data while losing as
little predictive power as possible.
PCA, for example, finds a sequence of
orthogonal directions along which the data
varies most, and projects the data onto the
leading few.
To learn more: The Elements of Statistical
Learning uses Dimensionality Reduction
throughout to improve other algorithms.
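A short sketch of PCA via the singular value decomposition (a toy NumPy example, not from the slides): the right singular vectors of the centred data matrix are the orthogonal principal directions, ordered by how much variance they explain.

```python
import numpy as np

# Toy data: points lying mostly along the direction (1, 1),
# with a little isotropic noise added.
rng = np.random.default_rng(0)
t = rng.normal(size=(200, 1))
X = t @ np.array([[1.0, 1.0]]) + 0.05 * rng.normal(size=(200, 2))

# PCA via SVD of the centred data: the rows of Vt are the
# orthogonal principal directions, sorted by variance explained.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Project onto the first principal component (2-D -> 1-D).
X_reduced = Xc @ Vt[0]

# The leading direction should be close to (1, 1) / sqrt(2).
print(np.abs(Vt[0]))
```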
5. DEEP NEURAL NETWORKS
Determining features from images and textual
data is complex.
Deep learning has greatly reduced the need
for manual feature engineering by
automatically learning hierarchical features.
In order to implement Deep Neural Networks
you’ll have to understand Gradient Descent,
matrix operations (including dot products)
and Logistic Regression (for the sigmoid
activation function).
To learn more: Check out the Deep Learning
Book by Ian Goodfellow (and others), it’s not
for the faint of heart, but it’s very good!
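A minimal forward pass showing how those prerequisites fit together (a toy NumPy sketch with random placeholder weights, not a trained model): each layer is a dot product followed by the sigmoid from logistic regression.

```python
import numpy as np

def sigmoid(z):
    # The logistic function, borrowed from logistic regression.
    return 1.0 / (1.0 + np.exp(-z))

# Random placeholder weights for a tiny 2-layer network.
rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # input dim 3 -> hidden dim 4
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden dim 4 -> output dim 1

def forward(x):
    h = sigmoid(x @ W1 + b1)       # hidden layer: dot product + nonlinearity
    return sigmoid(h @ W2 + b2)    # output layer: a value in (0, 1)

y = forward(np.array([1.0, -0.5, 2.0]))
print(float(y[0]))  # some probability-like value between 0 and 1
```

Training the network is then gradient descent (backpropagation) applied to W1, b1, W2, b2.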
6. REINFORCEMENT LEARNING
Deep Reinforcement Learning adds the Q
function: the network learns to estimate the
long-term value of each action, and the agent
acts to maximise that value over its lifespan.
Reinforcement Learning is often considered to
be the third branch of machine learning after
Supervised and Unsupervised learning.
Its use in robotics and self-driving cars makes
it a very attractive area of machine learning
to study.
To learn more: Read Chris Watkins’ PhD
thesis which introduced Q-Learning.
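A toy sketch of tabular Q-learning on a tiny chain environment (an illustrative example; the environment is invented here, not taken from the thesis). The Q-learning update moves Q(s, a) toward the reward plus the discounted value of the best next action.

```python
import random

# Tiny deterministic chain: states 0..3, actions 0 = left, 1 = right;
# reaching state 3 pays reward 1 and ends the episode.
N_STATES, GOAL = 4, 3
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2
random.seed(0)

def step(s, a):
    s2 = max(0, s - 1) if a == 0 else min(GOAL, s + 1)
    return s2, (1.0 if s2 == GOAL else 0.0)

for _ in range(500):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda act: Q[s][act])
        s2, r = step(s, a)
        # The Q-learning update (Watkins, 1989):
        # Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The greedy policy should always move right, toward the goal.
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(GOAL)]
print(policy)  # prints [1, 1, 1]
```

Deep Q-Networks replace the table Q with a neural network over states, but the update rule is the same idea.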
7. GET MACHINE LEARNING
TIPS AND TRICKS IN YOUR
INBOX
MachineLearningPhD.com
SUBSCRIBE TODAY