When it comes to studying, machines and students have one thing in common: examinations. To perform well on their final evaluations, humans need to take classes, read books, and solve practice quizzes. Similarly, machines use Artificial Intelligence to memorize data, infer feature correlations, and pass validation standards in order to solve almost any problem. In this quick introductory session, we'll walk through these analogies to learn the core concepts behind Machine Learning, and why it works so well!
Machine Learning: A Walk Through School Exams
1. Introduction to Machine Learning and Artificial Intelligence: A Walk through School Exams!
2. Speaker
Ramsha Siddiqui
Associate Data Scientist (NLP)
i2c inc. Lahore, Pakistan
● Conversational AI and Dialog Systems.
● Google Developers Group, Lahore.
● WTM and WDA Scholar, 2018-2020.
● FAST NUCES, Lahore - Spring, 2019.
● Global UGRAD Alumnus - UIW, Texas.
● Co-Founder of Startup - FunKadaa.
3. What is Machine Learning?
Machine Learning is an application of Artificial Intelligence that allows machines to learn and improve from experience, without being explicitly programmed, to solve almost any problem in the world.
14. 3. Model Selection / Construction
An algorithm or mathematical formulation containing variables that can be learned from Training Data is called a Machine Learning Model.
The values of the variables are saved after
training to be used for predictions later.
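As a minimal sketch of this idea (a made-up one-variable model, not anything from the slides), here is a tiny linear model whose single learnable variable w is derived from training data and then saved for later predictions:

```python
import numpy as np

# Toy training data for the (assumed) relation y = 2*x
x_train = np.array([1.0, 2.0, 3.0, 4.0])
y_train = np.array([2.0, 4.0, 6.0, 8.0])

# "Model": y = w * x, with one learnable variable w.
# Closed-form least-squares solution: w = sum(x*y) / sum(x*x)
w = np.sum(x_train * y_train) / np.sum(x_train * x_train)

# The learned value of w is saved and reused for predictions later.
def predict(x, w=w):
    return w * x

print(predict(5.0))  # learned w = 2.0, so this prints 10.0
```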
15. 3. Model Selection / Construction
Selecting the right "kind of model" for your Machine Learning task is also important, because feature-learning / data-understanding is an internal process for machines, and certain types of problems are better suited to certain types of algorithms.
Letโs look at the two broadest types!
16. Supervised
Learning
When a Model's Expected Outputs are defined, it's called a Supervised Learning Problem. It mainly has two sub-categories:
● Classification (categorizing input data into classes)
● Regression (predicting a continuous / real value instead of an output class-label)
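To make the two sub-categories concrete, here is a toy sketch (with invented exam-score numbers, in keeping with the school-exams analogy) that frames the same data as both a regression and a classification problem:

```python
import numpy as np

# Hours studied (input feature) for five students
hours = np.array([1.0, 2.0, 3.0, 4.0, 5.0])

# Regression target: exam score (a continuous / real value)
scores = np.array([52.0, 61.0, 70.0, 79.0, 88.0])

# Classification target: pass (1) / fail (0) -- a class label
passed = (scores >= 60).astype(int)

# Regression: fit score = a*hours + b with least squares
a, b = np.polyfit(hours, scores, deg=1)
print(round(a * 6.0 + b))        # predicted score for 6 hours of study

# Classification: predict "pass" when the predicted score crosses 60
print(int(a * 6.0 + b >= 60))
```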
18. Example: Facial Recognition
Convolutional Neural Networks (CNNs) have been classically useful for
the task of identifying different components / features of images.
20. Example: Speech Recognition
Sequence to Sequence Networks (Seq2Seq) have so far shown
promising results for this task (with updated research and
architectures being released almost every day!)
22. Example: Covid Projection
Time-Series Forecasting is a common problem in Machine Learning, and any model built for that task can be trained here for comparison.
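As a baseline sketch only (the daily counts below are invented, and real projections use dedicated forecasting models such as ARIMA or recurrent networks), a moving-average forecaster illustrates the shape of the task:

```python
import numpy as np

# Hypothetical daily case counts (illustrative numbers only)
cases = np.array([100, 120, 140, 160, 180, 200, 220], dtype=float)

# A minimal baseline forecaster: predict the next value as the
# mean of the last `window` observations.
def moving_average_forecast(series, window=3):
    return series[-window:].mean()

print(moving_average_forecast(cases))  # (180 + 200 + 220) / 3 = 200.0
```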
23. A Note: For some of these problems, such as
● Image Classification
● Text Classification, etc.
you'll find that certain Model Architectures are problem-favorable, as they've shown good results before. It's always wise to do some research beforehand to find good Model Implementations / Datasets for the task you're working on, before starting from scratch.
24. Unsupervised
Learning
When a Model's Expected Outputs are undefined, and input data only needs to be clustered or segmented into somewhat meaningful groups, it's called Unsupervised Learning.
Example:
● K-Means Algorithm (finds cluster centroids based on the number of centers defined = K)
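Here is a minimal NumPy sketch of the K-Means loop, using a deterministic farthest-point initialization (a simplification of my own; library implementations such as scikit-learn's KMeans add smarter initialization and convergence checks):

```python
import numpy as np

def init_centroids(points, k):
    # Deterministic farthest-point initialization (a simplification)
    centroids = [points[0]]
    while len(centroids) < k:
        dists = np.min([np.linalg.norm(points - c, axis=1) for c in centroids], axis=0)
        centroids.append(points[dists.argmax()])
    return np.array(centroids)

def kmeans(points, k, iters=10):
    centroids = init_centroids(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid...
        dists = np.linalg.norm(points[:, None] - centroids[None, :], axis=2)
        labels = dists.argmin(axis=1)
        # ...then move each centroid to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids, labels

# Two well-separated 2-D blobs; K = 2 cluster centers
pts = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                [5.0, 5.0], [5.1, 5.2], [5.2, 5.1]])
centroids, labels = kmeans(pts, k=2)
```

With K = 2 the two blobs end up in separate clusters, with centroids at the blob means.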
25. Example: Topic Modeling
Training a model to learn the topic of a document based on the correlations of the different words used inside it, without knowing in advance what those topics are.
27. 4. Model Training
Feeding your Input Data to your Model for learning is called Model Training. Here are some things you may want to do before that:
● Data Folds: Splitting your Training Data into two sets: one for training on, and one for validating performance.
● Epochs: The number of times your model sees the Training Data (revising it).
● Model Parameters: Picking out a set of possible parameter values that you want to tweak in your model.
28. Example: w1*x1 + w2*x2 + w3*x3 = y
where for every example, we have input values "x" and model variables (weights) "w".
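A sketch tying these pieces together: the slide's three-weight model trained by gradient descent (an assumed but common training choice), with a train/validation fold and multiple epochs. The "true" weights are invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data for y = 1*x1 + 2*x2 + 3*x3 (true weights assumed for the demo)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, 2.0, 3.0])
y = X @ true_w

# Data folds: split into a training set and a validation set
X_train, X_val = X[:80], X[80:]
y_train, y_val = y[:80], y[80:]

# Model variables "w", learned by gradient descent on squared error
w = np.zeros(3)
lr = 0.1                      # a model parameter you can tweak
for epoch in range(50):       # epochs: passes over the training data
    grad = 2 * X_train.T @ (X_train @ w - y_train) / len(X_train)
    w -= lr * grad

# Validate performance on the held-out fold
val_error = np.mean((X_val @ w - y_val) ** 2)
```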
29. 5. Model Testing
Alas, models can't escape examinations either. After training a Model, we want to look at its Predictions, compare the values it got right with those it got wrong, and evaluate overall performance.
Choosing the right performance metric matters and varies per problem!
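For instance, here is a quick sketch (with made-up labels) of accuracy alongside precision and recall, metrics that can tell different stories on the same predictions:

```python
# Compare model predictions against true labels, as in grading an exam
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Accuracy: fraction of all predictions that were correct
correct = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = correct / len(y_true)
print(accuracy)  # 6 of 8 correct -> 0.75

# Precision / recall focus on the positive class only
tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
precision = tp / sum(y_pred)   # of predicted positives, how many were right
recall = tp / sum(y_true)      # of actual positives, how many were found
```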
31. On average, most humans and models are able
to learn almost any task following this method.
In terms of performance comparisons
however, sometimes we:
● Underachieve
● Overachieve
32. Underachieving
Reasons:
● Unseen Test Data: If the data in the examination is totally unrelated to the Training Data, the model will get confused.
Evenly distribute your data between Train and Test Sets.
● Insufficient Training Data: Perhaps the amount of Training Data is insufficient / too difficult to learn anything useful from.
Add more data or try new features.
● Model Architecture: The Model selected for a particular task may not be the right choice for it.
33. Overachieving
Reasons:
● Randomness: Sometimes it's just dumb luck. (Models are initialized with random variable values.)
● Model Pretraining: If your Model was trained on a similar task before and has a good memory, it may perform well even if your Training Data was pretty small / useless.
A lot of research is going into pre-training these days!
34. Model Predictions
In order to trust that our models are working correctly, it's always a good idea to provide additional explainability with the model output, like:
● Confidence Score(s)
● Input Features
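As a sketch of how a confidence score can be produced (the class names and raw scores below are invented), a softmax over a model's raw outputs yields probabilities whose maximum can be reported alongside the predicted label:

```python
import math

def softmax(logits):
    # Turn raw model outputs into probabilities that sum to 1
    exps = [math.exp(v) for v in logits]
    total = sum(exps)
    return [v / total for v in exps]

# Hypothetical raw scores for classes ["cat", "dog", "bird"]
logits = [2.0, 1.0, 0.1]
probs = softmax(logits)
confidence = max(probs)   # report this alongside the predicted label
```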