Online Learning
Felipe Almeida
Rio Machine Learning Meetup, August 2016
Introduction, overview and examples
Structure
● Introduction
● Use cases
● Types of Targets
● Approaches
● Current Trends / Related Areas
● Links
Introduction
● Online Learning is generally described as doing machine learning
in a streaming data setting, i.e. training a model in consecutive
rounds
○ At the beginning of each round the algorithm is presented with
an input sample, and must perform a prediction
○ The algorithm verifies whether its prediction was correct or
incorrect, and feeds this information back into the model, for
subsequent rounds
Introduction
Whereas in batch (or offline) learning you have access to the whole
dataset to train on.
[Figure: a table of N samples, each with D features (x1 ... xd) and a
label y, fed all at once through batch training to produce a trained
model.]
Introduction
In online learning your model evolves as you see new data, one
example at a time.
[Figure: as time increases, each incoming example (x1 ... xd, y) at
time t, t+1, t+2, ... triggers an online update of the model at that
time step.]
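The predict-then-update cycle described above can be sketched as a generic loop. This is a minimal illustration, not any particular library's API: `predict`, `update`, and `loss` are hypothetical callables standing in for whatever concrete algorithm is plugged in.

```python
def online_learning_loop(stream, model, predict, update, loss):
    """Generic online learning loop: predict, suffer a loss, update.

    stream yields (x_t, y_t) pairs, one example at a time.
    """
    total_loss = 0.0
    for x_t, y_t in stream:
        p_t = predict(model, x_t)        # commit to a prediction first
        total_loss += loss(p_t, y_t)     # only then see the true answer
        model = update(model, x_t, y_t)  # fold the answer back into the model
    return model, total_loss
```

For instance, a toy "predict the previous label" model is just `predict = lambda m, x: m` and `update = lambda m, x, y: y`.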
Introduction
● In other words, you need to answer a sequence of questions but
you only have access to answers to previous questions
(Adapted from Shalev-Shwartz 2012)
● Each t represents a trial
● xt is the input data for trial t
● pt is the prediction for the label corresponding to xt
● The loss is the difference between what you predicted and the
actual label
● After computing the loss, the algorithm will use this information to
update the model generating the predictions for the next trials
Introduction: Main Concepts
The main objective of online learning algorithms is to minimize the
regret.
The regret is the difference between the performance of:
● the online algorithm
● an ideal algorithm that has been able to train on the whole data
seen so far, in batch fashion
In other words, the main objective of an online machine learning
algorithm is to perform as closely as possible to the corresponding
offline algorithm. This is measured by the regret.
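The verbal definition above can be written down compactly. In the notation below, the loss function ℓ and the comparison class H are generic placeholders; this is the standard formulation, not something specific to these slides:

```latex
\mathrm{Regret}_T \;=\; \sum_{t=1}^{T} \ell(p_t, y_t)
\;-\; \min_{h \in \mathcal{H}} \sum_{t=1}^{T} \ell\big(h(x_t), y_t\big)
```

The first sum is the online algorithm's cumulative loss; the second is the loss of the best single hypothesis chosen in hindsight over the whole sequence.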
Use cases
● Online algorithms are useful in at least two scenarios:
● When your data is too large to fit in memory
○ So you need to train your model one example at a time
● When new data is constantly being generated, and/or is
dependent upon time
Some cases where data is constantly being generated and you need
quick predictions:
● Real-time recommendation
● Fraud detection
● Spam detection
● Portfolio selection
● Online ad placement
Types of Targets
There are two main ways to think about an online learning problem, as far as
the target functions (that we are trying to learn) are concerned:
● Stationary Targets
○ The target function you are trying to learn does not change over time
(but may be stochastic)
● Dynamic Targets
○ The process that is generating input sample data is assumed to be
non-stationary (i.e. may change over time)
○ The process may even be adapting to your model (i.e. in an
adversarial manner)
Example I: Stationary Targets
For stationary targets, the input-generating process is a single, but
unknown, function of the attributes.
● Ex.: Some process generates, at each time step t, inputs of the
form (x1, x2, x3) where each attribute is a bit, and the label y is the
result of x1 ∨ (x2 ∧ x3):

x1 x2 x3 | y
 1  0  1 | 1    Input at time t
 0  1  1 | 1    Input at time t+1
 0  0  0 | 0    Input at time t+2
 1  0  0 | 1    Input at time t+3

(Unknown from the point of view of the online learning algorithm,
obviously!)
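The stationary process above can be simulated in a few lines. The function and variable names here are illustrative, not from the slides:

```python
import random

def target(x1, x2, x3):
    # The fixed (stationary) but unknown target: y = x1 OR (x2 AND x3)
    return x1 | (x2 & x3)

def generate_trial():
    """One input sample from the stationary process: three random bits
    plus the label the hidden target assigns to them."""
    x = [random.randint(0, 1) for _ in range(3)]
    return x, target(*x)
```

An online learner would see the `x` of each trial, predict, and only then be shown `target(*x)`.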
Example II: Dynamic Targets
Spam filtering
The objective of a good spam filter is to accurately model the following
function:
model: (x1, x2, x3, x4, ... xd) → {0,1}
i.e. a mapping from features extracted from an e-mail to a single bit:
spam or not spam?
Example II: Dynamic Targets
Spam filtering
● So suppose you have learned that the presence of the word
“Dollars” implies that an e-mail is likely spam.
● Spammers have noticed that their scammy e-mails are falling prey
to spam filters so they change tactics:
○ So instead of using the word “Dollars” they start using the
word “Euro”, which fools your filter but also accomplishes
their goal (have people read the e-mail)
Approaches
A couple of approaches have been proposed in the literature:
● Online Learning from Expert Advice
● Online Learning from Examples
● General algorithms that may also be used in the online setting
Approaches: Expert Advice
In this approach, it is assumed that the algorithm has multiple oracles
(or experts) at its disposal, which it can use to produce its output in
each trial.
In other words, the task of this online algorithm is simply to learn which
of the experts it should use.
The simplest algorithm in this realm is the Randomized Weighted
Majority Algorithm.
Approaches: Expert Advice
Randomized Weighted Majority Algorithm
● Every expert has a weight (starting at 1)
● For every trial:
○ Randomly select an expert (larger weight => more likely)
○ Use that expert’s output as your prediction
○ Verify the correct answer
○ For each expert:
■ If it was mistaken, decrease its weight by a constant factor
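The trial loop above can be sketched directly in code. This is a minimal illustration of one trial; the function name and the penalty factor `beta` are my own choices (any 0 < beta < 1 works):

```python
import random

def rwm_round(weights, expert_predictions, true_label, beta=0.5):
    """One trial of the Randomized Weighted Majority algorithm.

    weights: one weight per expert, all starting at 1.0.
    expert_predictions: each expert's output for this trial.
    """
    # Randomly select an expert: larger weight => more likely
    total = sum(weights)
    r = random.uniform(0, total)
    cumulative = 0.0
    chosen = len(weights) - 1
    for i, w in enumerate(weights):
        cumulative += w
        if r <= cumulative:
            chosen = i
            break
    prediction = expert_predictions[chosen]

    # After the correct answer is revealed, penalize every mistaken
    # expert by the constant factor beta
    new_weights = [w * beta if p != true_label else w
                   for w, p in zip(weights, expert_predictions)]
    return prediction, new_weights
```

Calling this in a loop, experts that keep making mistakes see their weights shrink geometrically and are selected less and less often.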
Approaches: Learning from Examples
Learning from examples differs from using Expert Advice in that we
do not need to predefine the experts from which we will derive our
predictions.
We need, however, to know what Concept Class we want to search
over.
Approaches: Learning from Examples
A Concept Class is a set of functions (concepts) that subscribe to a
particular model.
Some examples of concept classes are:
● The set of all monotone disjunctions of N variables
● The set of non-monotone disjunctions of N variables
● Decision lists with N variables
● Linear threshold formulas
● DNF (disjunctive normal form) formulas
Approaches: Learning from Examples
The Winnow Algorithm is one example of a simple algorithm that
learns monotone disjunctions online.
In other words, it learns any concept (function), provided the concept
belongs to the Concept Class of monotone disjunctions.
It also uses weights, as in the previous example.
Approaches: Learning from Examples
Winnow algorithm
● Initialize all weights (w1, w2, ... wn) to 1
● Given a new example:
○ Predict 1 if wᵀx > n
○ Predict 0 otherwise
● Check the true answer
● For each input attribute:
○ If the algorithm predicted 0 but the true answer was 1, double the
value of every weight corresponding to an attribute = 1
(we aimed too low; let's try to make our guess higher)
○ If the algorithm predicted 1 but the true answer was 0, halve the
value of every weight corresponding to an attribute = 1
(we aimed too high; let's try to make our guess lower)
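A minimal sketch of these two steps, following Littlestone's standard formulation of Winnow (promote the active weights on a false negative, demote them on a false positive); the function names are my own:

```python
def winnow_predict(weights, x, n):
    # Predict 1 if the weighted sum exceeds the threshold n
    return 1 if sum(w * xi for w, xi in zip(weights, x)) > n else 0

def winnow_update(weights, x, prediction, true_label):
    """Multiplicative update after the true label is revealed."""
    if prediction == 0 and true_label == 1:
        # Aimed too low: double the weights of attributes that were on
        return [w * 2 if xi == 1 else w for w, xi in zip(weights, x)]
    if prediction == 1 and true_label == 0:
        # Aimed too high: halve the weights of attributes that were on
        return [w / 2 if xi == 1 else w for w, xi in zip(weights, x)]
    return weights  # correct prediction: no change
```

Only the weights of attributes that were actually 1 in the example move, which is what lets Winnow quickly "winnow out" irrelevant attributes.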
Approaches: Other Approaches
More general algorithms can also be used in an online setting, such as:
● Stochastic Gradient Descent
● Perceptron Learning Algorithm
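Both of these fit the same per-example mold. As one concrete instance, here is a sketch of a single online step of the Perceptron Learning Algorithm (the function name and learning-rate parameter are illustrative):

```python
def perceptron_update(weights, bias, x, y, lr=1.0):
    """One online step of the Perceptron Learning Algorithm.

    x: feature vector; y: true label in {0, 1}.
    Returns the (possibly updated) weights and bias.
    """
    activation = sum(w * xi for w, xi in zip(weights, x)) + bias
    prediction = 1 if activation > 0 else 0
    if prediction != y:
        # Mistake: nudge the separating hyperplane toward the example
        sign = 1 if y == 1 else -1
        weights = [w + lr * sign * xi for w, xi in zip(weights, x)]
        bias += lr * sign
    return weights, bias
```

Stochastic Gradient Descent has the same shape: see one example, compute the gradient of the loss on that example alone, and take one small step.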
Current Trends / Related Areas
Adversarial Machine Learning
● Refers to scenarios where your input-generating process is an
adaptive adversary
● Applications in:
○ Information Security
○ Games
Current Trends / Related Areas
One-shot Learning
● Refers to scenarios where you must make predictions after
seeing just a few, or even a single, input sample
● Applications in:
○ Computer Vision
Links
● http://ttic.uchicago.edu/~shai/papers/ShalevThesis07.pdf
● Blum 1998 Survey Paper
● UofW CSE599S Online Learning
● Machine Learning From Streaming data
● Twitter Fighting Spam with BotMaker
● CS229 - Online Learning Lecture
● Building a real time Recommendation Engine with Data Science
● Online Optimization for Large Scale Machine Learning by Prof. A.
Banerjee
● Learning, Regret, Minimization and Equilibria
Links
● https://github.com/JohnLangford/vowpal_wabbit
● Shai Shalev-Shwartz 2011 Survey Paper
● Hoi et al 2014 - LIBOL
● MIT 6.883 Online Methods in Machine Learning
