Automatic Forecasting at Scale
Sean J. Taylor
12 Aug 2015
Joint Statistical Meetings
Many Forecasting Problems at Facebook
• capacity planning: servers, switches, people, even food
• user / advertiser growth
• revenue
• goal setting for teams (with respect to forecast)
• detecting anomalies
• “trending” units
Business Time Series Have Similar Attributes
• composed of multiple “units” (e.g. countries, users, advertisers, hardware units)
• units are “born” at different times, can exit the sample
• growth curves are common (e.g. saturating a market)
• complex, human-scale seasonality, holidays and events
• structural breaks as exogenous changes happen (e.g. new products, redesigns, site outages)
• missing data
Thousands or millions of forecasts?
Mo’ Data, Mo’ Problems

A second (and third) kind of scale: many people and problems
Goal is to create technology: people who are not experts can use it easily, with few decisions, and trust the output.
Technology?
Results of my search for forecasting advice
▪ carefully clean, scale, and fix missingness in data
▪ try many kinds of models
▪ use model selection procedures based on (penalized)
goodness-of-fit or just ocular goodness-of-fit
▪ lots of tacit knowledge involved — experienced
forecasters have earned a lot of credibility
Why is building a forecaster harder than building a classifier?
How most people build a classifier:
1. Choose a loss function.
2. Gather as much data as possible and construct
potentially useful features.
3. Train models using different amounts of regularization.
4. Choose the one that predicts the best out-of-sample
using some cross-validation procedure.
With a flexible enough learner, the only time a human
needs to intervene is during feature construction!
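As a minimal sketch of that workflow (scikit-learn, the toy data, and the grid of C values are my illustrative choices, not the talk's):

```python
# Sketch of the classifier recipe above: loss function, features,
# regularization grid, cross-validated selection.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                       # step 2: features
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)

# Steps 1, 3, 4: log loss, several regularization strengths,
# cross-validated choice of the best one.
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
    scoring="neg_log_loss",
    cv=5,
)
search.fit(X, y)
print(search.best_params_)
```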
Forecasting as (special) supervised learning
Features
▪ state-features constructed from historical data
▪ time-based features for seasonality, events, etc.
Training
▪ off-the-shelf regularized regression (glmnet, VW)
Model selection
▪ use simulated forecasts to estimate expected loss
When you have a really awesome hammer, make everything look like a regularized regression:

$$\arg\min_{\beta} \; \|y - X\beta\|_2^2 + \lambda_1 \|\beta\|_1 + \lambda_2 \|\beta\|_2^2$$
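That objective is an elastic net. A sketch using scikit-learn's ElasticNet, noting that its single alpha with l1_ratio only roughly maps onto separate lambda_1 and lambda_2 (my illustrative mapping, not the talk's):

```python
# Elastic-net regression: sparse coefficients plus ridge-style shrinkage.
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
beta_true = np.array([2.0, -1.0] + [0.0] * 8)
y = X @ beta_true + rng.normal(scale=0.5, size=200)

# sklearn minimizes (1/2n)||y - Xb||^2 + alpha*l1_ratio*||b||_1
#                   + 0.5*alpha*(1 - l1_ratio)*||b||_2^2
model = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
print(model.coef_.round(2))   # irrelevant features shrink to ~0
```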
A flexible extrapolation model
Fixed-Horizon Forecasting Regression
Regressors are generated from past state:

$$y_{t+H} = f(y_t, y_{t-1}, y_{t-2}, \ldots)$$

For example, a last-value term and a mean-value term:

$$y_{t+H} = \alpha \, y_t + \beta \, \frac{1}{t} \sum_{i=1}^{t} y_i$$
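A sketch of that regression for one fixed horizon H, with ridge regression standing in for the regularized fit (the helper name and toy series are my assumptions):

```python
# Fixed-horizon regression: predict y[t+H] from last value and running mean.
import numpy as np
from sklearn.linear_model import Ridge

def make_training_rows(y, H):
    """X = [last value, running mean of y[0..t]]; target = y[t+H]."""
    t = np.arange(1, len(y) - H)
    last = y[t]
    running_mean = np.cumsum(y)[t] / (t + 1)
    return np.column_stack([last, running_mean]), y[t + H]

rng = np.random.default_rng(2)
y = np.cumsum(rng.normal(size=500)) + 50      # toy random-walk series
X, target = make_training_rows(y, H=7)
model = Ridge(alpha=1.0).fit(X, target)
print(model.coef_)                            # estimates of alpha and beta
```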
State features from one-sided kernel-weighted statistics

Can use any weighted statistic to generate features: mean, variance, quantiles, etc.

[Figure: a one-sided kernel placing weight only on past data, up to time t]
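A sketch of one such one-sided weighting, using an exponential kernel (the kernel shape and half-life are illustrative assumptions):

```python
# Kernel-weighted state features from past data only.
import numpy as np

def ewm_stats(y, t, half_life=30.0):
    """Weighted mean and std of y[0..t], weights decaying into the past."""
    past = y[: t + 1]
    lags = np.arange(t, -1, -1)          # lag 0 for y[t], lag t for y[0]
    w = 0.5 ** (lags / half_life)
    w /= w.sum()
    mean = np.sum(w * past)
    var = np.sum(w * (past - mean) ** 2)
    return mean, np.sqrt(var)
```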
Assumption: local smoothness
Assume parameters vary smoothly over the forecast horizon (same as assuming the forecast is locally smooth), fitting a different model for each horizon:

$$y_{t+H} = \alpha_H \cdot y_t + \beta_H \cdot \frac{1}{t} \sum_{i=1}^{t} y_i$$

[Figure: $\alpha_H$ varying smoothly as the horizon $H$ runs from 0 to the maximum horizon]
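As a sketch, one model per horizon, reusing the toy series y and the make_training_rows helper from the earlier sketch (both illustrative assumptions, not the talk's code):

```python
# Fit a separate regularized regression for each horizon H.
import numpy as np
from sklearn.linear_model import Ridge

MAX_H = 30
models = {}
for H in range(1, MAX_H + 1):
    X_H, target_H = make_training_rows(y, H)
    models[H] = Ridge(alpha=1.0).fit(X_H, target_H)

# Forecast path from the last observed point: the state features
# (last value, running mean) are the same for every horizon.
x_now = np.array([[y[-1], y.mean()]])
path = [models[H].predict(x_now)[0] for H in range(1, MAX_H + 1)]
```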
Adding Seasonality Features
Add components to the model that represent deterministic
functions of time:
▪ trend
▪ cyclic cubic splines for yearly seasonality
▪ day-of-week, day-of-year, hour-of-day dummy variables
▪ smooth curves around known holidays
$$y_{t+H} = f(y_t, y_{t-1}, y_{t-2}, \ldots) + g(t + H)$$
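A sketch of such deterministic time features g(t+H) with pandas, where Fourier terms stand in for the cyclic cubic splines mentioned above (the feature set and constants are illustrative assumptions):

```python
# Deterministic functions of time: trend, day-of-week, yearly seasonality.
import numpy as np
import pandas as pd

dates = pd.date_range("2015-01-01", periods=365, freq="D")
feats = pd.DataFrame({"trend": np.arange(len(dates))}, index=dates)
feats = feats.join(
    pd.get_dummies(pd.Series(dates.dayofweek, index=dates), prefix="dow")
)
doy = dates.dayofyear.values
for k in (1, 2, 3):                       # smooth yearly seasonality
    feats[f"yr_sin{k}"] = np.sin(2 * np.pi * k * doy / 365.25)
    feats[f"yr_cos{k}"] = np.cos(2 * np.pi * k * doy / 365.25)
```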
Input Data for Training Series

Raw series:

  t     y
  1/1   5
  1/2   9
  1/3   16

State features:

  t     last   mean
  1/1   -      -
  1/2   5      5
  1/3   9      7

Target + time features:

  t+H   y    Mon   Tue
  1/1   5    1     0
  1/2   9    0     1
  1/3   14   0     0

Stacked training rows:

  t+H   t     H   y    last   mean   Mon   Tue
  1/2   1/1   1   5    -      -      0     1
  1/3   1/1   2   9    -      -      0     0
  1/3   1/2   1   14   5      5      0     0
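A sketch of flattening a series into those stacked (t+H, t, H) rows with pandas (the column names and the computation of calendar dummies are my illustrative choices; the toy values follow the raw series above):

```python
# Build long-format training rows over every (t, H) pair.
import pandas as pd

series = pd.Series(
    [5, 9, 16],
    index=pd.to_datetime(["2015-01-01", "2015-01-02", "2015-01-03"]),
)
rows = []
for H in (1, 2):
    for i in range(len(series) - H):
        t, t_plus_H = series.index[i], series.index[i + H]
        rows.append({
            "t_plus_H": t_plus_H, "t": t, "H": H,
            "y": series.iloc[i + H],
            "last": series.iloc[i - 1] if i > 0 else None,   # value before t
            "mean": series.iloc[:i].mean() if i > 0 else None,
            "Mon": int(t_plus_H.dayofweek == 0),
            "Tue": int(t_plus_H.dayofweek == 1),
        })
train = pd.DataFrame(rows)
```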
Making it hierarchical
We want to borrow information about processes across
units. Huge opportunity because:
1. We know more about “new” time series than we think if
we are willing to assume they are generated from a
similar process.
2. The more examples from a family of time series
processes we have, the better we are able to learn about
its structure. Example: stock market.
3. Precision gains from borrowing information.
One weird trick for hierarchical models
[Diagram: common features feed a global model shared across units, plus unit-specific models for e.g. United States, Canada, Mexico]

Global parameters $(\alpha, \beta)$ plus unit-specific parameters $(\alpha_i, \beta_i)$:

$$y_{i,t+H} = \alpha \, y_{i,t} + \beta \, \frac{1}{t} \sum_{s=1}^{t} y_{i,s} + \alpha_i \, y_{i,t} + \beta_i \, \frac{1}{t} \sum_{s=1}^{t} y_{i,s}$$
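A sketch of the feature-duplication trick: each row carries a global copy of its features plus a per-unit copy, so regularization shrinks unit-specific deviations toward the shared global fit (helper name and toy numbers are my assumptions):

```python
# Global + unit-specific design matrix: [global block | one block per unit].
import numpy as np
from sklearn.linear_model import Ridge

def duplicate_features(X, unit_ids, n_units):
    n, d = X.shape
    out = np.zeros((n, d * (1 + n_units)))
    out[:, :d] = X                                     # global copy
    for row, u in enumerate(unit_ids):
        out[row, d * (1 + u): d * (2 + u)] = X[row]    # this unit's copy
    return out

# Toy example: 2 units, 2 state features (last value, running mean).
X = np.array([[5.0, 5.0], [9.0, 7.0], [4.0, 4.0], [6.0, 5.0]])
units = np.array([0, 0, 1, 1])
targets = np.array([9.0, 14.0, 6.0, 7.0])
model = Ridge(alpha=1.0).fit(duplicate_features(X, units, 2), targets)
```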
Training
▪ BIG DATA: batch optimization-based techniques are difficult to use here because of the data volume.
▪ Online learning using SGD/Adagrad/Adadelta works well here, AND we can update parameters for different loss functions and regularization parameters at the same time.
▪ Other bonus for online learning: incremental learning on
data sorted by time!
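A sketch of that simultaneous-update idea: one pass over a time-ordered stream, maintaining one weight vector per regularization setting (plain SGD with an L2 penalty; the step size and penalty grid are illustrative assumptions):

```python
# One pass over time-ordered data, several regularization settings at once.
import numpy as np

rng = np.random.default_rng(3)
stream = [(rng.normal(size=2), rng.normal()) for _ in range(10_000)]

lambdas = [0.0, 0.1, 1.0]
weights = {lam: np.zeros(2) for lam in lambdas}
lr = 0.01

for x_t, y_t in stream:                       # data sorted by time
    for lam, w in weights.items():
        err = float(x_t @ w) - y_t
        w -= lr * (err * x_t + lam * w)       # squared-loss + L2 gradient
```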
Model Selection via Forward Cross-Validation
We have two sets of hyper-parameters:
1. regularization of the model coefficients.
2. amount of differencing we do before
fitting.
Just like in the classification version of the
problem, we choose the model that
empirically forecasts the best by selecting
K simulated forecast dates.
[Diagram: model checkpoints along the training stream; simulated forecasts from each checkpoint are scored against the held-out testing stream]
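A sketch of scoring a configuration by simulated forecasts, reusing the toy series y and make_training_rows helper from the earlier sketches (checkpoint placement and the candidate grid are illustrative assumptions):

```python
# Forward cross-validation: train up to each checkpoint, forecast H ahead.
import numpy as np
from sklearn.linear_model import Ridge

def simulated_forecast_loss(y, H, alpha, K=5):
    """Average absolute error over K simulated forecast dates."""
    checkpoints = np.linspace(len(y) // 2, len(y) - H - 1, K).astype(int)
    errors = []
    for c in checkpoints:
        X_tr, t_tr = make_training_rows(y[:c], H)   # only data before c
        model = Ridge(alpha=alpha).fit(X_tr, t_tr)
        x_now = np.array([[y[c - 1], y[:c].mean()]])
        errors.append(abs(model.predict(x_now)[0] - y[c - 1 + H]))
    return float(np.mean(errors))

best_alpha = min([0.1, 1.0, 10.0],
                 key=lambda a: simulated_forecast_loss(y, 7, a))
```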
Predictive Intervals with Quantile Regression
Very important to quantify uncertainty about a forecast.
Often we’d prefer that people not even look at the point
estimates.
Once you’re in the land of regularized linear regression, you can get predictive intervals simply by changing the loss function to quantile loss.
Directly optimizing the model for the correct amount of
empirical coverage!
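As a sketch, fitting the same linear features at two quantiles gives an interval directly; this uses scikit-learn's QuantileRegressor (requires scikit-learn >= 1.0) with the earlier toy helpers, and the quantile levels are illustrative:

```python
# Predictive intervals via quantile loss on the same features.
from sklearn.linear_model import QuantileRegressor

X_tr, t_tr = make_training_rows(y, H=7)
lower = QuantileRegressor(quantile=0.1, alpha=0.1).fit(X_tr, t_tr)
upper = QuantileRegressor(quantile=0.9, alpha=0.1).fit(X_tr, t_tr)
# If well calibrated, ~80% of outcomes fall in [lower, upper];
# checking that empirical coverage is exactly the criterion above.
```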
Computational Tricks
▪ online feature scaling
▪ feature hashing
▪ stochastic gradient descent (and Adagrad, Adadelta)
▪ fitting several models simultaneously on the same data
stream
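A sketch of one of these tricks, feature hashing, which maps arbitrary feature names (e.g. per-unit copies from the hierarchical trick) into a fixed-size vector with no dictionary; the per-unit naming scheme here is my illustrative assumption:

```python
# Feature hashing with scikit-learn's FeatureHasher.
from sklearn.feature_extraction import FeatureHasher

hasher = FeatureHasher(n_features=2 ** 10, input_type="dict")
rows = [
    {"last": 5.0, "mean": 5.0, "last__Canada": 5.0, "mean__Canada": 5.0},
    {"last": 9.0, "mean": 7.0, "last__US": 9.0, "mean__US": 7.0},
]
X_hashed = hasher.transform(rows)    # sparse matrix with 1024 columns
```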
Scaling to More People/Problems
1. Start with a single use-case and nail it.
2. Parameterize that solution — adding new problems should simply be configuration.
3. Work on model/fitting procedure, then run all previous
models for diagnostics.
4. Provide easy tools for model criticism — top predictive
errors, examples with under/over coverage, etc.
Conclusions
▪ Different kinds of “at scale” — people and problems are
more important than size of data
▪ If a model/technique is hard to use, it’s worth thinking
about what it would take for a non-expert to use it.
▪ Making problems look like regularized linear regression is
GREAT.
▪ Forecasting can be made into a very special kind of
supervised learning.
▪ Email me with comments/feedback: sjt@fb.com
