Machine Learning
The High Interest Credit Card of Technical Debt
SimilarWeb: The Market Intelligence Company of the Digital World
$65M funding · Founded 2007 · 6 offices · 300+ employees
A Few Words About The Papers
Machine learning: The high interest credit card of technical debt (2014)
Hidden technical debt in machine learning systems (2015)
D. Sculley, Gary Holt, Daniel Golovin, Eugene Davydov, Todd Phillips, Dietmar Ebner, Vinay Chaudhary, Michael Young, Jean-Francois Crespo, Dan Dennison
Systems engineering papers about Machine Learning systems
They give a lot of names to a lot of things (which we know is hard)
We found them in 2015 and liked them a lot
Today
What is ML and what is Technical Debt?
Sources of Technical Debt in ML systems
Mitigation
Machine Learning
[Diagram: Train takes data, an algorithm, and hyperparameters and produces a model; Predict takes new data plus the model and produces output data]
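To make the diagram concrete, here is a minimal train/predict sketch. The library (scikit-learn) and the toy data are our choices, not the talk's:

```python
# Train/predict sketch (our illustration; the slides name no library).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Data: the "real world inputs" box of the diagram.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train: data + algorithm + hyperparameters -> model ("weights and other state").
model = LogisticRegression(C=1.0, max_iter=200)  # C is a hyperparameter
model.fit(X_train, y_train)

# Predict: new data + trained model -> outputs.
predictions = model.predict(X_test)
print(predictions[:10])
```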
Why Machine Learning?
Allows us to convert data to software
We often already have data
Some problems are hard or impossible to solve otherwise
http://xkcd.com/1425/
Technical Debt
A metaphor for the long-term costs of moving quickly
Lack of testing, bad modularity, non-redundant systems, etc.
Somewhat similar to fiscal debt: there are good reasons to take it on, but it needs to be serviced
Hidden technical debt is a special, evil variant
Boundary Erosion
Boundaries in Systems Engineering
Components, interfaces, all that jazz
Think MVC, microservices
Implicitly assumed in “good” systems
Makes components easy to:
- Test
- Change
- Reason about
- Monitor
Entanglement
ML System “Inputs”
[Diagram: learning settings, hyperparameters, data prep settings, real-world inputs, other systems’ outputs, model parts, and unknowns all feed the model]
Issues:
Change in the distribution of any input influences all outputs
Adding/removing a feature changes the model and the output distribution
Any configuration parameter is just as coupled
Retraining is not reproducible
Changing Anything Changes Everything (CACE)
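A small sketch of CACE in action (our illustration, not from the papers): drop one feature, retrain, and every remaining weight shifts.

```python
# CACE sketch (our illustration): removing one feature changes every weight.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

full = LogisticRegression(max_iter=500).fit(X, y)
ablated = LogisticRegression(max_iter=500).fit(np.delete(X, 3, axis=1), y)

# The nine surviving weights all shift relative to the full model,
# so every downstream output distribution shifts with them.
print("full model weights:   ", np.round(full.coef_[0], 2))
print("ablated model weights:", np.round(ablated.coef_[0], 2))
```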
Correction Cascades
[Diagram: models A → B → C, each feeding its output to the next]
We sometimes use output from an existing model as a feature to get a small correction
Easier than training a new model
Easier than teaching an existing model new tricks
[Diagram: an improvement to model A propagates as degradation to B and C]
Model improvements cause degradation down the line
Corrections might lead to an “improvement deadlock”
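A hedged sketch of a correction cascade (the model names and data are ours): model B trains on model A's score as a feature, so "improving" A silently shifts B's input distribution.

```python
# Correction-cascade sketch (our illustration): model B consumes model A's
# score as a feature, the "small correction" pattern from the slide.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

model_a = LogisticRegression(max_iter=500).fit(X, y)
X_b = np.hstack([X, model_a.predict_proba(X)[:, [1]]])  # B's extra feature
model_b = LogisticRegression(max_iter=500).fit(X_b, y)

# "Improving" A (here: different regularization) shifts the feature B was
# trained on, which can degrade B even though A got better in isolation.
model_a_v2 = LogisticRegression(C=0.01, max_iter=500).fit(X, y)
X_b_v2 = np.hstack([X, model_a_v2.predict_proba(X)[:, [1]]])
print("B with the A it was trained on:", model_b.score(X_b, y))
print("B after A was 'improved':      ", model_b.score(X_b_v2, y))
```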
Undeclared Consumers
Outputs of ML systems include:
- Predictions
- Weights and other state
Data is easy to consume
This in turn makes it hard to improve the model
May create hidden feedback loops
Data Dependencies
[Diagram: in a regular system, each component has an explicit input and output interface]
[Diagram: in an ML system, a trainer consumes inputs and logs to produce weights, and a predictor consumes inputs plus those weights to produce outputs; the data itself is the dependency]
Unstable Dependencies
Features for training can be outputs of other models: IDF tables, Word2Vec embeddings...
Or logs, intermediate results, monitoring feeds...
But what if they change schema? Stop being updated? Disappear?
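One mitigation, in the spirit of the papers' advice to take versioned copies of unstable inputs, sketched under our own naming assumptions:

```python
# Pinned-dependency sketch (our illustration): read a versioned, checksummed
# snapshot of an upstream artifact instead of a live, unstable feed.
import hashlib
from pathlib import Path

PINNED_VERSION = "2015-09-01"                    # hypothetical snapshot id
PINNED_SHA256 = "expected-hex-digest-goes-here"  # recorded when we pinned

def load_embeddings(base_dir: str = "snapshots/word2vec") -> bytes:
    """Fail loudly if the snapshot changed content or disappeared."""
    path = Path(base_dir) / PINNED_VERSION / "embeddings.bin"  # hypothetical layout
    blob = path.read_bytes()  # raises if the dependency disappeared
    digest = hashlib.sha256(blob).hexdigest()
    if digest != PINNED_SHA256:
        raise RuntimeError(f"embedding snapshot changed underneath us: {digest}")
    return blob
```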
Underutilized Dependencies
Legacy features - nobody maintains / wants to maintain them
Bundled features - not sure which ones we need
Correlated features - may mask features with actual causality
Epsilon features - improve the result by very little
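A leave-one-feature-out ablation (our sketch, not the papers' tooling) is one cheap way to spot epsilon features worth pruning:

```python
# Feature-ablation sketch (our illustration): score each feature by how much
# cross-validated accuracy drops without it; near-zero drops flag epsilon
# features, and negative drops flag features we may be better off without.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
baseline = cross_val_score(LogisticRegression(max_iter=500), X, y).mean()

for i in range(X.shape[1]):
    without_i = cross_val_score(
        LogisticRegression(max_iter=500), np.delete(X, i, axis=1), y
    ).mean()
    print(f"feature {i}: accuracy drop {baseline - without_i:+.4f}")
```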
Software Issues
ML as Software
Actual machine learning is a lot more than modeling
[Diagram, after Sculley et al.: the model is one small box surrounded by configuration, data collection, feature extraction, data verification, process management, resource management, analysis tools, serving infrastructure, and monitoring]
Software Issues
Glue code
Pipeline jungles
Dead experimental paths
Abstraction debt
Multiple languages, systems, packages
Configuration Debt
Need to configure/test/deploy:
- Hyper-parameters
- Schema (including semantics)
- Data dependencies
Hard to understand or visualize what changed
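A minimal sketch of what "versioned, testable" configuration could look like; the fields and checks are our assumptions:

```python
# Versioned-config sketch (our illustration): configuration as typed,
# validated, diffable data instead of scattered flags.
from dataclasses import asdict, dataclass

@dataclass(frozen=True)
class TrainConfig:
    version: str         # bumped and reviewed on every change
    learning_rate: float
    features: tuple      # explicit data dependencies
    schema_version: str  # which input schema (and semantics) we expect

    def validate(self) -> None:
        assert 0 < self.learning_rate < 1, "learning_rate out of range"
        assert self.features, "a model with no features is misconfigured"

old = TrainConfig("v41", 0.05, ("clicks", "impressions"), "schema-7")
new = TrainConfig("v42", 0.05, ("clicks", "impressions", "dwell_time"), "schema-7")
new.validate()

# Diffable: make "what changed" visible at review time.
print({k: v for k, v in asdict(new).items() if asdict(old)[k] != v})
```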
Interactions
Changes in The External World
Experience has shown that the external world is rarely stable
- Word2Vec for “Pokemon”
- The population of Sudan
- Gregorian dates of holidays
Makes monitoring essential.
Makes testing very hard.
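Since monitoring is the main defense here, a sketch of a simple prediction-distribution monitor; the KS test and threshold are our choices:

```python
# Drift-monitor sketch (our illustration): alert when the current prediction
# distribution diverges from a stable reference window.
import numpy as np
from scipy.stats import ks_2samp

def check_prediction_drift(reference: np.ndarray,
                           current: np.ndarray,
                           alpha: float = 0.01) -> bool:
    """Two-sample KS test between reference and current prediction scores."""
    statistic, p_value = ks_2samp(reference, current)
    if p_value < alpha:
        print(f"ALERT: score distribution shifted (KS={statistic:.3f}, p={p_value:.2g})")
        return True
    return False

# Usage with synthetic scores: the external world moved the mean.
rng = np.random.default_rng(0)
check_prediction_drift(rng.normal(0.3, 0.1, 10_000), rng.normal(0.4, 0.1, 10_000))
```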
Direct Feedback Loops
A model sometimes influences its future training data
This is common in:
- Recommendation systems
- Ad placement
- Systems that affect the physical world
Especially hard if the change is gradual and the model updates infrequently
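One standard countermeasure (our sketch; the slides don't prescribe one) is to serve a small randomized slice of traffic so part of the future training data stays independent of the model's own choices:

```python
# Feedback-loop-break sketch (our illustration): serve a small randomized
# slice of traffic so some future training data is unbiased by the model.
import random

EXPLORE_RATE = 0.02  # hypothetical fraction of traffic served randomly

def choose_item(ranked_items: list, rng: random.Random) -> tuple:
    if rng.random() < EXPLORE_RATE:
        # Log these separately: they are independent of the model's ranking.
        return rng.choice(ranked_items), "explore"
    return ranked_items[0], "exploit"

rng = random.Random(0)
print(choose_item(["a", "b", "c"], rng))
```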
Hidden Feedback Loops
Often happen when two different systems learn from each other’s outputs
The classic example is algorithmic trading
But two independent content-generation systems running on the same page also qualify
Undeclared consumers can be a cause
...But Wait, There’s More!
Data Testing
Reproducibility
Process Management
Cultural Debt
Mitigation
Be Aware of Debt
How easily can an entirely new algorithmic approach be tested at full scale?
What is the transitive closure of all data dependencies?
How precisely can the impact of a new change to the system be measured?
Does improving one model or signal degrade others?
How quickly can new members of the team be brought up to speed?
Paying per Model
Merge mature models into a single, well-defined, well-tested system
Prune experimental code paths
Make each feature count
Monitor
Map consumers
Test data
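"Test data" can start as schema and range assertions that run before training; the columns here are hypothetical:

```python
# Data-test sketch (our illustration): cheap schema and range checks that
# fail the pipeline before bad data reaches training.
import pandas as pd

EXPECTED_COLUMNS = {"user_id", "clicks", "impressions"}  # hypothetical schema

def validate_training_data(df: pd.DataFrame) -> None:
    missing = EXPECTED_COLUMNS - set(df.columns)
    assert not missing, f"missing columns: {missing}"
    assert len(df) > 0, "empty training set"
    assert df["user_id"].notna().all(), "null user ids"
    assert (df["clicks"] <= df["impressions"]).all(), "clicks exceed impressions"

validate_training_data(pd.DataFrame({
    "user_id": [1, 2], "clicks": [3, 0], "impressions": [10, 4],
}))
```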
Paying for Systems
Configuration system: versioned, comprehensive, testable
Data dependency system: versioned, comprehensive, testable
Consolidate mature systems
Reproducibility is awesome
Pay off cultural debt
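A data dependency system can start as small as a checked-in manifest; the entries, digests, and owners below are hypothetical:

```python
# Dependency-manifest sketch (our illustration): every data input is declared
# with a version, checksum, and owner, so dependencies are reviewable and the
# transitive closure becomes a query instead of an archaeology project.
MANIFEST = {
    "idf_table":      {"version": "2015-08-15", "sha256": "aaa...", "owner": "search"},
    "w2v_embeddings": {"version": "2015-07-01", "sha256": "bbb...", "owner": "nlp"},
    "click_logs":     {"version": "daily",      "sha256": None,     "owner": "infra"},
}

def declared_dependencies() -> list:
    """Pipelines may only read inputs declared here; reviews diff this dict."""
    return sorted(MANIFEST)

print(declared_dependencies())
```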
Other Questions?
We Are Hiring!
similarweb.com/corp/jobs
