CC MMDS talk 2016

Charles Martin
Data Scientist & Machine Learning Expert
calculation | consulting

why deep learning works:
perspectives from theoretical chemistry

MMDS 2016
charles@calculationconsulting.com
calculation | consulting why deep learning works
Who Are We?
Dr. Charles H. Martin, PhD
University of Chicago, Chemical Physics
NSF Fellow in Theoretical Chemistry
Over 10 years experience in applied Machine Learning
Developed ML algos for Demand Media; the first $1B IPO since Google
Tech: Aardvark (now Google), eHow, GoDaddy, …
Wall Street: BlackRock
Fortune 500: Big Pharma, Telecom, eBay
www.calculationconsulting.com
charles@calculationconsulting.com
Data Scientists are Different
theoretical physics
machine learning specialist
experimental physics
data scientist
engineer
software, browser tech, dev ops, …
not all techies are the same
Problem: How can SGD possibly work?
Aren’t Neural Nets non-Convex ?!
can Spin Glass models suggest why ?
what other models are out there ?
expected vs. observed ?
Outline

Random Energy Model (REM)
Temperature, regularization and the glass transition
extending REM: Spin Glass of Minimal Frustration
protein folding analogy: Funneled Energy Landscapes
example: Dark Knowledge
Recent work: Spin Glass models for Deep Nets
Warning

condensed matter theory is about qualitative analogies
we may seek a toy model
a mean field theory
a phenomenological description
What problem is Deep Learning solving ?
minimize cross-entropy
https://www.ics.uci.edu/~pjsados.pdf
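For reference, cross-entropy minimization amounts to the following (a minimal NumPy sketch; the function name is ours, not from the talk):

```python
import numpy as np

def cross_entropy(y_true, p_pred, eps=1e-12):
    """Mean cross-entropy between one-hot labels and predicted class probabilities."""
    p = np.clip(p_pred, eps, 1.0)           # guard against log(0)
    return -np.mean(np.sum(y_true * np.log(p), axis=1))

y = np.array([[1.0, 0.0], [0.0, 1.0]])      # one-hot labels
p = np.array([[0.9, 0.1], [0.2, 0.8]])      # predicted probabilities
loss = cross_entropy(y, p)                  # ~0.164
```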
Problem: What is a good theoretical
model for deep networks ?
p-spin spherical glass
LeCun … 2015
L Hamiltonian (Energy function)
X Gaussian random variables
w real valued (spins) , spherical constraint
p >= 3 (degree of the Hamiltonian)
can be solved analytically, simulated easily
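For reference, the p-spin spherical glass Hamiltonian in the form studied by Choromanska et al. (normalization conventions vary; this is a sketch):

```latex
H_{N,p}(w) \;=\; \frac{1}{N^{(p-1)/2}} \sum_{i_1,\dots,i_p=1}^{N} X_{i_1 \cdots i_p}\, w_{i_1} w_{i_2} \cdots w_{i_p},
\qquad \frac{1}{N}\sum_{i=1}^{N} w_i^2 = 1,
```

with the X_{i_1 ... i_p} i.i.d. standard Gaussians and p >= 3.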
What is a spin glass ?
Frustration: constraints that cannot be satisfied
J = X = weights
S = w = spins
Energetically: all spins should be paired
why p-spin spherical glass ?
crudely: deep networks (effectively) have no local minima !
local minima
k=1 critical points
floor / ground state
k = 2 critical points
k = 3 critical points
the critical points are ordered
saddle points
why p-spin spherical glass ?
crudely: deep networks (effectively) have no local minima !
http://cims.nyu.edu/~achoroma/NonFlash/Papers/PAPER_AMMGY.pdf
any local minimum will do; the ground state is a state of overtraining
good generalization
overtraining
Early Stopping: to avoid the ground state ?
it’s easy to find the ground state; it’s hard to generalize ?
Early Stopping: to avoid the ground state ?
Current Interpretation

• finding the ground state is easy (sic); generalizing is hard
• finding the ground state is irrelevant: any local minimum will do
• the ground state is a state of overtraining
recent p-spin spherical glass results
actually: recent results (2013) on the behavior
(distribution of critical points, concentration of the means)
of an isotropic random function on a high dimensional manifold
require: the variables actually concentrate on their means
the weights are drawn from an isotropic random function
related to: old results TAP solutions (1977)
# critical points ~ TAP complexity
avoid local minima? : increase Temperature
harder problem: low Temp behavior of spin glass
What problem is Deep Learning solving ?
minimize cross-entropy of output layer
entropic effects : not just min energy
more like min free energy (divergence)
Statistical Physics and Information Theory: Neri Merhav
e.g., variational autoencoders
What problem is Deep Learning solving ?
Restricted Boltzmann Machine
can define free energy directly
A Practical Guide to Training Restricted Boltzmann Machines, Hinton
What problem is Deep Learning solving ?
Restricted Boltzmann Machine
trade off between energy and entropy
min free energy directly
A Practical Guide to Training Restricted Boltzmann Machines, Hinton
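The RBM free energy can be written in closed form because the hidden units sum out analytically (a sketch following the form in Hinton's guide; variable names are ours):

```python
import numpy as np

def rbm_free_energy(v, W, a, b):
    """F(v) = -a.v - sum_j log(1 + exp(b_j + (v W)_j)) for a binary RBM.
    a: visible biases, b: hidden biases, W: visible-by-hidden weight matrix."""
    return -(v @ a) - np.sum(np.log1p(np.exp(v @ W + b)))

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))
a = rng.normal(size=4)
b = rng.normal(size=3)
v = np.array([1.0, 0.0, 1.0, 0.0])
F = rbm_free_energy(v, W, a, b)   # equals -log sum_h exp(-E(v, h))
```

The comment can be checked by brute force: summing exp(-E(v, h)) over all 2^3 hidden configurations recovers the same value.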
https://web.stanford.edu/~montanar/RESEARCH/BOOK/partB.pdf
A related approach: the Random Energy Model (REM)
the p -> infinity limit of the p-spin spherical glass
Random Energy Model (REM)
ground state is governed by Extreme Value Statistics
http://guava.physics.uiuc.edu/~nigel/courses/563/essays2000/pogorelov.pdf
http://scitation.aip.org/content/aip/journal/jcp/111/14/10.1063/1.479951
old result from protein folding theory
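The extreme-value claim is easy to check numerically: with 2^N i.i.d. Gaussian energies of variance N/2 (standard REM conventions; this simulation is our illustration, not from the talk), the ground state concentrates near -N*sqrt(ln 2):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20                                              # "spins"; 2**N configurations
E = rng.normal(0.0, np.sqrt(N / 2.0), size=2 ** N)  # i.i.d. REM energies
ground = E.min()
predicted = -N * np.sqrt(np.log(2.0))               # extreme-value estimate of E_0
# ground lands close to predicted; subleading corrections are O(log N)
```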
REM: What is Temperature ?
We can use statistical mechanics to analyze known algorithms
I don’t mean in the traditional sense of algorithmic analysis
take Ej as the objective = loss function + regularizer
study Z: form a mean field theory;
take limits N -> inf, T -> 0
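Concretely, the object under study is the partition function over objective values E_j, with T in energy units (k_B = 1):

```latex
Z \;=\; \sum_{j} e^{-E_j/T},
\qquad
f(T) \;=\; -\lim_{N\to\infty} \frac{T}{N}\,\log Z .
```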
REM: What is Temperature ?
let E(T) be the effective energy
E(T) = E/T ~ sum of weights * activations
as T -> 0, the effective energies E(T) diverge; the weights explode
Temperature is a proxy for weight constraints
T sets the Energy Scale
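A toy illustration of T as an energy scale (our own, not from the talk): the Boltzmann weights exp(-E_j/T) spread out at high T and collapse onto the ground state as T -> 0.

```python
import numpy as np

def boltzmann(E, T):
    """Boltzmann distribution over states with energies E at temperature T."""
    w = np.exp(-(E - E.min()) / T)   # shift by the minimum for numerical stability
    return w / w.sum()

E = np.array([0.0, 1.0, 2.0])
p_hot = boltzmann(E, T=10.0)    # nearly uniform: entropy dominates
p_cold = boltzmann(E, T=0.01)   # essentially all mass on the ground state
```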
Temperature: as Weight Constraints
• traditional weight regularization
• max norm constraints (e.g., with dropout)
• batch norm regularization (2015)
we avoid situations where the weights explode
in deep networks, we temper the weights
and the distribution of the activations (i.e., local entropy)
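A minimal sketch of one such constraint, the max-norm rescaling used with dropout (our illustration; c = 3 is a commonly quoted default, not a prescription):

```python
import numpy as np

def max_norm(W, c=3.0):
    """Project each row of W back onto the L2 ball of radius c."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.minimum(1.0, c / np.maximum(norms, 1e-12))
    return W * scale

W = np.array([[6.0, 8.0],     # norm 10 -> rescaled down to norm 3
              [0.3, 0.4]])    # norm 0.5 -> left unchanged
W_c = max_norm(W)
```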
REM: a toy model for real Glasses

the glass transition is not well understood
but it is believed that entropy collapse ‘drives’ the glass transition
what is a real (structural) Glass ?

Sand + Fire = Glass
what is a real (structural) Glass ?

all liquids can be made into glasses
if we cool them fast enough
the glass transition is not a normal phase transition
not the melting point
arrangement of atoms is amorphous; not completely random
different cooling rates produce different glassy states
universal phenomena; not universal physics
molecular details affect the thermodynamics
REM: the Glass Transition

Entropy collapses when T <~ Tc
Phase Diagram: entropy density
energy density
free energy density
https://web.stanford.edu/~montanar/RESEARCH/BOOK/partB.pdf
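In Derrida's conventions (energies i.i.d. Gaussian with variance N/2), the REM entropy density and transition temperature are:

```latex
s(e) \;=\; \log 2 - e^2, \quad |e| \le \sqrt{\log 2},
\qquad
T_c \;=\; \frac{1}{2\sqrt{\log 2}},
```

so the entropy density vanishes at the ground-state energy e_0 = -sqrt(log 2); below T_c the system freezes into its lowest states.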
REM: Dynamics on the Energy Landscape

let us assume some states trap the solver for some time;
of course, there is a great effort to design solvers that can avoid traps
Energy Landscapes and Protein Folding

let us assume some states trap the solver in state E(j) for a short time
and the transitions E(j) -> E(j-1) are governed by finite, reversible transitions
(i.e. SGD oscillates back and forth for a while)
classic result(s): for T near the glass Temp (Tc)
the traversal times are slower than exponential !
in a physical system, like a protein or polymer,
it would take longer than the known lifetime of the universe
to find the ground (folded) state
Protein Folding: the Levinthal Paradox 

folding could take longer than the known lifetime of the universe ?
http://arxiv.org/pdf/cond-mat/9904060v2.pdf
Protein Folding: around the Levinthal Paradox

Old analogy between Protein folding and Hopfield Associative Memories
Natural pattern recognition could
• use a mechanism with a glass Temp (Tc) that is as low as possible
• avoid the glass transition entirely, via energetics
Nature (i.e. folding) cannot operate this way!

Spin Glasses: Minimizing Frustration

http://www.nature.com/nsmb/journal/v4/n11/pdf/nsb1197-871.pdf
Spin Glasses vs. Disordered Ferromagnets

http://arxiv.org/pdf/cond-mat/9904060v2.pdf
the Spin Glass of Minimal Frustration 

REM + strongly correlated ground state = no glass transition
https://arxiv.org/pdf/1312.7283.pdf
the Spin Glass of Minimal Frustration 

Training a model induces an energy gap, with few local minima
http://arxiv.org/pdf/1312.0867v1.pdf
Energy Funnels: Entropy vs Energy 

there is a tradeoff between Energy and Entropy minimization
Energy Landscape Theory of Protein Folding
there is a tradeoff between Energy and Entropy minimization
Energy Landscape Theory of Protein Folding
[figure: a funneled landscape vs. a glassy surface; Levinthal paradox and vanishing gradients on the glassy side, rugged convexity and the energy / entropy tradeoff on the funneled side]
Avoids the glass transition by having more favorable energetics
RBMs: Entropy Energy Tradeoff
[figure: RBM on MNIST; entropy and total free energy curves, aligned for comparison (entropy shifted by 200)]
Entropy drops off much faster than total Free Energy
Dark Knowledge: an Energy Funnel ?

784 -> 800 -> 800 -> 10 MLP on MNIST (10,000 test cases, 10 classes): 146 errors
Distilled 784 -> 800 -> 800 -> 10, fit to ensemble soft-max probabilities: 99 errors
same entropy (capacity); better loss function
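The soft targets behind Dark Knowledge are a temperature-scaled softmax over the teacher's logits (a sketch of the idea; the logits and T = 4 are illustrative, not from the talk):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; larger T gives softer distributions."""
    z = z / T
    z = z - z.max()                  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

teacher_logits = np.array([5.0, 2.0, -1.0])
hard = softmax(teacher_logits, T=1.0)   # sharply peaked on the top class
soft = softmax(teacher_logits, T=4.0)   # moves mass onto the "dark" classes
```

The student is trained against `soft`, so the near-zero probabilities of the wrong classes still carry information.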
Adversarial Deep Nets: an Energy Funnel ?

Discriminator learns a complex loss function
Generator: fake data
Discriminator: fake vs real ?
http://soumith.ch/eyescream/
Summary

Random Energy Model (REM): simpler theoretical model
Glass Transition: temperature ~ weight constraints
extending REM: Spin Glass of Minimal Frustration
possible examples: Dark Knowledge
Funneled Energy Landscapes
Adversarial Deep Nets
charles@calculationconsulting.com

Recommended

Capsule Networks by
Capsule NetworksCapsule Networks
Capsule NetworksCharles Martin
6.9K views45 slides
Cc stat phys draft by
Cc stat phys draftCc stat phys draft
Cc stat phys draftCharles Martin
779 views45 slides
Why Deep Learning Works: Self Regularization in Deep Neural Networks by
Why Deep Learning Works: Self Regularization in Deep Neural NetworksWhy Deep Learning Works: Self Regularization in Deep Neural Networks
Why Deep Learning Works: Self Regularization in Deep Neural NetworksCharles Martin
972 views41 slides
Why Deep Learning Works: Self Regularization in Deep Neural Networks by
Why Deep Learning Works: Self Regularization in Deep Neural NetworksWhy Deep Learning Works: Self Regularization in Deep Neural Networks
Why Deep Learning Works: Self Regularization in Deep Neural NetworksCharles Martin
1K views42 slides
Why Deep Learning Works: Self Regularization in Deep Neural Networks by
Why Deep Learning Works: Self Regularization in Deep Neural Networks Why Deep Learning Works: Self Regularization in Deep Neural Networks
Why Deep Learning Works: Self Regularization in Deep Neural Networks Charles Martin
3.4K views49 slides
Why Deep Learning Works: Dec 13, 2018 at ICSI, UC Berkeley by
Why Deep Learning Works: Dec 13, 2018 at ICSI, UC BerkeleyWhy Deep Learning Works: Dec 13, 2018 at ICSI, UC Berkeley
Why Deep Learning Works: Dec 13, 2018 at ICSI, UC BerkeleyCharles Martin
744 views67 slides

More Related Content

What's hot

Search relevance by
Search relevanceSearch relevance
Search relevanceCharles Martin
422 views42 slides
Statistical Mechanics Methods for Discovering Knowledge from Production-Scale... by
Statistical Mechanics Methods for Discovering Knowledge from Production-Scale...Statistical Mechanics Methods for Discovering Knowledge from Production-Scale...
Statistical Mechanics Methods for Discovering Knowledge from Production-Scale...Charles Martin
739 views98 slides
Georgetown B-school Talk 2021 by
Georgetown B-school Talk  2021Georgetown B-school Talk  2021
Georgetown B-school Talk 2021Charles Martin
173 views39 slides
Weight watcher Bay Area ACM Feb 28, 2022 by
Weight watcher Bay Area ACM Feb 28, 2022 Weight watcher Bay Area ACM Feb 28, 2022
Weight watcher Bay Area ACM Feb 28, 2022 Charles Martin
734 views63 slides
ENS Macrh 2022.pdf by
ENS Macrh 2022.pdfENS Macrh 2022.pdf
ENS Macrh 2022.pdfCharles Martin
85 views55 slides
CARI-2020, Application of LSTM architectures for next frame forecasting in Se... by
CARI-2020, Application of LSTM architectures for next frame forecasting in Se...CARI-2020, Application of LSTM architectures for next frame forecasting in Se...
CARI-2020, Application of LSTM architectures for next frame forecasting in Se...Mokhtar SELLAMI
137 views20 slides

What's hot(20)

Statistical Mechanics Methods for Discovering Knowledge from Production-Scale... by Charles Martin
Statistical Mechanics Methods for Discovering Knowledge from Production-Scale...Statistical Mechanics Methods for Discovering Knowledge from Production-Scale...
Statistical Mechanics Methods for Discovering Knowledge from Production-Scale...
Charles Martin739 views
Georgetown B-school Talk 2021 by Charles Martin
Georgetown B-school Talk  2021Georgetown B-school Talk  2021
Georgetown B-school Talk 2021
Charles Martin173 views
Weight watcher Bay Area ACM Feb 28, 2022 by Charles Martin
Weight watcher Bay Area ACM Feb 28, 2022 Weight watcher Bay Area ACM Feb 28, 2022
Weight watcher Bay Area ACM Feb 28, 2022
Charles Martin734 views
CARI-2020, Application of LSTM architectures for next frame forecasting in Se... by Mokhtar SELLAMI
CARI-2020, Application of LSTM architectures for next frame forecasting in Se...CARI-2020, Application of LSTM architectures for next frame forecasting in Se...
CARI-2020, Application of LSTM architectures for next frame forecasting in Se...
Mokhtar SELLAMI137 views
Dimensionality reduction with UMAP by Jakub Bartczuk
Dimensionality reduction with UMAPDimensionality reduction with UMAP
Dimensionality reduction with UMAP
Jakub Bartczuk421 views
Cari2020 Parallel Hybridization for SAT: An Efficient Combination of Search S... by Mokhtar SELLAMI
Cari2020 Parallel Hybridization for SAT: An Efficient Combination of Search S...Cari2020 Parallel Hybridization for SAT: An Efficient Combination of Search S...
Cari2020 Parallel Hybridization for SAT: An Efficient Combination of Search S...
Mokhtar SELLAMI77 views
Neural Networks: Least Mean Square (LSM) Algorithm by Mostafa G. M. Mostafa
Neural Networks: Least Mean Square (LSM) AlgorithmNeural Networks: Least Mean Square (LSM) Algorithm
Neural Networks: Least Mean Square (LSM) Algorithm
Mostafa G. M. Mostafa14.3K views
Training and Inference for Deep Gaussian Processes by Keyon Vafa
Training and Inference for Deep Gaussian ProcessesTraining and Inference for Deep Gaussian Processes
Training and Inference for Deep Gaussian Processes
Keyon Vafa5.6K views
Graphical Model Selection for Big Data by Alexander Jung
Graphical Model Selection for Big DataGraphical Model Selection for Big Data
Graphical Model Selection for Big Data
Alexander Jung210 views
Spectral cnn by Brian Kim
Spectral cnnSpectral cnn
Spectral cnn
Brian Kim743 views
Different techniques for speech recognition by yashi saxena
Different  techniques for speech recognitionDifferent  techniques for speech recognition
Different techniques for speech recognition
yashi saxena508 views

Similar to CC mmds talk 2106

WeightWatcher LLM Update by
WeightWatcher LLM UpdateWeightWatcher LLM Update
WeightWatcher LLM UpdateCharles Martin
81 views44 slides
Semet Gecco06 by
Semet Gecco06Semet Gecco06
Semet Gecco06ysemet
323 views23 slides
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,... by
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...
QMC Program: Trends and Advances in Monte Carlo Sampling Algorithms Workshop,...The Statistical and Applied Mathematical Sciences Institute
276 views40 slides
Essay On Hhjkhhj by
Essay On HhjkhhjEssay On Hhjkhhj
Essay On HhjkhhjAmy Bakewell
2 views45 slides
Mit2 72s09 lec02 (1) by
Mit2 72s09 lec02 (1)Mit2 72s09 lec02 (1)
Mit2 72s09 lec02 (1)Jasim Almuhandis
67 views41 slides
Discussion of PMCMC by
Discussion of PMCMCDiscussion of PMCMC
Discussion of PMCMCChristian Robert
1.2K views19 slides

Similar to CC mmds talk 2106(20)

Semet Gecco06 by ysemet
Semet Gecco06Semet Gecco06
Semet Gecco06
ysemet323 views
The Effects Of Performing Local Pwht And Considerations by Mary Brown
The Effects Of Performing Local Pwht And ConsiderationsThe Effects Of Performing Local Pwht And Considerations
The Effects Of Performing Local Pwht And Considerations
Mary Brown2 views
Quantum Business in Japanese Market by Yuichiro MInato
Quantum Business in Japanese MarketQuantum Business in Japanese Market
Quantum Business in Japanese Market
Yuichiro MInato298 views
2014.10.dartmouth by Qiqi Wang
2014.10.dartmouth2014.10.dartmouth
2014.10.dartmouth
Qiqi Wang621 views
Mathematics Colloquium, UCSC by dongwook159
Mathematics Colloquium, UCSCMathematics Colloquium, UCSC
Mathematics Colloquium, UCSC
dongwook1591.1K views
Introduction of Quantum Annealing and D-Wave Machines by Arithmer Inc.
Introduction of Quantum Annealing and D-Wave MachinesIntroduction of Quantum Annealing and D-Wave Machines
Introduction of Quantum Annealing and D-Wave Machines
Arithmer Inc.734 views
Dynamic mechanical analysis(DMA) by Manar Alfhad
Dynamic mechanical analysis(DMA)Dynamic mechanical analysis(DMA)
Dynamic mechanical analysis(DMA)
Manar Alfhad2.6K views
End of Sprint 5 by dm_work
End of Sprint 5End of Sprint 5
End of Sprint 5
dm_work361 views
EOS5 Demo by dm_work
EOS5 DemoEOS5 Demo
EOS5 Demo
dm_work316 views

More from Charles Martin

LLM avalanche June 2023.pdf by
LLM avalanche June 2023.pdfLLM avalanche June 2023.pdf
LLM avalanche June 2023.pdfCharles Martin
260 views13 slides
WeightWatcher Introduction by
WeightWatcher IntroductionWeightWatcher Introduction
WeightWatcher IntroductionCharles Martin
634 views8 slides
WeightWatcher Update: January 2021 by
WeightWatcher Update:  January 2021WeightWatcher Update:  January 2021
WeightWatcher Update: January 2021Charles Martin
165 views40 slides
Building AI Products: Delivery Vs Discovery by
Building AI Products: Delivery Vs Discovery Building AI Products: Delivery Vs Discovery
Building AI Products: Delivery Vs Discovery Charles Martin
349 views27 slides
AI and Machine Learning for the Lean Start Up by
AI and Machine Learning for the Lean Start UpAI and Machine Learning for the Lean Start Up
AI and Machine Learning for the Lean Start UpCharles Martin
374 views49 slides
Palo alto university rotary club talk Sep 29, 2107 by
Palo alto university rotary club talk Sep 29, 2107Palo alto university rotary club talk Sep 29, 2107
Palo alto university rotary club talk Sep 29, 2107Charles Martin
423 views24 slides

More from Charles Martin(9)

WeightWatcher Update: January 2021 by Charles Martin
WeightWatcher Update:  January 2021WeightWatcher Update:  January 2021
WeightWatcher Update: January 2021
Charles Martin165 views
Building AI Products: Delivery Vs Discovery by Charles Martin
Building AI Products: Delivery Vs Discovery Building AI Products: Delivery Vs Discovery
Building AI Products: Delivery Vs Discovery
Charles Martin349 views
AI and Machine Learning for the Lean Start Up by Charles Martin
AI and Machine Learning for the Lean Start UpAI and Machine Learning for the Lean Start Up
AI and Machine Learning for the Lean Start Up
Charles Martin374 views
Palo alto university rotary club talk Sep 29, 2107 by Charles Martin
Palo alto university rotary club talk Sep 29, 2107Palo alto university rotary club talk Sep 29, 2107
Palo alto university rotary club talk Sep 29, 2107
Charles Martin423 views
Applied machine learning for search engine relevance 3 by Charles Martin
Applied machine learning for search engine relevance 3Applied machine learning for search engine relevance 3
Applied machine learning for search engine relevance 3
Charles Martin1.7K views

Recently uploaded

NUTRITION IN BACTERIA.pdf by
NUTRITION IN BACTERIA.pdfNUTRITION IN BACTERIA.pdf
NUTRITION IN BACTERIA.pdfNandadulalSannigrahi
32 views14 slides
Small ruminant keepers’ knowledge, attitudes and practices towards peste des ... by
Small ruminant keepers’ knowledge, attitudes and practices towards peste des ...Small ruminant keepers’ knowledge, attitudes and practices towards peste des ...
Small ruminant keepers’ knowledge, attitudes and practices towards peste des ...ILRI
5 views6 slides
A giant thin stellar stream in the Coma Galaxy Cluster by
A giant thin stellar stream in the Coma Galaxy ClusterA giant thin stellar stream in the Coma Galaxy Cluster
A giant thin stellar stream in the Coma Galaxy ClusterSérgio Sacani
17 views14 slides
Structure of purines and pyrimidines - Jahnvi arora (11228108), mmdu ,mullana... by
Structure of purines and pyrimidines - Jahnvi arora (11228108), mmdu ,mullana...Structure of purines and pyrimidines - Jahnvi arora (11228108), mmdu ,mullana...
Structure of purines and pyrimidines - Jahnvi arora (11228108), mmdu ,mullana...jahnviarora989
6 views12 slides
A Ready-to-Analyze High-Plex Spatial Signature Development Workflow for Cance... by
A Ready-to-Analyze High-Plex Spatial Signature Development Workflow for Cance...A Ready-to-Analyze High-Plex Spatial Signature Development Workflow for Cance...
A Ready-to-Analyze High-Plex Spatial Signature Development Workflow for Cance...InsideScientific
78 views62 slides
Disinfectants & Antiseptic by
Disinfectants & AntisepticDisinfectants & Antiseptic
Disinfectants & AntisepticSanket P Shinde
62 views36 slides

Recently uploaded(20)

Small ruminant keepers’ knowledge, attitudes and practices towards peste des ... by ILRI
Small ruminant keepers’ knowledge, attitudes and practices towards peste des ...Small ruminant keepers’ knowledge, attitudes and practices towards peste des ...
Small ruminant keepers’ knowledge, attitudes and practices towards peste des ...
ILRI5 views
A giant thin stellar stream in the Coma Galaxy Cluster by Sérgio Sacani
A giant thin stellar stream in the Coma Galaxy ClusterA giant thin stellar stream in the Coma Galaxy Cluster
A giant thin stellar stream in the Coma Galaxy Cluster
Sérgio Sacani17 views
Structure of purines and pyrimidines - Jahnvi arora (11228108), mmdu ,mullana... by jahnviarora989
Structure of purines and pyrimidines - Jahnvi arora (11228108), mmdu ,mullana...Structure of purines and pyrimidines - Jahnvi arora (11228108), mmdu ,mullana...
Structure of purines and pyrimidines - Jahnvi arora (11228108), mmdu ,mullana...
jahnviarora9896 views
A Ready-to-Analyze High-Plex Spatial Signature Development Workflow for Cance... by InsideScientific
A Ready-to-Analyze High-Plex Spatial Signature Development Workflow for Cance...A Ready-to-Analyze High-Plex Spatial Signature Development Workflow for Cance...
A Ready-to-Analyze High-Plex Spatial Signature Development Workflow for Cance...
InsideScientific78 views
Pollination By Nagapradheesh.M.pptx by MNAGAPRADHEESH
Pollination By Nagapradheesh.M.pptxPollination By Nagapradheesh.M.pptx
Pollination By Nagapradheesh.M.pptx
MNAGAPRADHEESH19 views
RemeOs science and clinical evidence by PetrusViitanen1
RemeOs science and clinical evidenceRemeOs science and clinical evidence
RemeOs science and clinical evidence
PetrusViitanen147 views
Small ruminant keepers’ knowledge, attitudes and practices towards peste des ... by ILRI
Small ruminant keepers’ knowledge, attitudes and practices towards peste des ...Small ruminant keepers’ knowledge, attitudes and practices towards peste des ...
Small ruminant keepers’ knowledge, attitudes and practices towards peste des ...
ILRI7 views
Open Access Publishing in Astrophysics by Peter Coles
Open Access Publishing in AstrophysicsOpen Access Publishing in Astrophysics
Open Access Publishing in Astrophysics
Peter Coles1.2K views
Light Pollution for LVIS students by CWBarthlmew
Light Pollution for LVIS studentsLight Pollution for LVIS students
Light Pollution for LVIS students
CWBarthlmew9 views
CSF -SHEEBA.D presentation.pptx by SheebaD7
CSF -SHEEBA.D presentation.pptxCSF -SHEEBA.D presentation.pptx
CSF -SHEEBA.D presentation.pptx
SheebaD715 views
ELECTRON TRANSPORT CHAIN by DEEKSHA RANI
ELECTRON TRANSPORT CHAINELECTRON TRANSPORT CHAIN
ELECTRON TRANSPORT CHAIN
DEEKSHA RANI10 views
별헤는 사람들 2023년 12월호 전명원 교수 자료 by sciencepeople
별헤는 사람들 2023년 12월호 전명원 교수 자료별헤는 사람들 2023년 12월호 전명원 교수 자료
별헤는 사람들 2023년 12월호 전명원 교수 자료
sciencepeople58 views
Exploring the nature and synchronicity of early cluster formation in the Larg... by Sérgio Sacani
Exploring the nature and synchronicity of early cluster formation in the Larg...Exploring the nature and synchronicity of early cluster formation in the Larg...
Exploring the nature and synchronicity of early cluster formation in the Larg...
Sérgio Sacani910 views
Conventional and non-conventional methods for improvement of cucurbits.pptx by gandhi976
Conventional and non-conventional methods for improvement of cucurbits.pptxConventional and non-conventional methods for improvement of cucurbits.pptx
Conventional and non-conventional methods for improvement of cucurbits.pptx
gandhi97620 views

CC mmds talk 2106

  • 1. calculation | consulting why deep learning works: perspectives from theoretical chemistry (TM) c|c (TM) charles@calculationconsulting.com
  • 2. calculation|consulting MMDS 2016 why deep learning works: perspectives from theoretical chemistry (TM) charles@calculationconsulting.com
  • 3. calculation | consulting why deep learning works Who Are We? c|c (TM) Dr. Charles H. Martin, PhD University of Chicago, Chemical Physics NSF Fellow in Theoretical Chemistry Over 10 years experience in applied Machine Learning Developed ML algos for Demand Media; the first $1B IPO since Google Tech: Aardvark (now Google), eHow, GoDaddy, … Wall Street: BlackRock Fortune 500: Big Pharma, Telecom, eBay www.calculationconsulting.com charles@calculationconsulting.com (TM) 3
  • 4. Data Scientists are Different c|c (TM) theoretical physics machine learning specialist (TM) 4 experimental physics data scientist engineer software, browser tech, dev ops, … not all techies are the same calculation | consulting why deep learning works
  • 5. c|c (TM) Problem: How can SGD possibly work? Aren’t Neural Nets non-Convex ?! (TM) 5 calculation | consulting why deep learning works can Spin Glass models suggest why ? what other models are out there ? expected observed ?
  • 6. c|c (TM) (TM) 6 calculation | consulting why deep learning works Outline
 Random Energy Model (REM) Temperature, regularization and the glass transition extending REM: Spin Glass of Minimal Frustration protein folding analogy: Funneled Energy Landscapes example: Dark Knowledge Recent work: Spin Glass models for Deep Nets
  • 7. c|c (TM) (TM) 7 calculation | consulting why deep learning works Warning
 condensed matter theory is about qualitative analogies we may seek a toy model a mean field theory a phenomenological description
  • 8. c|c (TM) What problem is Deep Learning solving ? (TM) 8 calculation | consulting why deep learning works minimize cross-entropy https://www.ics.uci.edu/~pjsados.pdf
  • 9. c|c (TM) Problem: What is a good theoretical model for deep networks ? (TM) 9 calculation | consulting why deep learning works p-spin spherical glass LeCun … 2015 L Hamiltonian (Energy function) X Gaussian random variables w real valued (spins) , spherical constraint H >= 3 (p) can be solved analytically, simulated easily
  • 10. c|c (TM) What is a spin glass ? (TM) 10 calculation | consulting why deep learning works Frustration: constraints that can not be satisfied J = X = weights S = w = spins Energetically: all spins should be paired
  • 11. c|c (TM) why p-spin spherical glass ? (TM) 11 calculation | consulting why deep learning works crudely: deep networks (effectively) have no local minima ! local minima k=1 critical points floor / ground state k = 2 critical points k = 3 critical points the critical points are ordered saddle points
  • 12. c|c (TM) why p-spin spherical glass ? (TM) 12 calculation | consulting why deep learning works crudely: deep networks (effectively) have no local minima ! http://cims.nyu.edu/~achoroma/NonFlash/Papers/PAPER_AMMGY.pdf ap
  • 13. c|c (TM) (TM) 13 calculation | consulting why deep learning works any local minima will do; the ground state is a state of overtraining good generalization overtraining Early Stopping: to avoid the ground state ?
  • 14. c|c (TM) (TM) 14 calculation | consulting why deep learning works it’s easy to find the ground state; it’s hard to generalize ? Early Stopping: to avoid the ground state ?
  • 15. c|c (TM) Current Interpretation
 (TM) 15 calculation | consulting why deep learning works •finding the ground state is easy (sic); generalizing is hard •finding the ground state is irrelevant: any local minima will do •the ground state is a state over training
  • 16. c|c (TM) recent p-spin spherical glass results (TM) 16 calculation | consulting why deep learning works actually: recent results (2013) on the behavior (distribution of critical points, concentration of the means) of an isotropic random function on a high dimensional manifold require: the variables actually concentrate on their means the weights are drawn from isotropic random function related to: old results TAP solutions (1977) # critical points ~ TAP complexity avoid local minima? : increase Temperature harder problem: low Temp behavior of spin glass
  • 17. c|c (TM) What problem is Deep Learning solving ? (TM) 17 calculation | consulting why deep learning works minimize cross-entropy of output layer entropic effects : not just min energy more like min free energy (divergence) Statistical Physics and InformationTheory: Neri Merhav i.e. variational auto encoders
  • 18. c|c (TM) What problem is Deep Learning solving ? (TM) 18 calculation | consulting why deep learning works Restricted Boltzmann Machine can define free energy directly A Practical Guide toTraining Restricted Boltzmann Machines, Hinton
  • 19. c|c (TM) What problem is Deep Learning solving ? (TM) 19 calculation | consulting why deep learning works Restricted Boltzmann Machine trade off between energy and entropy min free energy directly A Practical Guide toTraining Restricted Boltzmann Machines, Hinton
  • 20. A related approach: the Random Energy Model (REM), the p -> inf limit of the p-spin spherical glass. https://web.stanford.edu/~montanar/RESEARCH/BOOK/partB.pdf
  • 21. Random Energy Model (REM): the ground state is governed by Extreme Value Statistics, an old result from protein folding theory. http://guava.physics.uiuc.edu/~nigel/courses/563/essays2000/pogorelov.pdf http://scitation.aip.org/content/aip/journal/jcp/111/14/10.1063/1.479951
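The extreme-value claim is easy to check numerically: in the REM the 2^N energy levels are i.i.d. Gaussians (variance N/2), so the ground state is the minimum of exponentially many i.i.d. draws and concentrates near -N sqrt(ln 2). A quick numpy sketch; N is kept small only so the simulation fits in memory:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 20                       # number of spins (small, for illustration)
M = 2 ** N                   # number of energy levels
# REM: energies are i.i.d. Gaussians with variance N/2
E = rng.normal(scale=np.sqrt(N / 2), size=M)

E_min = E.min()
# extreme-value prediction for the ground-state energy: E_0 ~ -N * sqrt(ln 2)
E_pred = -N * np.sqrt(np.log(2))
print(E_min, E_pred)
```

At N = 20 the sample minimum already lands within roughly 10% of the prediction; finite-size corrections shrink as N grows.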
  • 22. REM: What is Temperature? We can use statistical mechanics to analyze known algorithms (not in the traditional sense of algorithmic analysis). Take E_j to be the objective = loss function + regularizer; study the partition function Z; form a mean field theory; take the limits N -> inf, T -> 0.
  • 23. REM: What is Temperature? Let E(T) be the effective energy: E(T) = E/T ~ sum of weights x activations. As T -> 0 the effective energies diverge and the weights explode. Temperature is a proxy for weight constraints; T sets the Energy Scale.
  • 24. Temperature as Weight Constraints: traditional weight regularization; max-norm constraints (i.e. with dropout); batch-norm regularization (2015). We avoid situations where the weights explode; in deep networks we temper the weights and the distribution of the activations (i.e. local entropy).
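Temperature-as-weight-constraint in code: the max-norm clipping used with dropout is just a projection of each unit's incoming weight vector back onto a ball. A plain numpy sketch; the radius c = 3 and the layer shape are arbitrary illustrative choices:

```python
import numpy as np

def max_norm(W, c=3.0):
    """Project each column (the incoming weight vector of one unit)
    back onto the ball ||w|| <= c. Capping the norms caps the
    effective energy scale: it plays the role of keeping the
    temperature from going to zero (weights cannot explode)."""
    norms = np.linalg.norm(W, axis=0, keepdims=True)
    scale = np.minimum(1.0, c / np.maximum(norms, 1e-12))
    return W * scale

rng = np.random.default_rng(1)
W = rng.normal(scale=5.0, size=(100, 10))  # deliberately exploded weights
W = max_norm(W, c=3.0)
print(np.linalg.norm(W, axis=0).max())     # <= 3.0 after projection
```

Columns already inside the ball are left untouched, so the projection only acts when a weight vector tries to blow up.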
  • 25. REM: a toy model for real Glasses. The glass transition is not well understood, but it is believed that entropy collapse 'drives' it.
  • 26. What is a real (structural) Glass? Sand + Fire = Glass.
  • 27. What is a real (structural) Glass? All liquids can be made into glasses if we cool them fast enough. The glass transition is not a normal phase transition (it is not the melting point). The arrangement of atoms is amorphous, not completely random. Different cooling rates produce different glassy states. Universal phenomena, but not universal physics: molecular details affect the thermodynamics.
  • 28. REM: the Glass Transition. Entropy collapses when T <~ Tc. Phase diagram: entropy density, energy density, free energy density. https://web.stanford.edu/~montanar/RESEARCH/BOOK/partB.pdf
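The entropy collapse has a compact closed form. For the standard REM (2^N i.i.d. Gaussian levels of variance N/2, as in the Montanari chapter linked on this slide), the textbook entropy and free energy densities are:

```latex
s(e) = \log 2 - e^2, \qquad e \in \bigl[-\sqrt{\log 2},\, \sqrt{\log 2}\bigr]

f(T) =
\begin{cases}
  -T\log 2 - \dfrac{1}{4T}, & T > T_c \\[6pt]
  -\sqrt{\log 2}, & T \le T_c
\end{cases}
\qquad
T_c = \frac{1}{2\sqrt{\log 2}}
```

Below T_c the entropy density vanishes (s(e_0) = 0 at e_0 = -sqrt(log 2)) and the free energy freezes at its ground-state value: that is the entropy collapse the slide refers to.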
  • 29. REM: Dynamics on the Energy Landscape. Let us assume some states trap the solver for some time; of course, there is a great effort to design solvers that can avoid traps.
  • 30. Energy Landscapes and Protein Folding. Let us assume some states trap the solver in state E(j) for a short time, and that the transitions E(j) -> E(j-1) are governed by finite, reversible transitions (i.e. SGD oscillates back and forth for a while). Classic result: for T near the glass temperature Tc, the traversal times are slower than exponential! In a physical system, like a protein or polymer, it would take longer than the known lifetime of the universe to find the ground (folded) state.
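The "slower than exponential" behavior can be seen in a standard trap-model picture (not taken from the talk, just an illustrative toy): if trap depths E are exponentially distributed with scale Tc, the Arrhenius escape times tau = e^{E/T} are power-law distributed with tail exponent T/Tc, so the mean trap time is finite above Tc and diverges below it. A numpy sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(7)
Tc = 1.0
# trap depths E ~ Exp(Tc), the standard trap-model assumption
depths = rng.exponential(scale=Tc, size=200_000)

mean_tau = {}
for T in (2.0, 0.5):              # above and below the glass temperature
    tau = np.exp(depths / T)      # Arrhenius escape times tau = e^{E/T}
    # tau is power-law distributed, P(tau) ~ tau^-(1 + T/Tc):
    # the mean is finite for T > Tc but diverges for T < Tc,
    # so the dynamics become slower than exponential near Tc
    mean_tau[T] = tau.mean()
print(mean_tau)
```

Above Tc the sample mean settles near its finite value; below Tc it is dominated by the deepest trap encountered and grows without bound with the sample size (aging).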
  • 31. Protein Folding: the Levinthal Paradox. Folding could take longer than the known lifetime of the universe?
  • 32. Protein Folding: around the Levinthal Paradox. There is an old analogy between protein folding and Hopfield associative memories. Natural pattern recognition could either use a mechanism with a glass temperature Tc that is as low as possible, or avoid the glass transition entirely, via energetics. Nature (i.e. folding) cannot operate this way! http://arxiv.org/pdf/cond-mat/9904060v2.pdf
  • 33. Spin Glasses: Minimizing Frustration. http://www.nature.com/nsmb/journal/v4/n11/pdf/nsb1197-871.pdf
  • 34. Spin Glasses: Minimizing Frustration (continued). http://www.nature.com/nsmb/journal/v4/n11/pdf/nsb1197-871.pdf
  • 35. Spin Glasses vs. Disordered Ferromagnets. http://arxiv.org/pdf/cond-mat/9904060v2.pdf
  • 36. The Spin Glass of Minimal Frustration. REM + strongly correlated ground state = no glass transition. https://arxiv.org/pdf/1312.7283.pdf
  • 37. The Spin Glass of Minimal Frustration. Training a model induces an energy gap, with few local minima. http://arxiv.org/pdf/1312.0867v1.pdf
  • 38. Energy Funnels: Entropy vs. Energy. There is a tradeoff between energy minimization and entropy minimization.
  • 39. Energy Landscape Theory of Protein Folding. There is a tradeoff between energy minimization and entropy minimization.
  • 40. Energy Landscape Theory of Protein Folding. Folding avoids the glass transition by having more favorable energetics. The analogy: Levinthal paradox / glassy surface ~ vanishing gradients; funneled landscape / rugged convexity ~ the energy/entropy tradeoff.
  • 41. RBMs: Entropy vs. Energy Tradeoff. RBM on MNIST (curves aligned for comparison: entropy - 200). The entropy drops off much faster than the total free energy.
  • 42. Dark Knowledge: an Energy Funnel? MLP on MNIST, 784 -> 800 -> 800 -> 10: 146 errors on 10,000 test cases (10 classes). Distilled net, same 784 -> 800 -> 800 -> 10 architecture: 99 errors. Same entropy (capacity), but a better loss function: fit to the ensemble's soft-max probabilities.
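The Dark Knowledge recipe softens the teacher's logits with a temperature T and trains the small net on those probabilities. A minimal numpy sketch of the softened soft-max target; the logit values are hypothetical:

```python
import numpy as np

def softmax(z, T=1.0):
    """Soft-max at temperature T. A large T spreads probability mass
    onto the 'wrong' classes, exposing the teacher's dark knowledge
    about which mistakes are plausible."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()            # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([8.0, 2.0, 0.0])  # a confident teacher prediction
print(softmax(logits, T=1.0))       # nearly one-hot
print(softmax(logits, T=5.0))       # soft targets used for distillation
```

The distilled net is trained on the T = 5 distribution (plus the true labels), which is why it can match the teacher with the same capacity.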
  • 43. Adversarial Deep Nets: an Energy Funnel? Generator: fake data; Discriminator: fake vs. real? The Discriminator learns a complex loss function. http://soumith.ch/eyescream/
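The "learned loss function" is just the discriminator's binary cross-entropy. A minimal numpy sketch of that objective; the discriminator outputs below are hypothetical numbers, not from a trained model:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy the discriminator minimizes:
       -E[log D(x_real)] - E[log(1 - D(G(z)))]
    Training D on this objective is what builds the complex,
    data-dependent loss surface the generator then descends."""
    eps = 1e-12                          # avoid log(0)
    return -(np.log(d_real + eps).mean()
             + np.log(1.0 - d_fake + eps).mean())

# hypothetical discriminator outputs on real vs. generated batches
d_real = np.array([0.9, 0.8, 0.95])
d_fake = np.array([0.1, 0.2, 0.05])
print(discriminator_loss(d_real, d_fake))
```

A maximally confused discriminator (D = 0.5 everywhere) sits at the loss value 2 log 2, the equilibrium the original GAN analysis predicts.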
  • 44. Summary. Random Energy Model (REM): a simpler theoretical model. Glass Transition: temperature ~ weight constraints. Extending REM: the Spin Glass of Minimal Frustration; Funneled Energy Landscapes. Possible examples: Dark Knowledge, Adversarial Deep Nets.