Bayesian Optimization Framework for Urban
Transportation System Simulators
MUMS Transition Workshop at SAMSI
Laura Schultz and Vadim Sokolov
May 14, 2019
Transportation Simulator
Outputs: flows (passenger, transit, vehicle)
x = parameters we are certain about (day of week, special event attributes, season)
θ = unobserved parameters, e.g. traveler preferences: value of time, transit bias, ...
outputs = φ(x, θ)
But do they accurately depict the real world?
Objective and Challenges
Relationship between the field observations y and the simulated values:

y_i = φ(x_i, θ) + δ(x_i) + e_i,

where δ is the model discrepancy and e_i the observation noise. Using a maximum likelihood estimate, we fit the predicted flows φ(θ) to the average field traffic conditions ŷ:

minimize_θ ‖ŷ − φ(θ)‖²₂
Challenges
No derivatives available
The simulator is stochastic
Each simulator run takes a few hours on a fast desktop
θ is high-dimensional
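The calibration objective above is an ordinary least-squares fit to a black box. A minimal sketch, with a hypothetical toy stand-in for the simulator φ (the real one takes hours per run and is stochastic):

```python
import numpy as np

def calibration_loss(theta, phi, y_hat):
    """L2 calibration loss ||y_hat - phi(theta)||_2^2."""
    return float(np.sum((y_hat - phi(theta)) ** 2))

# Hypothetical stand-in for the simulator: a noisy quadratic map.
rng = np.random.default_rng(0)
def phi(theta):
    return theta ** 2 + 0.01 * rng.standard_normal(theta.shape)

y_hat = np.array([1.0, 4.0])                       # averaged field flows
loss = calibration_loss(np.array([1.0, 2.0]), phi, y_hat)
```

Even at the "true" θ the loss is nonzero because the simulator output is random, which is one reason gradient-free, surrogate-based methods are used below.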
Bayesian Approach
L(θ, y, x) = (1/N) Σ_{i=1}^{N} ‖y_i − φ(x_i, θ)‖²₂ → minimize
1. Put a prior on continuous functions C[0, ∞]
2. Repeat for k = 1, 2, . . .
3. Evaluate φ at θ^k_1, . . . , θ^k_{n_k}
4. Compute a posterior (integrate)
5. Decide on the next batch of points to be explored: θ^{k+1}_1, . . . , θ^{k+1}_{n_{k+1}}
The posterior minimum is the solution to our optimization problem
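A minimal sketch of this outer loop, with a cheap hypothetical objective standing in for φ and a pure-exploration placeholder where the GP posterior and acquisition step (steps 4 and 5) would go:

```python
import numpy as np

def phi(theta):                       # hypothetical cheap objective
    return (theta - 0.3) ** 2

def pick_next(thetas, grid):
    # Placeholder for steps 4-5: choose the grid point farthest from all
    # evaluated points (pure exploration). A real implementation computes a
    # GP posterior and maximizes an acquisition function instead.
    d = np.min(np.abs(grid[:, None] - np.asarray(thetas)[None, :]), axis=1)
    return grid[np.argmax(d)]

grid = np.linspace(0.0, 1.0, 101)
thetas = [0.0, 1.0]                   # step 3: initial evaluations
losses = [phi(0.0), phi(1.0)]
for k in range(10):                   # step 2: repeat
    nxt = float(pick_next(thetas, grid))
    thetas.append(nxt)
    losses.append(phi(nxt))
best = thetas[int(np.argmin(losses))] # posterior minimum ≈ solution
```

After ten rounds the evaluated points bracket the minimizer θ = 0.3 closely, even with this naive exploration rule.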
Bayesian Optimization: Gaussian Process
Use a Gaussian process surrogate for the loss function:

L(θ, x, y) ∼ GP(m(θ), k(θ, θ′ | x, y, γ) + σ² δ_{θθ′})

m(θ) = E[L(θ)], k(θ, θ′) = E[(L(θ) − m(θ))(L(θ′) − m(θ′))]

A typical choice is an exponentially decaying relation:

k_SE(θ, θ′ | γ) = σ² exp(−‖θ − θ′‖² / (2λ²)), γ = (σ², λ)
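In code the squared-exponential kernel is one line; nearby points get covariance near σ² and distant ones near zero:

```python
import numpy as np

def k_se(theta1, theta2, sigma2=1.0, lam=1.0):
    """k_SE(θ, θ' | γ) = σ² exp(-||θ - θ'||² / (2λ²)), γ = (σ², λ)."""
    d2 = np.sum((np.asarray(theta1) - np.asarray(theta2)) ** 2)
    return sigma2 * np.exp(-0.5 * d2 / lam ** 2)
```

The length scale λ sets how fast the correlation decays with distance; σ² scales the overall variance.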
Gaussian Process: Prior
[Figure: sample paths from the GP prior for λ = 10, σ = 1; λ = 5, σ = 1; λ = 2, σ = 1; λ = 1, σ = 1]
Gaussian Process: Prior
[Figure: sample paths from the GP prior for λ = 2, σ = 16 and λ = 2, σ = 1]
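Prior draws like those in the figures can be reproduced by sampling a multivariate normal with the SE covariance on a grid; smaller λ gives wigglier paths, larger σ larger amplitude (a sketch):

```python
import numpy as np

def sample_gp_prior(grid, lam, sigma, n_samples=3, jitter=1e-8, seed=0):
    """Draw sample paths from a zero-mean GP prior with the SE kernel."""
    d2 = (grid[:, None] - grid[None, :]) ** 2
    K = sigma ** 2 * np.exp(-0.5 * d2 / lam ** 2) + jitter * np.eye(len(grid))
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(np.zeros(len(grid)), K, size=n_samples)

grid = np.linspace(-4, 4, 200)
paths = sample_gp_prior(grid, lam=2.0, sigma=1.0)
```

The small jitter on the diagonal keeps the covariance numerically positive definite.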
Taxi Example
We use the Emukit Playground, which simulates operations of a taxi fleet.
Profit = φ(Number of Taxis) → maximize
Taxi Example: Evaluate at Extremes
Taxi Example: Evaluate in the Middle
Taxi Example: Calculate the Posterior
Taxi Example: Optimization Algorithm Takes Over
Taxi Example: Optimization Algorithm
Taxi Example: Optimization Algorithm
Taxi Example: Optimization Algorithm
Optimal Number of Taxis is 30
Bayesian Optimization: Next Point to Evaluate
Bayesian optimal design: find a point that maximizes a utility function

θ⁺ = arg max_{θ∈D} E[U(θ, L, γ)],

where L is the predictive output (loss) under the GP model at point θ:

E[U(θ, L, γ)] = ∫ U(θ, L, γ) p(L | θ, γ) dL

For a fully Bayesian approach:

E[U(θ, L, γ)] = ∫∫ U(θ, L, γ) p(L | θ, γ) p(γ) dL dγ
Bayesian Optimization: Next Point to Evaluate
E[U(θ, L, γ)] = ∫ U(θ, L, γ) p(L | θ, γ) dL

We can account for the size of the improvement (Expected Improvement):

U(θ) = max(0, L* − L(θ))

Another recent alternative is the upper confidence bound:

a_UCB(θ; β) = m(θ) − βσ(θ)

Explicit exploitation-exploration trade-off, parametrized by β. Extra hyper-parameter!
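Both acquisition functions have closed forms under the GP's Gaussian predictive distribution. A sketch for the minimization setting, where L* is the best loss observed so far and (mu, sigma) is the GP prediction at θ:

```python
from math import erf, exp, pi, sqrt

def norm_pdf(z):
    return exp(-0.5 * z * z) / sqrt(2 * pi)

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def expected_improvement(mu, sigma, L_star):
    """E[max(0, L* - L)] for L ~ N(mu, sigma^2), the closed form for minimization."""
    if sigma <= 0.0:
        return max(0.0, L_star - mu)
    z = (L_star - mu) / sigma
    return (L_star - mu) * norm_cdf(z) + sigma * norm_pdf(z)

def ucb(mu, sigma, beta):
    """a_UCB(θ; β) = m(θ) - β σ(θ): lower is better when minimizing the loss."""
    return mu - beta * sigma
```

EI rewards both a low predicted mean and high uncertainty, while UCB makes the same trade-off explicit through β.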
Complex Transportation Models
The models that need calibration cover cities, not blocks
Detroit has 28,000 road segments and 15 million trips
Each run takes hours to days on a fast desktop
θ is often very high-dimensional and sensitive
Transportation ABM Example: POLARIS
Activity-based demand generation
Static variables θ include probability of activities, distribution
settings for assigning attributes, etc.
High Dimensional θ
Exploit model structure
Better design (e.g. global + local acquisition function)
Make simulator run faster (e.g. reduced order model)
Parallelize
Reduce dimensionality (Higdon, 2008)
More flexible mean and covariance (Gramacy 2009, Gramacy
2011)
Curse of dimensionality in 20 dimensions with 1 second per run:
a grid with 10 points per dimension: 10^20 runs ≈ 3.171 trillion years
evaluating at the 2^20 corners ≈ 12.14 days
Even checking for optimality is troublesome: there are e^20 directions whose pairwise angle is > 60 degrees ≈ 5615 days
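These estimates follow from simple arithmetic at one second per simulator run:

```python
import math

SECONDS_PER_YEAR = 365.25 * 24 * 3600
SECONDS_PER_DAY = 24 * 3600

grid_runs = 10 ** 20                        # 10 points per dimension, 20 dims
grid_years = grid_runs / SECONDS_PER_YEAR   # ≈ 3.17 trillion years

corner_runs = 2 ** 20                       # corners of the 20-dim hypercube
corner_days = corner_runs / SECONDS_PER_DAY # ≈ 12.14 days

direction_days = math.exp(20) / SECONDS_PER_DAY  # ≈ 5615 days
```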
Computational Framework for HPC
HPC Manager:
Submits simulation jobs to compute nodes
Tracks the execution status; resubmits failed jobs and postprocesses outputs of successful jobs
Swift/T-based scripts on Linux and vanilla Python on Windows
Batch Optimization
The acquisition function gives one point to evaluate
What if we have a parallel machine and can run multiple jobs at once?
Instead of using the k best points, we use an approach that prefers to explore:
expected information gain is used instead
Dimensionality Reduction
Basic idea:

θ ≈ g(ψ(θ)), ψ : R^n → R^m, m < n

Given Θ = (θ_1, . . . , θ_K), Y = (y_1, . . . , y_K):

PCA: ψ(θ) = Ψ^T θ, Ψ̂ = arg min_{Ψ∈S_m} ‖Θ − ΨΨ^T Θ‖²_F, S_m = {Ψ ∈ R^{n×m} | Ψ^T Ψ = I}

PLS: Ψ̂ = arg max_{Ψ∈S_m} Cov(Ψ^T Θ, y)

Active Subspace: ψ(θ) = Ψ^T θ, where Ψ are the loadings of the gradient of the simulator φ(θ)

Our approach:

Ŵ = arg min_W ‖θ − g(ψ(θ))‖² + ‖Y − q(ψ(θ))‖²

g, ψ, q are neural networks parametrized by W; we estimate them jointly
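Of the reductions listed, PCA is the simplest to sketch: the minimizer Ψ̂ consists of the top-m left singular vectors of the (centered) sample matrix Θ. Toy data with a genuine 2-dimensional structure assumed:

```python
import numpy as np

def pca_projection(Theta, m):
    """Theta: n x K matrix of parameter samples; returns the n x m matrix Psi
    whose columns are the top-m left singular vectors (the PCA loadings)."""
    U, _, _ = np.linalg.svd(Theta, full_matrices=False)
    return U[:, :m]

rng = np.random.default_rng(1)
# Samples that live near a 2-D subspace of R^10, plus small noise.
basis = rng.standard_normal((10, 2))
Theta = basis @ rng.standard_normal((2, 50)) + 0.01 * rng.standard_normal((10, 50))

Psi = pca_projection(Theta, m=2)
recon_err = np.linalg.norm(Theta - Psi @ Psi.T @ Theta) / np.linalg.norm(Theta)
```

The reconstruction ΨΨ^T Θ recovers the samples up to the noise level, which is exactly the Frobenius-norm objective above.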
Bayesian Optimization in Projected Space
For our framework, two objectives exist:
Capture the lower-dimensional structure (m < n)
Reconstruct any recommendation in the original dimension

µ(x | D) = µ_0(x) + m^T ψ(x)
σ²(x | D) = ψ(x)^T K⁻¹ ψ(x) + 1/β,

where

m = βK⁻¹Ψ^T ỹ
K = βΨ^T Ψ + αI
Dimensionality Reduction: Active Subspaces
Main idea: Apply PCA to function gradients [Russi 2010]
[Figure: L(θ) plotted over (θ_1, θ_2)]
Generalization of approach that picks most sensitive inputs
Replace individual inputs by linear combinations
Dimensionality Reduction: Active Subspaces
C = ∫ (∇_θ L)(∇_θ L)^T dθ = W Λ W^T

C is a sum of positive semi-definite rank-one matrices. Partition the eigenvector matrix W = [W_1, W_2], where the eigenvectors W_1 correspond to the m largest eigenvalues.

Project and separate the subspaces:

θ = ΨΨ^T θ = Ψ_1 Ψ_1^T θ + Ψ_2 Ψ_2^T θ

θ_a = Ψ_1^T θ is the active subspace and Ψ_1 is the reconstruction operator

∫ (∇_{θ_a} L)(∇_{θ_a} L)^T dθ = λ_1 + . . . + λ_m
Dimensionality Reduction: Active Subspaces
Use Monte Carlo to approximate the variance of the gradient:
Uniformly sample θ_1, . . . , θ_N and compute ∇_θ L_j = ∇_θ L(θ_j)

C = ∫ (∇_θ L)(∇_θ L)^T dθ ≈ (1/N) Σ_{j=1}^{N} (∇_θ L_j)(∇_θ L_j)^T = Ĉ = Ŵ Λ̂ Ŵ^T

Can use the bootstrap to estimate the variance of Ĉ

Due to Gittens and Tropp (2011): if N = Ω((M λ_1 / λ_k²) ε⁻² log n), then

|λ_k − λ̂_k| ≤ ελ_k,  dist(W_1, Ŵ_1) ≤ 4λ_1 ε / (λ_m − λ_{m+1})
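A sketch of the Monte Carlo estimate Ĉ on a toy quadratic loss whose gradient always points along a single direction w, so the active subspace is one-dimensional by construction (all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, N, m = 5, 500, 1

w = np.array([1.0, 2.0, 0.0, 0.0, 0.0])   # L(θ) = (wᵀθ)² depends only on wᵀθ

def grad_L(theta):
    return 2.0 * (w @ theta) * w           # ∇L for the toy quadratic

thetas = rng.uniform(-1.0, 1.0, size=(N, n))   # uniform samples of θ
G = np.stack([grad_L(t) for t in thetas])      # N x n matrix of gradients
C_hat = G.T @ G / N                            # (1/N) Σ (∇L_j)(∇L_j)ᵀ
eigvals, W_hat = np.linalg.eigh(C_hat)         # ascending eigenvalues
W1 = W_hat[:, -m:]                             # eigenvectors of the m largest
```

The estimated active direction aligns (up to sign) with w/‖w‖, and the spectrum drops to zero after the first eigenvalue, the gap the theorem above quantifies.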
Dimensionality Reduction: Autoencoder
Consider a simple example:

f(x) = |u^T x| + 0.1 sin(100 x_2)

u is sampled uniformly in [−1, 1], x ∈ R^100, f : R^100 → R

[Figure: x_0 vs f(x) and the FDL reconstruction]

FDL(x) = ReLU(w^T x) + ReLU(−w^T x), ReLU(z) = max(z, 0)
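The FDL form works because ReLU(z) + ReLU(−z) = |z| for any z, so two ReLU units can represent |w^T x| exactly (a quick check):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def fdl(x, w):
    z = w @ x
    return relu(z) + relu(-z)   # equals |wᵀx| by the ReLU identity

rng = np.random.default_rng(3)
w = rng.uniform(-1.0, 1.0, 100)
x = rng.standard_normal(100)
```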
Dimensionality Reduction: Nonlinear
More flexible projections (non-linearities)
Ability to capture local patterns
Ignoring smaller variational directions is no longer an issue
No need for derivatives
Time Dependency
Traffic trends are time-dependent
Thus far, we have accounted for this only implicitly
Our objective function currently averages over space and time
A deep learner captures the time dependency in a smaller-dimensional representation
Mean Improvement
However, a deep-learner mean function is an opportunity to explicitly capture this relationship
The GP mean function influences the predicted posterior and the acquisition function
The standard GP assumption is to set the mean to zero
Results
[Figure: (a) dimensionality reduction, (b) mean function]
References
Gramacy, R. B., & Lee, H. K. (2012). Cases for the nugget in modeling
computer experiments. Statistics and Computing, 22(3), 713-722.
Srinivas, N., Krause, A., Kakade, S. M., & Seeger, M. (2009). Gaussian
process optimization in the bandit setting: No regret and experimental
design. arXiv preprint arXiv:0912.3995.
Snoek, J., Larochelle, H., & Adams, R. P. (2012). Practical bayesian
optimization of machine learning algorithms. In Advances in neural
information processing systems (pp. 2951-2959).
Higdon, D., Gattiker, J., Williams, B., & Rightley, M. (2008). Computer
model calibration using high-dimensional output. Journal of the American
Statistical Association, 103(482), 570-583.
Russi, T. M. (2010). Uncertainty quantification with experimental data and
complex system models (Doctoral dissertation, UC Berkeley).
Gittens, A., & Tropp, J. A. (2011). Tail bounds for all eigenvalues of a
sum of random matrices. arXiv preprint arXiv:1104.4513.
Discussion
Gaussian processes are a practical surrogate for transportation agent-based models
High dimensionality can be treated using Active Subspaces (requires derivatives) or DL
The deep learning surrogate is less understood but leads to better results
Next steps:
DL dimensionality reduction
combine the GP with DL for the surrogate model
open up the black box (utilize the network structure; do not approximate the demand simulator)
2019 Fall Series: Professional Development, Writing Academic Papers…What Work...
The Statistical and Applied Mathematical Sciences Institute
 
2019 GDRR: Blockchain Data Analytics - Machine Learning in/for Blockchain: Fu...
2019 GDRR: Blockchain Data Analytics - Machine Learning in/for Blockchain: Fu...2019 GDRR: Blockchain Data Analytics - Machine Learning in/for Blockchain: Fu...
2019 GDRR: Blockchain Data Analytics - Machine Learning in/for Blockchain: Fu...
The Statistical and Applied Mathematical Sciences Institute
 
2019 GDRR: Blockchain Data Analytics - QuTrack: Model Life Cycle Management f...
2019 GDRR: Blockchain Data Analytics - QuTrack: Model Life Cycle Management f...2019 GDRR: Blockchain Data Analytics - QuTrack: Model Life Cycle Management f...
2019 GDRR: Blockchain Data Analytics - QuTrack: Model Life Cycle Management f...
The Statistical and Applied Mathematical Sciences Institute
 

More from The Statistical and Applied Mathematical Sciences Institute (20)

Causal Inference Opening Workshop - Latent Variable Models, Causal Inference,...
Causal Inference Opening Workshop - Latent Variable Models, Causal Inference,...Causal Inference Opening Workshop - Latent Variable Models, Causal Inference,...
Causal Inference Opening Workshop - Latent Variable Models, Causal Inference,...
 
2019 Fall Series: Special Guest Lecture - 0-1 Phase Transitions in High Dimen...
2019 Fall Series: Special Guest Lecture - 0-1 Phase Transitions in High Dimen...2019 Fall Series: Special Guest Lecture - 0-1 Phase Transitions in High Dimen...
2019 Fall Series: Special Guest Lecture - 0-1 Phase Transitions in High Dimen...
 
Causal Inference Opening Workshop - Causal Discovery in Neuroimaging Data - F...
Causal Inference Opening Workshop - Causal Discovery in Neuroimaging Data - F...Causal Inference Opening Workshop - Causal Discovery in Neuroimaging Data - F...
Causal Inference Opening Workshop - Causal Discovery in Neuroimaging Data - F...
 
Causal Inference Opening Workshop - Smooth Extensions to BART for Heterogeneo...
Causal Inference Opening Workshop - Smooth Extensions to BART for Heterogeneo...Causal Inference Opening Workshop - Smooth Extensions to BART for Heterogeneo...
Causal Inference Opening Workshop - Smooth Extensions to BART for Heterogeneo...
 
Causal Inference Opening Workshop - A Bracketing Relationship between Differe...
Causal Inference Opening Workshop - A Bracketing Relationship between Differe...Causal Inference Opening Workshop - A Bracketing Relationship between Differe...
Causal Inference Opening Workshop - A Bracketing Relationship between Differe...
 
Causal Inference Opening Workshop - Testing Weak Nulls in Matched Observation...
Causal Inference Opening Workshop - Testing Weak Nulls in Matched Observation...Causal Inference Opening Workshop - Testing Weak Nulls in Matched Observation...
Causal Inference Opening Workshop - Testing Weak Nulls in Matched Observation...
 
Causal Inference Opening Workshop - Difference-in-differences: more than meet...
Causal Inference Opening Workshop - Difference-in-differences: more than meet...Causal Inference Opening Workshop - Difference-in-differences: more than meet...
Causal Inference Opening Workshop - Difference-in-differences: more than meet...
 
Causal Inference Opening Workshop - New Statistical Learning Methods for Esti...
Causal Inference Opening Workshop - New Statistical Learning Methods for Esti...Causal Inference Opening Workshop - New Statistical Learning Methods for Esti...
Causal Inference Opening Workshop - New Statistical Learning Methods for Esti...
 
Causal Inference Opening Workshop - Bipartite Causal Inference with Interfere...
Causal Inference Opening Workshop - Bipartite Causal Inference with Interfere...Causal Inference Opening Workshop - Bipartite Causal Inference with Interfere...
Causal Inference Opening Workshop - Bipartite Causal Inference with Interfere...
 
Causal Inference Opening Workshop - Bridging the Gap Between Causal Literatur...
Causal Inference Opening Workshop - Bridging the Gap Between Causal Literatur...Causal Inference Opening Workshop - Bridging the Gap Between Causal Literatur...
Causal Inference Opening Workshop - Bridging the Gap Between Causal Literatur...
 
Causal Inference Opening Workshop - Some Applications of Reinforcement Learni...
Causal Inference Opening Workshop - Some Applications of Reinforcement Learni...Causal Inference Opening Workshop - Some Applications of Reinforcement Learni...
Causal Inference Opening Workshop - Some Applications of Reinforcement Learni...
 
Causal Inference Opening Workshop - Bracketing Bounds for Differences-in-Diff...
Causal Inference Opening Workshop - Bracketing Bounds for Differences-in-Diff...Causal Inference Opening Workshop - Bracketing Bounds for Differences-in-Diff...
Causal Inference Opening Workshop - Bracketing Bounds for Differences-in-Diff...
 
Causal Inference Opening Workshop - Assisting the Impact of State Polcies: Br...
Causal Inference Opening Workshop - Assisting the Impact of State Polcies: Br...Causal Inference Opening Workshop - Assisting the Impact of State Polcies: Br...
Causal Inference Opening Workshop - Assisting the Impact of State Polcies: Br...
 
Causal Inference Opening Workshop - Experimenting in Equilibrium - Stefan Wag...
Causal Inference Opening Workshop - Experimenting in Equilibrium - Stefan Wag...Causal Inference Opening Workshop - Experimenting in Equilibrium - Stefan Wag...
Causal Inference Opening Workshop - Experimenting in Equilibrium - Stefan Wag...
 
Causal Inference Opening Workshop - Targeted Learning for Causal Inference Ba...
Causal Inference Opening Workshop - Targeted Learning for Causal Inference Ba...Causal Inference Opening Workshop - Targeted Learning for Causal Inference Ba...
Causal Inference Opening Workshop - Targeted Learning for Causal Inference Ba...
 
Causal Inference Opening Workshop - Bayesian Nonparametric Models for Treatme...
Causal Inference Opening Workshop - Bayesian Nonparametric Models for Treatme...Causal Inference Opening Workshop - Bayesian Nonparametric Models for Treatme...
Causal Inference Opening Workshop - Bayesian Nonparametric Models for Treatme...
 
2019 Fall Series: Special Guest Lecture - Adversarial Risk Analysis of the Ge...
2019 Fall Series: Special Guest Lecture - Adversarial Risk Analysis of the Ge...2019 Fall Series: Special Guest Lecture - Adversarial Risk Analysis of the Ge...
2019 Fall Series: Special Guest Lecture - Adversarial Risk Analysis of the Ge...
 
2019 Fall Series: Professional Development, Writing Academic Papers…What Work...
2019 Fall Series: Professional Development, Writing Academic Papers…What Work...2019 Fall Series: Professional Development, Writing Academic Papers…What Work...
2019 Fall Series: Professional Development, Writing Academic Papers…What Work...
 
2019 GDRR: Blockchain Data Analytics - Machine Learning in/for Blockchain: Fu...
2019 GDRR: Blockchain Data Analytics - Machine Learning in/for Blockchain: Fu...2019 GDRR: Blockchain Data Analytics - Machine Learning in/for Blockchain: Fu...
2019 GDRR: Blockchain Data Analytics - Machine Learning in/for Blockchain: Fu...
 
2019 GDRR: Blockchain Data Analytics - QuTrack: Model Life Cycle Management f...
2019 GDRR: Blockchain Data Analytics - QuTrack: Model Life Cycle Management f...2019 GDRR: Blockchain Data Analytics - QuTrack: Model Life Cycle Management f...
2019 GDRR: Blockchain Data Analytics - QuTrack: Model Life Cycle Management f...
 

Recently uploaded

Honest Reviews of Tim Han LMA Course Program.pptx
Honest Reviews of Tim Han LMA Course Program.pptxHonest Reviews of Tim Han LMA Course Program.pptx
Honest Reviews of Tim Han LMA Course Program.pptx
timhan337
 
2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...
Sandy Millin
 
Embracing GenAI - A Strategic Imperative
Embracing GenAI - A Strategic ImperativeEmbracing GenAI - A Strategic Imperative
Embracing GenAI - A Strategic Imperative
Peter Windle
 
Unit 2- Research Aptitude (UGC NET Paper I).pdf
Unit 2- Research Aptitude (UGC NET Paper I).pdfUnit 2- Research Aptitude (UGC NET Paper I).pdf
Unit 2- Research Aptitude (UGC NET Paper I).pdf
Thiyagu K
 
The Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official PublicationThe Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official Publication
Delapenabediema
 
The basics of sentences session 5pptx.pptx
The basics of sentences session 5pptx.pptxThe basics of sentences session 5pptx.pptx
The basics of sentences session 5pptx.pptx
heathfieldcps1
 
Operation Blue Star - Saka Neela Tara
Operation Blue Star   -  Saka Neela TaraOperation Blue Star   -  Saka Neela Tara
Operation Blue Star - Saka Neela Tara
Balvir Singh
 
Thesis Statement for students diagnonsed withADHD.ppt
Thesis Statement for students diagnonsed withADHD.pptThesis Statement for students diagnonsed withADHD.ppt
Thesis Statement for students diagnonsed withADHD.ppt
EverAndrsGuerraGuerr
 
Acetabularia Information For Class 9 .docx
Acetabularia Information For Class 9  .docxAcetabularia Information For Class 9  .docx
Acetabularia Information For Class 9 .docx
vaibhavrinwa19
 
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
MysoreMuleSoftMeetup
 
"Protectable subject matters, Protection in biotechnology, Protection of othe...
"Protectable subject matters, Protection in biotechnology, Protection of othe..."Protectable subject matters, Protection in biotechnology, Protection of othe...
"Protectable subject matters, Protection in biotechnology, Protection of othe...
SACHIN R KONDAGURI
 
Language Across the Curriculm LAC B.Ed.
Language Across the  Curriculm LAC B.Ed.Language Across the  Curriculm LAC B.Ed.
Language Across the Curriculm LAC B.Ed.
Atul Kumar Singh
 
Group Presentation 2 Economics.Ariana Buscigliopptx
Group Presentation 2 Economics.Ariana BuscigliopptxGroup Presentation 2 Economics.Ariana Buscigliopptx
Group Presentation 2 Economics.Ariana Buscigliopptx
ArianaBusciglio
 
Synthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptxSynthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptx
Pavel ( NSTU)
 
Digital Artifact 2 - Investigating Pavilion Designs
Digital Artifact 2 - Investigating Pavilion DesignsDigital Artifact 2 - Investigating Pavilion Designs
Digital Artifact 2 - Investigating Pavilion Designs
chanes7
 
Normal Labour/ Stages of Labour/ Mechanism of Labour
Normal Labour/ Stages of Labour/ Mechanism of LabourNormal Labour/ Stages of Labour/ Mechanism of Labour
Normal Labour/ Stages of Labour/ Mechanism of Labour
Wasim Ak
 
How to Make a Field invisible in Odoo 17
How to Make a Field invisible in Odoo 17How to Make a Field invisible in Odoo 17
How to Make a Field invisible in Odoo 17
Celine George
 
A Strategic Approach: GenAI in Education
A Strategic Approach: GenAI in EducationA Strategic Approach: GenAI in Education
A Strategic Approach: GenAI in Education
Peter Windle
 
Francesca Gottschalk - How can education support child empowerment.pptx
Francesca Gottschalk - How can education support child empowerment.pptxFrancesca Gottschalk - How can education support child empowerment.pptx
Francesca Gottschalk - How can education support child empowerment.pptx
EduSkills OECD
 
Home assignment II on Spectroscopy 2024 Answers.pdf
Home assignment II on Spectroscopy 2024 Answers.pdfHome assignment II on Spectroscopy 2024 Answers.pdf
Home assignment II on Spectroscopy 2024 Answers.pdf
Tamralipta Mahavidyalaya
 

Recently uploaded (20)

Honest Reviews of Tim Han LMA Course Program.pptx
Honest Reviews of Tim Han LMA Course Program.pptxHonest Reviews of Tim Han LMA Course Program.pptx
Honest Reviews of Tim Han LMA Course Program.pptx
 
2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...
 
Embracing GenAI - A Strategic Imperative
Embracing GenAI - A Strategic ImperativeEmbracing GenAI - A Strategic Imperative
Embracing GenAI - A Strategic Imperative
 
Unit 2- Research Aptitude (UGC NET Paper I).pdf
Unit 2- Research Aptitude (UGC NET Paper I).pdfUnit 2- Research Aptitude (UGC NET Paper I).pdf
Unit 2- Research Aptitude (UGC NET Paper I).pdf
 
The Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official PublicationThe Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official Publication
 
The basics of sentences session 5pptx.pptx
The basics of sentences session 5pptx.pptxThe basics of sentences session 5pptx.pptx
The basics of sentences session 5pptx.pptx
 
Operation Blue Star - Saka Neela Tara
Operation Blue Star   -  Saka Neela TaraOperation Blue Star   -  Saka Neela Tara
Operation Blue Star - Saka Neela Tara
 
Thesis Statement for students diagnonsed withADHD.ppt
Thesis Statement for students diagnonsed withADHD.pptThesis Statement for students diagnonsed withADHD.ppt
Thesis Statement for students diagnonsed withADHD.ppt
 
Acetabularia Information For Class 9 .docx
Acetabularia Information For Class 9  .docxAcetabularia Information For Class 9  .docx
Acetabularia Information For Class 9 .docx
 
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
Mule 4.6 & Java 17 Upgrade | MuleSoft Mysore Meetup #46
 
"Protectable subject matters, Protection in biotechnology, Protection of othe...
"Protectable subject matters, Protection in biotechnology, Protection of othe..."Protectable subject matters, Protection in biotechnology, Protection of othe...
"Protectable subject matters, Protection in biotechnology, Protection of othe...
 
Language Across the Curriculm LAC B.Ed.
Language Across the  Curriculm LAC B.Ed.Language Across the  Curriculm LAC B.Ed.
Language Across the Curriculm LAC B.Ed.
 
Group Presentation 2 Economics.Ariana Buscigliopptx
Group Presentation 2 Economics.Ariana BuscigliopptxGroup Presentation 2 Economics.Ariana Buscigliopptx
Group Presentation 2 Economics.Ariana Buscigliopptx
 
Synthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptxSynthetic Fiber Construction in lab .pptx
Synthetic Fiber Construction in lab .pptx
 
Digital Artifact 2 - Investigating Pavilion Designs
Digital Artifact 2 - Investigating Pavilion DesignsDigital Artifact 2 - Investigating Pavilion Designs
Digital Artifact 2 - Investigating Pavilion Designs
 
Normal Labour/ Stages of Labour/ Mechanism of Labour
Normal Labour/ Stages of Labour/ Mechanism of LabourNormal Labour/ Stages of Labour/ Mechanism of Labour
Normal Labour/ Stages of Labour/ Mechanism of Labour
 
How to Make a Field invisible in Odoo 17
How to Make a Field invisible in Odoo 17How to Make a Field invisible in Odoo 17
How to Make a Field invisible in Odoo 17
 
A Strategic Approach: GenAI in Education
A Strategic Approach: GenAI in EducationA Strategic Approach: GenAI in Education
A Strategic Approach: GenAI in Education
 
Francesca Gottschalk - How can education support child empowerment.pptx
Francesca Gottschalk - How can education support child empowerment.pptxFrancesca Gottschalk - How can education support child empowerment.pptx
Francesca Gottschalk - How can education support child empowerment.pptx
 
Home assignment II on Spectroscopy 2024 Answers.pdf
Home assignment II on Spectroscopy 2024 Answers.pdfHome assignment II on Spectroscopy 2024 Answers.pdf
Home assignment II on Spectroscopy 2024 Answers.pdf
 

MUMS: Transition & SPUQ Workshop - Practical Bayesian Optimization for Urban Transportation System Simulators - Laura Schultz and Vadim Sokolov, May 14, 2019

8/37
Bayesian Optimization: Gaussian Process
Use a Gaussian process surrogate for the loss function
L(θ, x, y) ∼ GP(m(θ), k(θ, θ′ | γ) + σ²δ_θθ′)
m(θ) = E[L(θ)], k(θ, θ′) = E[(L(θ) − m(θ))(L(θ′) − m(θ′))]
A typical choice is an exponentially decaying relation (squared-exponential kernel)
k_SE(θ, θ′ | γ) = σ² exp(−½ ‖(θ − θ′)/λ‖²), γ = (σ², λ)
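As an illustration of the squared-exponential kernel and of the GP prior draws shown on the "Prior" slides, here is a minimal numpy sketch (all names are illustrative, not from the talk):

```python
import numpy as np

def k_se(theta1, theta2, sigma=1.0, lam=1.0):
    # Squared-exponential kernel: sigma^2 * exp(-0.5 * ||(theta - theta') / lambda||^2)
    d = (np.asarray(theta1, dtype=float) - np.asarray(theta2, dtype=float)) / lam
    return sigma**2 * np.exp(-0.5 * float(d @ d))

# Covariance matrix over a 1-D grid, then three draws from the GP prior
# (this is how the lambda/sigma panels on the "Prior" slides are generated)
grid = np.linspace(-4.0, 4.0, 50)
K = np.array([[k_se([a], [b], sigma=1.0, lam=2.0) for b in grid] for a in grid])
rng = np.random.default_rng(0)
prior_draws = rng.multivariate_normal(
    np.zeros(len(grid)), K + 1e-8 * np.eye(len(grid)), size=3)
```

Larger λ yields smoother draws and larger σ scales their amplitude, matching the panels on the prior slides.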
9/37
Gaussian Process: Prior
[Figure: GP prior draws for λ = 10, 5, 2, 1 with σ = 1]
10/37
Gaussian Process: Prior
[Figure: GP prior draws for λ = 2 with σ = 16 vs. σ = 1]
11/37
Taxi Example
We use the Emukit Playground. It simulates the operations of a taxi fleet.
Profit = φ(Number of Taxis) → maximize
15/37
Taxi Example: Optimization Algorithm Takes Over
18/37
Taxi Example: Optimization Algorithm
Optimal number of taxis is 30
19/37
Bayesian Optimization: Next Point to Evaluate
Bayesian optimal design: find the point that maximizes a utility function
θ⁺ = arg max_{θ∈D} E[U(θ, L, γ)]
where L is the predictive output (loss) under the GP model at point θ:
E[U(θ, L, γ)] = ∫ U(θ, L, γ) p(L | θ, γ) dL
For a fully Bayesian approach,
E[U(θ, L, γ)] = ∫∫ U(θ, L, γ) p(L | θ, γ) p(γ) dL dγ
20/37
Bayesian Optimization: Next Point to Evaluate
E[U(θ, L, γ)] = ∫ U(θ, L, γ) p(L | θ, γ) dL
We can account for the size of improvement (Expected Improvement):
U(θ) = max(0, L* − L(θ))
Another recent alternative is the upper confidence bound
a_UCB(θ; β) = m(θ) − βσ(θ)
Explicit exploitation-exploration trade-off, parametrized by β. Extra hyperparameter!
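The two acquisition rules above admit closed forms under the GP, where the loss at θ is normal with mean mu and standard deviation sigma. A minimal sketch for the minimization setting of the slides (function names are illustrative; note that with the minus sign, the m(θ) − βσ(θ) rule acts as a lower confidence bound to be minimized):

```python
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, loss_best):
    # E[max(0, L* - L)] with L ~ N(mu, sigma^2); standard closed form
    # via the Gaussian cdf/pdf
    if sigma <= 0.0:
        return max(0.0, loss_best - mu)
    z = (loss_best - mu) / sigma
    return (loss_best - mu) * norm_cdf(z) + sigma * norm_pdf(z)

def confidence_bound(mu, sigma, beta=2.0):
    # a_UCB(theta; beta) = m(theta) - beta * sigma(theta); minimizing this
    # trades off exploitation (low mean) against exploration (high variance)
    return mu - beta * sigma
```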
21/37
Complex Transportation Models
The models that need calibration cover cities, not blocks
Detroit has 28,000 road segments with 15 million trips
Each run takes hours to days on a fast desktop
θ is often very high dimensional and sensitive
22/37
Transportation ABM Example: POLARIS
Activity-based demand generation
Static variables θ include probabilities of activities, distribution settings for assigning attributes, etc.
23/37
High Dimensional θ
Exploit model structure
Better design (e.g. global + local acquisition function)
Make the simulator run faster (e.g. reduced-order model)
Parallelize
Reduce dimensionality (Higdon, 2008)
More flexible mean and covariance (Gramacy 2009, Gramacy 2011)
Curse of dimensionality in 20 dimensions with 1 second per run:
a grid with 10 points per dimension → 10²⁰ runs = 3.17 trillion years
evaluate at the 2²⁰ corners = 12.14 days
Even checking for optimality is troublesome: there are e²⁰ directions whose pairwise angle is > 60 degrees = 5615 days
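The back-of-the-envelope numbers above can be checked directly (assuming one second per simulator run in 20 dimensions):

```python
import math

SECONDS_PER_DAY = 24 * 3600
SECONDS_PER_YEAR = 365.25 * SECONDS_PER_DAY

grid_runs = 10 ** 20                             # 10 points per dimension, 20 dimensions
grid_years = grid_runs / SECONDS_PER_YEAR        # about 3.17 trillion years

corner_runs = 2 ** 20                            # one evaluation per corner of the hypercube
corner_days = corner_runs / SECONDS_PER_DAY      # about 12.14 days

direction_days = math.e ** 20 / SECONDS_PER_DAY  # e^20 directions: about 5615 days
```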
24/37
Computational Framework for HPC
HPC Manager:
Submits simulation jobs to compute nodes
Tracks the execution status
Resubmits failed jobs and postprocesses outputs of successful jobs
Swift/T-based scripts on Linux and vanilla Python on Windows
25/37
Batch Optimization
The acquisition function gives one point to evaluate
What if we have a parallel machine and can run multiple jobs at once?
Instead of using the k best points, we use an approach that prefers to explore:
expected information gain is used instead
26/37
Dimensionality Reduction
Basic idea: θ ≈ g(ψ(θ)), ψ : Rⁿ → Rᵐ, m < n
Given Θ = (θ₁, . . . , θ_K), Y = (y₁, . . . , y_K):
PCA: ψ(θ) = Ψᵀθ, Ψ̂ = arg min_{Ψ∈S_m} ‖Θ − ΨΨᵀΘ‖²_F, S_m = {Ψ ∈ R^{n×m} | ΨᵀΨ = I}
PLS: Ψ̂ = arg max_{Ψ∈S_m} Cov(ΨᵀΘ, y)
Active Subspace: ψ(θ) = Ψᵀθ, where Ψ holds the loadings of the gradient of the simulator φ(θ)
Our approach: Ŵ = arg min_W ‖θ − g(ψ(θ))‖² + ‖Y − q(ψ(θ))‖²
g, ψ, q are neural networks parametrized by W; we estimate them jointly
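The PCA variant above, Ψ̂ = arg min over orthonormal Ψ of ‖Θ − ΨΨᵀΘ‖²_F, is solved by the top-m left singular vectors of Θ. A minimal numpy sketch (the synthetic data and names are illustrative):

```python
import numpy as np

def pca_projection(Theta, m):
    # argmin over orthonormal Psi of ||Theta - Psi Psi^T Theta||_F^2 is
    # spanned by the top-m left singular vectors of Theta
    U, _, _ = np.linalg.svd(Theta, full_matrices=False)
    return U[:, :m]

rng = np.random.default_rng(1)
# 50 samples of a 10-dimensional theta that lies near a 2-dimensional subspace
A = rng.normal(size=(10, 2))
Theta = A @ rng.normal(size=(2, 50)) + 0.01 * rng.normal(size=(10, 50))

Psi = pca_projection(Theta, 2)
recon = Psi @ (Psi.T @ Theta)   # Psi Psi^T Theta, the rank-2 reconstruction
```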
27/37
Bayesian Optimization in Projected Space
For our framework, two objectives exist:
Capture the lower-dimensional structure (m < n)
Reconstruct any recommendation to the original dimension
μ(x | D) = μ₀(x) + mᵀψ(x)
σ²(x | D) = ψ(x)ᵀK⁻¹ψ(x) + 1/β,
where m = βK⁻¹Ψᵀỹ and K = βΨᵀΨ + αI
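The posterior formulas on this slide have the form of Bayesian linear regression in the feature space ψ(x); a minimal sketch under that reading (the feature map, data, and hyperparameters below are illustrative, not from the talk):

```python
import numpy as np

def psi(x):
    # illustrative low-dimensional feature map
    return np.array([1.0, x, x * x])

alpha, beta = 1e-6, 1e6            # weak prior precision, low observation noise
xs = np.linspace(-1.0, 1.0, 20)
ys = 2.0 * xs**2 - xs + 0.5        # noiseless quadratic targets

Phi = np.array([psi(x) for x in xs])        # design matrix, rows psi(x)^T
K = beta * Phi.T @ Phi + alpha * np.eye(3)  # K = beta * Psi^T Psi + alpha * I
m = beta * np.linalg.solve(K, Phi.T @ ys)   # m = beta * K^-1 Psi^T y

def predict(x):
    # mu(x | D) = m^T psi(x),  sigma^2(x | D) = psi(x)^T K^-1 psi(x) + 1/beta
    p = psi(x)
    return m @ p, p @ np.linalg.solve(K, p) + 1.0 / beta
```

With a weak prior and noiseless data the posterior mean recovers the generating coefficients, illustrating how recommendations in the projected space map back to predictions.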
28/37
Dimensionality Reduction: Active Subspaces
Main idea: apply PCA to function gradients [Russi 2010]
[Figure: contours of L(θ) in the (θ₁, θ₂) plane]
A generalization of the approach that picks the most sensitive inputs:
replace individual inputs by linear combinations
29/37
Dimensionality Reduction: Active Subspaces
C = ∫ (∇_θ L)(∇_θ L)ᵀ dθ = W Λ Wᵀ
C is a sum of positive semi-definite rank-one matrices.
Partition the eigenvector matrix W = [Ψ₁, Ψ₂], where the eigenvectors Ψ₁ correspond to the m largest eigenvalues.
Project and separate the subspaces:
θ = ΨΨᵀθ = Ψ₁Ψ₁ᵀθ + Ψ₂Ψ₂ᵀθ
θ_a = Ψ₁ᵀθ is the active subspace and Ψ₁ is the reconstruction operator
∫ (∇_{θ_a} L)(∇_{θ_a} L)ᵀ dθ has trace λ₁ + . . . + λ_m
30/37
Dimensionality Reduction: Active Subspaces
Use Monte Carlo to approximate the variance of the gradient:
uniformly sample θ₁, . . . , θ_N and compute ∇_θ L_j = ∇_θ L(θ_j)
C = ∫ (∇_θ L)(∇_θ L)ᵀ dθ ≈ (1/N) Σ_{j=1}^N (∇_θ L_j)(∇_θ L_j)ᵀ = Ŵ Λ̂ Ŵᵀ = Ĉ
Can use the bootstrap to estimate the variance of Ĉ
Due to Gittens and Tropp (2011): if N = Ω( (Mλ₁ / (λ_k ε²)) log n ), then
|λ_k − λ̂_k| ≤ ελ_k, dist(W₁, Ŵ₁) ≤ 4λ₁ε / (λ_m − λ_{m+1})
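The Monte Carlo estimate of C above can be sketched on a toy loss whose gradient varies along a single known direction, so the active subspace is one-dimensional (the toy loss and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 5, 2000
u = np.zeros(n)
u[0] = 1.0                                  # the true active direction

def grad_L(theta):
    # L(theta) = 0.5 * (u^T theta)^2, so grad L(theta) = (u^T theta) * u
    return (u @ theta) * u

# C-hat = (1/N) * sum_j (grad L_j)(grad L_j)^T over uniform samples of theta
thetas = rng.uniform(-1.0, 1.0, size=(N, n))
C_hat = sum(np.outer(grad_L(t), grad_L(t)) for t in thetas) / N

lams, W = np.linalg.eigh(C_hat)
lams, W = lams[::-1], W[:, ::-1]            # sort eigenvalues in descending order
W1 = W[:, :1]                               # estimated active subspace basis
```

One dominant eigenvalue (here E[(uᵀθ)²] = 1/3 for θ ~ U[−1, 1]) with the rest near zero signals a one-dimensional active subspace, and W₁ aligns with u.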
31/37
Dimensionality Reduction: Autoencoder
Consider a simple example:
f(x) = |uᵀx| + 0.1 · sin(100 x₂)
u is sampled uniformly in [−1, 1], x ∈ R¹⁰⁰, f : R¹⁰⁰ → R
[Figure: x₀ vs f(x) and its FDL reconstruction]
FDL(x) = ReLU(wᵀx) + ReLU(−wᵀx), ReLU(z) = max(z, 0)
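The FDL reconstruction on this slide works because ReLU(wᵀx) + ReLU(−wᵀx) = |wᵀx|, i.e. two ReLU units recover the absolute-value feature exactly. A quick check (names illustrative):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def fdl(x, w):
    # FDL(x) = ReLU(w^T x) + ReLU(-w^T x), which equals |w^T x|
    s = w @ x
    return relu(s) + relu(-s)

rng = np.random.default_rng(0)
w = rng.uniform(-1.0, 1.0, size=100)
x = rng.normal(size=100)
```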
32/37
Dimensionality Reduction: Nonlinear
More flexible projections (non-linearities)
Ability to capture local patterns
Ignoring smaller variational directions is no longer an issue
No need for derivatives
33/37
Time Dependency
Traffic trends are time-dependent
Thus far, we account for this only implicitly:
our objective function currently averages over space and time
A deep learner learns this structure and captures it in a smaller-dimensional representation
34/37
Mean Improvement
However, a deep-learner mean function is an opportunity to capture this relationship explicitly
The GP mean function influences the predicted posterior and the acquisition function
The standard GP assumption is to set the mean to zero
36/37
References
Gramacy, R. B., & Lee, H. K. (2012). Cases for the nugget in modeling computer experiments. Statistics and Computing, 22(3), 713-722.
Srinivas, N., Krause, A., Kakade, S. M., & Seeger, M. (2009). Gaussian process optimization in the bandit setting: No regret and experimental design. arXiv preprint arXiv:0912.3995.
Snoek, J., Larochelle, H., & Adams, R. P. (2012). Practical Bayesian optimization of machine learning algorithms. In Advances in Neural Information Processing Systems (pp. 2951-2959).
Higdon, D., Gattiker, J., Williams, B., & Rightley, M. (2008). Computer model calibration using high-dimensional output. Journal of the American Statistical Association, 103(482), 570-583.
Russi, T. M. (2010). Uncertainty quantification with experimental data and complex system models (Doctoral dissertation, UC Berkeley).
Gittens, A., & Tropp, J. A. (2011). Tail bounds for all eigenvalues of a sum of random matrices. arXiv preprint arXiv:1104.4513.
37/37
Discussion
A Gaussian process is a practical surrogate for transportation agent-based models
High dimensionality can be treated using Active Subspaces (requires derivatives) or deep learning
A deep learning surrogate is less understood but leads to better results
Next steps:
deep learning for dimensionality reduction
combine the GP with deep learning for the surrogate model
opening up the black box (utilize the network structure; do not approximate the demand simulator)