Power systems and uncertainties:
revisited through games
Intro: uncertainties in power systems
Power systems.
Bias, bias correction: by resampling.
Non-stochastic uncertainties: Nash method.
If time permits, portfolio methods.
2
Power systems : key issues
● Real time: network topology optimization
● Short term: when to use hydroelectricity ?
● Long term :
– Should we build big connections France-XXX ?
– Should we build big wind farms ?
==> with X billions, what should I do ?
Goal: build tools to answer these questions
3
Power systems : old and new
● Old: production → transmission → distribution
● New: all mixed + more uncertainties
+ renewable, + storage, – CO2 ?, + distributed, + demand-side, + HVDC, ? ? ?
4
Power systems
argmax_investment U [ argmax_policy E reward ]
Uncertainties: techn., weather, geopol. (+ load, faults, ...)
10
Power systems
argmax_investment U [ argmax_policy E reward ]   (uncertainties: techn., weather, geopol.)
Tools: paral. stochastic optim.
11
Power systems
argmax_investment U [ argmax_policy E reward ]   (uncertainties: techn., weather, geopol.)
Tools: Wald, Savage, Nash; paral. stochastic optim.
12
Power systems
argmax_investment U [ argmax_policy E reward ]   (uncertainties: techn., weather, geopol.)
Tools: Wald, Savage, Nash; paral. stochastic optim.; discrete time control.
13
Power systems
argmax_investment U [ argmax_policy E reward ]   (uncertainties: techn., weather, geopol.)
Tools: Wald, Savage, Nash; paral. stochastic optim.; discrete time control; bias control.
14
Power systems
argmax_investment U [ argmax_policy E reward ]   (uncertainties: techn., weather, geopol.)
Tools: Wald, Savage, Nash; paral. stochastic optim.; discrete time control; bias control; high dimensional M-estimation, model-selection, ``derandomization''.
15
Power systems
argmax_investment U [ argmax_policy E reward ]   (uncertainties: techn., weather, geopol.)
Tools: Wald, Savage, Nash; paral. stochastic optim.; discrete time control; bias control; high dimensional M-estimation, model-selection, ``derandomization''.
``very'' open issue. 10 years ok. What about 40 years ?
16
Various uncertainties
● Data unavailable
● Stochastic
– Weather <== averaging over data is (≈) ok
– Faults <== averaging over data is not ok (otherwise many lines would have fault probability zero)
● Non stochastic (truth hidden or partly adversarial)
– Technology
– Gas curtailment
– CO2 penalization
Power systems and uncertainties
● Bias correction <== when we average over data; uncertainty = coverage
● Non stochastic uncertainties <== oh my god, how to deal with that ?
● Noisy optimization <== when we have a model of the stochastic uncertainties
Bias correction
Maximum height, from a finite sample:
the maximum height of 13 randomly drawn humans is (almost always strictly) less than the maximum height of all humans.
Bias correction, more surprising
95% quantile of height, from a finite sample:
the .95 quantile of the height of 13 randomly drawn humans is usually less than the .95 quantile over all humans.
Octave check (a “population” of 10000 draws vs a subsample of 100; empirical_inv is in the Octave statistics package):
R = [];
for i = 1:1000
  x = randn(10000, 1);                  % the “population” sample
  q  = empirical_inv(.95, x);           % .95 quantile of the full sample
  aq = empirical_inv(.95, x(1:100));    % .95 quantile of a subsample of 100
  R = [R; q, aq];
end
mean(R(:,1) > R(:,2))                   % fraction of runs where the subsample quantile is lower
Bias correction, capacity
● Optimal capacity ?
Optimal capacities = argmin_x E cost(x, random)
Approx. capacities = argmin_x ( cost(x,r1) + … + cost(x,rn) )
● Also biased! Usually, approx. capacities << optimal capacities.
● On average, the capacity that is optimal for a sample is less than the “real” optimal capacity.
Surprising ?
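A minimal Octave sketch of this bias on a toy, newsvendor-style sizing problem (the cost model and numbers below are assumptions for illustration, not the talk's power-system model): the capacity obtained from a finite sample of demand scenarios is, on average, below the true optimal capacity.
% Toy cost (assumed): cost(x, d) = 1*x + 10*max(d - x, 0), demand d ~ N(100, 15).
% The true optimal capacity is the 0.9 quantile of d (critical ratio 1 - 1/10);
% the sample-based optimum is the 0.9 empirical quantile of the observed demands.
x_true = 100 + 15 * 1.2816;          % 0.9 quantile of N(100, 15), z_0.9 ~ 1.2816
T = 2000; n = 20;                    % 2000 trials, 20 demand scenarios each
x_hat = zeros(T, 1);
for t = 1:T
  d = 100 + 15 * randn(n, 1);        % the observed scenarios
  s = sort(d);
  x_hat(t) = s(ceil(0.9 * n));       % sample-based “optimal” capacity
end
[mean(x_hat), x_true]                % the sample-based capacity is biased low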
Bias correction by bootstrap, ideas
Definitions:
● x0 = Real optimum, against the real (unknown) distribution
● x1 = Optimum against the sample
● x2 = Optimum against a resample of the sample
Idea:
● x2 is to x1, what x1 is to x0. (Efron)
● So E(x2-x1) = E(x1-x0) (approx)
● So Ex0 = 2Ex1 - Ex2 (approx)
● So let us use 2x1 – Ex2 (expectation over resamplings)
N among N (bootstrap resample: draw N points among N, with replacement)
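A minimal Octave sketch of the 2·x1 − E(x2) idea, applied to the .95 quantile example above (the sort-based empirical quantile is an assumption; this only illustrates the mechanics, not a guarantee):
T = 1000; n = 100; B = 50;
k = ceil(0.95 * n);                       % index of the empirical .95 quantile
raw = zeros(T, 1); corrected = zeros(T, 1);
for t = 1:T
  x = randn(n, 1);
  xs = sort(x); x1 = xs(k);               % x1: estimate on the sample
  x2 = zeros(B, 1);
  for b = 1:B
    r = sort(x(randi(n, n, 1)));          % resample: N draws among N, with replacement
    x2(b) = r(k);                         % x2: estimate on the resample
  end
  raw(t) = x1;
  corrected(t) = 2 * x1 - mean(x2);       % bootstrap bias correction
end
[mean(raw), mean(corrected), 1.6449]      % 1.6449 ~ true .95 quantile of N(0,1)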
Bias correction by bootstrap
Standard approximation (the classical approximation, used twice below): minimize cost(x,r1) + … + cost(x,rn) over x.
Bootstrap version: corrected estimate = 2 × (optimum on the sample) − (average, over resamples, of the optimum on the resample).
Bias correction by Jackknife
(often better, for bias correction)
Standard approximation: minimize cost(x,r1) + … + cost(x,rn) over x.
Jackknife correction: corrected estimate = n × (optimum on the sample) − (n−1) × (average of the optima on all but one).
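A minimal Octave sketch of the jackknife correction on a textbook statistic (the plug-in variance estimator, which underestimates the variance), not on the capacity problem itself:
v = @(y) mean((y - mean(y)).^2);          % biased plug-in variance estimator
T = 2000; n = 10;
raw = zeros(T, 1); jack = zeros(T, 1);
for t = 1:T
  x = randn(n, 1);
  v1 = v(x);
  loo = zeros(n, 1);
  for i = 1:n
    loo(i) = v(x([1:i-1, i+1:n]));        % estimate on all but one
  end
  raw(t) = v1;
  jack(t) = n * v1 - (n - 1) * mean(loo); % jackknife bias correction
end
[mean(raw), mean(jack), 1.0]              % raw ~ 0.9, jackknifed ~ 1.0 (true variance)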
Cross validation
● For example, 10 scenarios (possible winters)
● You have an algorithm for choosing the optimal capacity.
● How to check that it works ?
● Bad solution: apply your algorithm on the 10 scenarios and evaluate it on those same scenarios.
● Better solution: apply your algorithm on 9 scenarios, check on the 10th (and then rotate); see the sketch below.
==> 100% mandatory in machine learning research
==> not widely used in capacity expansion planning
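A minimal Octave sketch of the rotation, with made-up demand scenarios, an assumed cost model, and a deliberately naive sizing “algorithm” (all hypothetical):
d = [50 55 60 52 58 61 54 57 59 63];                % 10 hypothetical winter demand scenarios
cost = @(x, dd) 1.0 * x + 10.0 * max(dd - x, 0);    % build cost + shortage penalty (assumed)
cv = zeros(1, 10);
for k = 1:10
  train = d([1:k-1, k+1:10]);                       % 9 scenarios for the algorithm
  x = max(train);                                   % the algorithm under test (here: worst-case sizing)
  cv(k) = cost(x, d(k));                            % check on the held-out 10th scenario
end
mean(cv)                                            % cross-validated cost; use it to compare algorithms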
Bias correction: many tricks
● Apply the classical estimate, the new one, and variants of the new one, and choose between them by cross-validation <== excellent, but expensive
● Average the bias correction over multiple homogeneous capacities
● Prefer the correction only if cross-validation predicts a clear success (at least 10%) <== safe
Good trick when you want a new tool to be accepted by users
Bias correction
Bias & bias correction
Main take-home messages:
– The cost of stochastic systems is usually underestimated (bias 1)
– Optimal capacities are usually underestimated (bias 2)
A simple alternate method: check by cross-validation whether capacities increased by +1%, +2%, +4%, +8%, +16% are better than +0%.
Power systems and uncertainties
Non stochastic uncertainties:
Wald, Savage and Nash
The problem: non probabilistic uncertainties
We have plenty of options, and there are plenty of possible scenarios.
No probabilities + computing one entry takes time.
What should we decide ?
(Diagram: a matrix of rewards, options × scenarios.)
Non stochastic uncertainties
● Uncertainties:
– Stochastic: weather, load
– Not stochastic: CO2 penalization, gas curtailment... (typically, a scenario)
● Model: R(k,s) = reward if option k, scenario s
● How to deal with it ?
– Scenario based: a human checks all R(k,s) and makes a decision :-)
– Wald: choose k maximizing min_s R(k,s) <== worst case
– Savage:
● R'(k,s) = max_k' R(k',s) – R(k,s)
● choose k minimizing max_s R'(k,s) <== best “regret”
In both cases, this is optimal when Nature decides after us and knows what we have chosen:
● Wald: Nature knows and hates us
● Savage: Nature wants us to have regrets
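A minimal Octave sketch on a hypothetical reward matrix (made-up numbers), showing that Wald and Savage can select different options:
R = [  0 100  40;        % R(k,s): 3 options (rows) x 3 scenarios (columns), hypothetical
      49  50  45;
      30  60  50];
% Wald: maximize the worst-case reward over scenarios.
[~, k_wald] = max(min(R, [], 2))          % -> option 2
% Savage: minimize the worst-case regret R'(k,s) = max_k R(k,s) - R(k,s).
regret = max(R, [], 1) - R;               % implicit broadcasting (Octave)
[~, k_savage] = min(max(regret, [], 2))   % -> option 3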
Doesn't it look like a game ?
We are ok for “Nature” to be adversarial, but not an opponent who knows our decision ==> Nature decides first, privately.
Rock / paper / scissors payoff matrix:
 0  1 -1
-1  0  1
 1 -1  0
Non stochastic uncertainties
a (hard to read) proposal
Nash:
a probabilistic s (a mixed strategy over scenarios) such that max_k E R(k,s) is minimal,
equivalently a probabilistic k (a mixed strategy over options!) such that min_s E R(k,s) is maximal
==> worst case, for a Nature who does not know our decision
==> slightly philosophical: do you prefer Wald, Savage, Nash, or an expert decision ?
Non stochastic uncertainties
clearer proposal (hopefully)
Given:
● A list of “s” scenarios (possibly huge)
● A list of “k” options (possibly huge)
● A function R(.,.)
Output:
● a sublist of scenarios (approx. Nash optimal),
● a sublist of options (approx. Nash optimal),
● probabilities on them
==> cheaper than Wald / Savage, and does selection
Comp. Time ~ m log(m)
where m= #K + #S
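A minimal sketch (not the talk's m·log(m) algorithm) of how such mixed strategies can be approximated, via fictitious play on the zero-sum game with payoff R(k,s): we maximize, “Nature” (the scenario player) minimizes. The matrix is the hypothetical one used above.
R = [  0 100  40;
      49  50  45;
      30  60  50];
[K, S] = size(R);
ck = zeros(K, 1); cs = zeros(S, 1);      % how often each option / scenario was played
k = 1; s = 1;
for t = 1:20000
  ck(k) += 1; cs(s) += 1;
  [~, k] = max(R * cs);                  % our best response to Nature's empirical mixture
  [~, s] = min(ck' * R);                 % Nature's best response to our empirical mixture
end
p_options   = ck / sum(ck)               % approximate Nash mixed strategy over options
p_scenarios = cs / sum(cs)               % approximate Nash mixed strategy over scenarios
Options and scenarios that end up with negligible probability are discarded; the supports of the two mixed strategies play the role of the selected sublists.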
K policies, S scenarios (no
probabilities), how to make a
decision ?
Short version
Intuitively cool: we naturally select a subset of scenarios / options, and Nature decides privately, before us.
Practically cool: fast.
Problem: the output is a stochastic decision (would you decide on nuclear power plants with a dice roll ?)
(Diagram: the options × scenarios matrix, with the selected subset highlighted.)
Power systems and uncertainties
WHEN UNCERTAINTIES ARE A
STOCHASTIC MODEL
Our experience:
- noisy optimization is a nightmare
- hard to find the best algorithm
- don't choose: combine ==>
portfolio
Noisy optimization
● Two families of optimization principles
– Optimizing in front of a (corrected) archive of possibilities
– Optimizing in front of a stochastic model
● Pros/cons:
– Archive = real data
– Model = can deal with extreme events and new scenarios <== faults in networks (the death of the N-1 criterion ?)
==> this part: **robust** noisy optimization, thanks to portfolio methods
Portfolio methods in noisy optimization
I. Introduction: portfolios
II. Portfolio methods
III. Portfolio NOT in noisy optimization
IV. Portfolio methods in noisy optimization
Who is the best ? ==> portfolio methods (portfolio = pissing contest...)
● Games: MC (Monte Carlo), MCTS, Alphabeta, DPS (direct policy search)
● Control: PID, MCTS, Fuzzy, Bellman-style
● Supervised ML on R^d: NeuralNet, SVM, Fuzzy, Decision trees, Linear reg., Gen. linear regression, ...
● Feature selection: Feature ranking, Embedded approaches
● Optimization / Bandits: UCB variants, Uniform, SR (successive reject)
==> The best algorithm is not always the same, by far (FPS vs Chess vs Go / Hex vs fog of war; for publishing vs when it must work).
How to design robust algorithms ?
The design of robust algorithms
● Do not run only one algorithm.
● Run several algorithms, concurrently.
● Pick up the best response.
If, for all problems, out of 11 algorithms:
ten of the algorithms need >= 40 hours to complete,
but one algorithm completes in 10 minutes (not always the same one),
then:
● Average time = 110 minutes (10 minutes, multiplied by 11 for concurrency) + no variance
● Whereas with sequential runs, average time = 200 hours + 10 minutes (on average, 5 of the 40-hour algorithms are tried before the fast one) + variance
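A quick Octave check of this arithmetic (the 11-algorithm timings above, tried in a uniformly random order in the sequential case):
T = 10000; seq = zeros(T, 1);
for t = 1:T
  times = [repmat(40 * 60, 1, 10), 10];   % minutes: ten 40-hour runs, one 10-minute run
  times = times(randperm(11));            % we do not know which one is fast
  k = find(times == 10, 1);               % position of the fast algorithm
  seq(t) = sum(times(1:k-1)) + 10;        % sequential: run the slow ones fully, then the fast one
end
mean(seq) / 60                            % ~200 hours + 10 minutes, with a large variance
11 * 10                                   % concurrent on one core: 110 minutes, no variance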
Portfolio methods: you already do it
Usually, we compare algorithms offline on large datasets.
Possible improvements:
● online comparison ?
● problem-specific comparison ?
● hybridization ?
● different budget per algorithm ? more budget for better solvers
==> this is portfolio
Portfolio methods
I. Introduction: ML and portfolios
II. Portfolio methods: basics
III. Portfolio NOT in noisy optimization
IV. Portfolio methods in noisy optimization
Glossary (1)
● Offline: we decide the best in advance (from related
datasets)
● Online: dynamically
● Fair: all algs use the same budget
● Unfair: more for the (seemingly) best
Glossary (2)
● Sharing: algorithms can provide information to
each other
● Chaining: current best predictor / controller is
used by all algorithms (implies that all algorithms
have related representations)
● Parallel: each solver on (at least) one core
● Anytime algorithm: performs well in spite of not
knowing its computation time in advance
Competence map
● Competence map = mapping
(problem features PF, algorithm features AF )
==> CM(PF,AF)=predicted performance
● Choose algorithm = argmax CM(PF,.)
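A minimal sketch of a competence map as a plain lookup table (made-up performances; a real CM would be a learned regression over problem and algorithm features):
%            alg1 alg2 alg3
perf = [ 0.9  0.4  0.6;                % problem-feature bin 1 (e.g. small instances)
         0.5  0.8  0.6;                % bin 2
         0.3  0.5  0.9];               % bin 3
pf_bin = 2;                            % features of the new problem fall in bin 2
[~, chosen_alg] = max(perf(pf_bin, :)) % argmax CM(PF, .)  -> algorithm 2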
Portfolio methods
I. Introduction: ML and portfolios
II. Portfolio methods
III. Portfolio NOT in noisy optimization
IV. Portfolio methods in noisy optimization
Most classical portfolio:
combinatorial optimization
● Why combinatorial optimization ?
– Huge differences between the performances of algorithms (problem-dependent)
– Checking who is the best is easy
● Results:
– Stable, robust, competitive even against specialized tools in each category
– Routinely wins SAT competitions (cf. http://www.cs.ubc.ca/labs/beta/Projects/SATzilla/)
==> something has moved in AI :-)
(not only deep networks :-) )
1-2. What does the state of the art tell us about portfolios ?
No free lunch [Wolpert & many]: all algorithms perform equally on average over all optimization problems ==> so we can not find “one” optimal algorithm.
Does this justify portfolios ?
● Yes: there is no optimal algorithm (so ==> portfolio...).
● No: even the portfolio is not better :-)
3. What does the state of the art tell us about portfolios ?
Samulowitz and Memisevic: << use orthogonal solvers in your portfolio >>,
i.e. try not to have similar solvers in your portfolio ==> we will see a proof of this later.
4. What does the state of the art tell us about portfolios ?
What if all solvers are in fact one solver with various parametrizations ?
Then you have a structure, a metric, on your solvers ==> this is parameter tuning ==> bilevel optimization rather than really “portfolio”.
==> Here: an unstructured family of solvers.
==> See “portfolio / parameter tuning” in [Kotthoff 2012]
5. What does the state of the art tell us about portfolios ?
Unfair distribution of budget:
● 50% for the best, 25% for the second, 12.5% for the third, etc. [Gagliolo, Schmidhuber '05-'06]
● Finite time with fair budget, then only the best. [Pulina, Tacchella '09 + detailed comparison]
● 90% for the offline best, 10% for all the others (just in case!) [Kadioglu et al. '11]
● Best first, even if fair (if not parallel).
6. What does the state of the art tell us about portfolios ?
Parallelism is great & easy!
Many experiments in [Hamadi 2013].
7. What does the state of the art tell us about portfolios ?
Chaining & sharing: tricky but can work... we did not succeed much with that in ML :-(
[see Vassilevska et al. 06]
Portfolio methods
I. Introduction: ML and portfolios
II. Portfolio methods
III. Portfolio NOT in ML (optim, bio)
IV. Portfolio methods in noisy optimization
Portfolio in noisy optimization
Key difference: checking performance takes
time & is uncertain:
– hold out + test (supervised learning on dataset)
– or random generation of new test cases (control)
Comparing two candidates might take more time than
generating them!
Waow! How can comparing take more time than solving ? (Black-box case)
Example: parametric controller
● Black-box f(x,w): tests my controller
– with parameter x;
– with random processes = w
– and returns the performance
● Randomized black-box f(x):
– with parameter x
– with w randomly drawn
– and returns the performance
Black-box noisy optimization:
● I request f(x1), f(x2), ..., f(xn)
● I propose x
● The real optimum is x*
● My regret is Rn = f(x*) - f(x)
In many cases (papers by Fabian, Chen, Shamir) I get E Rn = O(1/n),
i.e. budget = 1/precision,
whereas comparing xA and xB:
● E f(xA) estimated with std O(1/sqrt(n))
● E f(xB) estimated with std O(1/sqrt(n))
==> budget = 1/precision² ! ! !
Why ? It should be easier!
No, because in the first case we sample “far” from x* and “learn” the shape of E f
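A quick Octave illustration of the comparison cost (toy numbers, unit noise): to separate two expected values that differ by epsilon, the standard error must be brought down to about epsilon, which costs about 1/epsilon^2 evaluations.
epsilon = 0.01;
n = round(1 / epsilon^2);               % ~10000 evaluations per point
fa = 0       + randn(n, 1);             % noisy evaluations of f(xA), true mean 0
fb = epsilon + randn(n, 1);             % noisy evaluations of f(xB), true mean epsilon
[mean(fa), mean(fb)]                    % the gap is only ~1 standard error wide
std(fa) / sqrt(n)                       % standard error ~ epsilon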
Waow! How can comparing take more time than solving ? (White-box case)
Example: parametric controller
● White-box f(x,w): tests my controller
– with parameter x;
– with random processes = w
– and returns the performance + the gradient
● Randomized black-box f(x):
– with parameter x
– with w randomly drawn
– and returns the performance + the gradient
Noisy optimization (now with gradients):
● I request f(x1), f(x2), ..., f(xn) + gradients
● I propose x
● The real optimum is x*
● My regret is Rn = f(x*) - f(x)
==> similar rates!
whereas comparing xA and xB:
● E f(xA) estimated with std O(1/sqrt(n))
● E f(xB) estimated with std O(1/sqrt(n))
==> budget = 1/precision² ! ! !
==> still true!
Key point
● There are many problems for which the natural criterion is noisy and leads to rates of the form regret = O(1/n^α)
● And the comparison is O(1/n^β)
==> Noisy optimization, with or without gradients; or just with comparisons;
==> or many maximum likelihood / ERM / SRM rates
ok, not always, but often :-)
So let us consider this case.
s(n), r(n) and lag(u)
Many solutions for writing portfolios. In all cases, you have roughly these parameters:
● How frequently you compare (r(n) = #evals at the nth comparison)
● How precisely you compare (s(n) = budget for the nth comparison; adaptive ? maybe, but no “race”, or you might spend a huge time)
● More surprising, whom you compare: old outputs (lag(u) << u is the index of the compared outputs), because old outputs are cheaper to compare!
Drawback: fair budget. ==> Lazy evaluation: do evaluations only when they are needed
● for a comparison (cheap because lag(u) << u)
● or for the output (cheap because only a few algos – the optimal ones)
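A minimal sketch (assumptions throughout; this is not the NOPA of the theorem below) of a noisy-optimization portfolio with these parameters: three (1+1)-style random-search solvers with different step sizes, a comparison point every r = 50 iterations that freezes (lagged) recommendations, and s = 100 re-evaluations per recommendation to pick the final output.
f = @(x, w) sum((x - 1).^2) + w;          % hypothetical noisy objective, optimum at x = 1
noisy = @(x) f(x, randn());               % one noisy evaluation
d = 5; m = 3;
sigmas = [1.0, 0.3, 0.1];                 % the three solvers differ only by step size
X = zeros(m, d);                          % current points
best = zeros(m, d);                       % lagged recommendations
for n = 1:200
  for i = 1:m                             % one step of each solver (fair budget)
    cand = X(i, :) + sigmas(i) * randn(1, d);
    if noisy(cand) < noisy(X(i, :)), X(i, :) = cand; end
  end
  if mod(n, 50) == 0                      % r(n): a comparison point every 50 iterations
    best = X;                             % lag: recommendations are frozen here
  end
end
s = 100;                                  % s(n): re-evaluations per recommendation
vals = zeros(m, 1);
for i = 1:m
  vals(i) = mean(arrayfun(@(j) noisy(best(i, :)), 1:s));
end
[~, ibest] = min(vals);
best(ibest, :)                            % portfolio output: the apparently best solver's recommendation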
Almost readable theorem
If solver i has regret (C(i)+o(1)) / n^α(i), and the comparison schedule (r(n), s(n), lag(u)) is suitably chosen, then:
● NOPA (noisy optimization portfolio algorithm) has the log(m) shift ;
● and INOPA (improved NOPA) has the log(m') shift.
Readable version: NOPA
There exist universal parameters such that
● if rates are f(xn) - f(x*) ~ 1/n^α
● then NOPA just has a log(m) shift (m = #solvers)
(Plot: log(regret) vs log(time); the portfolio curve is the best solver's curve, shifted by log(m).)
Readable version: INOPA
There exist universal parameters such that
● if rates are f(xn) - f(x*) ~ 1/n^α
● then INOPA has a log(m') shift (m' = #optimal solvers)
(Plot: log(regret) vs log(time); the portfolio curve is the best solver's curve, shifted by log(m').)
==> ORTHOGONALITY ! ! !
Conclusions
● Portfolio = a revolution in combinatorial optimization
– Simple
– Compliant with parallelism
– Robust (even against bugs!)
– Winning many competitions
● In noisy optimization
– Can be more tricky, because comparing is expensive
– But great for robustness; ML is so variable!
Power systems and uncertainties
Bias correction: a different path to robustness; easy to apply, but expensive (parallel); at least, check your bias (cross-validation).
Scenarios: Nash = more complicated, but natural, and it selects scenarios. But: random recommendation :-)
Noisy optimization: a new application for portfolio methods.
Energy transition debate
Power systems / energy transition
● Public debate is very dirty
● Claims based on proprietary codes/studies (or no
study at all...)
● It's so easy to cheat and get the numbers you want
==> duty: prefer open source / open data
More Related Content

Viewers also liked

Multimodal or Expensive Optimization
Multimodal or Expensive OptimizationMultimodal or Expensive Optimization
Multimodal or Expensive OptimizationOlivier Teytaud
 
Noisy optimization --- (theory oriented) Survey
Noisy optimization --- (theory oriented) SurveyNoisy optimization --- (theory oriented) Survey
Noisy optimization --- (theory oriented) SurveyOlivier Teytaud
 
Tools for artificial intelligence
Tools for artificial intelligenceTools for artificial intelligence
Tools for artificial intelligenceOlivier Teytaud
 
Meta Monte-Carlo Tree Search
Meta Monte-Carlo Tree SearchMeta Monte-Carlo Tree Search
Meta Monte-Carlo Tree SearchOlivier Teytaud
 
Energy Management Forum, Tainan 2012
Energy Management Forum, Tainan 2012Energy Management Forum, Tainan 2012
Energy Management Forum, Tainan 2012Olivier Teytaud
 
Introduction to the TAO Uct Sig, a team working on computational intelligence...
Introduction to the TAO Uct Sig, a team working on computational intelligence...Introduction to the TAO Uct Sig, a team working on computational intelligence...
Introduction to the TAO Uct Sig, a team working on computational intelligence...Olivier Teytaud
 
Inteligencia Artificial y Go
Inteligencia Artificial y GoInteligencia Artificial y Go
Inteligencia Artificial y GoOlivier Teytaud
 
Computers and Killall-Go
Computers and Killall-GoComputers and Killall-Go
Computers and Killall-GoOlivier Teytaud
 
Stochastic modelling and quasi-random numbers
Stochastic modelling and quasi-random numbersStochastic modelling and quasi-random numbers
Stochastic modelling and quasi-random numbersOlivier Teytaud
 
Machine learning 2016: deep networks and Monte Carlo Tree Search
Machine learning 2016: deep networks and Monte Carlo Tree SearchMachine learning 2016: deep networks and Monte Carlo Tree Search
Machine learning 2016: deep networks and Monte Carlo Tree SearchOlivier Teytaud
 
Complexity of planning and games with partial information
Complexity of planning and games with partial informationComplexity of planning and games with partial information
Complexity of planning and games with partial informationOlivier Teytaud
 
Combining UCT and Constraint Satisfaction Problems for Minesweeper
Combining UCT and Constraint Satisfaction Problems for MinesweeperCombining UCT and Constraint Satisfaction Problems for Minesweeper
Combining UCT and Constraint Satisfaction Problems for MinesweeperOlivier Teytaud
 
Theories of continuous optimization
Theories of continuous optimizationTheories of continuous optimization
Theories of continuous optimizationOlivier Teytaud
 
Dynamic Optimization without Markov Assumptions: application to power systems
Dynamic Optimization without Markov Assumptions: application to power systemsDynamic Optimization without Markov Assumptions: application to power systems
Dynamic Optimization without Markov Assumptions: application to power systemsOlivier Teytaud
 

Viewers also liked (16)

Multimodal or Expensive Optimization
Multimodal or Expensive OptimizationMultimodal or Expensive Optimization
Multimodal or Expensive Optimization
 
Noisy optimization --- (theory oriented) Survey
Noisy optimization --- (theory oriented) SurveyNoisy optimization --- (theory oriented) Survey
Noisy optimization --- (theory oriented) Survey
 
Tools for artificial intelligence
Tools for artificial intelligenceTools for artificial intelligence
Tools for artificial intelligence
 
Meta Monte-Carlo Tree Search
Meta Monte-Carlo Tree SearchMeta Monte-Carlo Tree Search
Meta Monte-Carlo Tree Search
 
Energy Management Forum, Tainan 2012
Energy Management Forum, Tainan 2012Energy Management Forum, Tainan 2012
Energy Management Forum, Tainan 2012
 
Introduction to the TAO Uct Sig, a team working on computational intelligence...
Introduction to the TAO Uct Sig, a team working on computational intelligence...Introduction to the TAO Uct Sig, a team working on computational intelligence...
Introduction to the TAO Uct Sig, a team working on computational intelligence...
 
Inteligencia Artificial y Go
Inteligencia Artificial y GoInteligencia Artificial y Go
Inteligencia Artificial y Go
 
Statistics 101
Statistics 101Statistics 101
Statistics 101
 
Computers and Killall-Go
Computers and Killall-GoComputers and Killall-Go
Computers and Killall-Go
 
Stochastic modelling and quasi-random numbers
Stochastic modelling and quasi-random numbersStochastic modelling and quasi-random numbers
Stochastic modelling and quasi-random numbers
 
Machine learning 2016: deep networks and Monte Carlo Tree Search
Machine learning 2016: deep networks and Monte Carlo Tree SearchMachine learning 2016: deep networks and Monte Carlo Tree Search
Machine learning 2016: deep networks and Monte Carlo Tree Search
 
Complexity of planning and games with partial information
Complexity of planning and games with partial informationComplexity of planning and games with partial information
Complexity of planning and games with partial information
 
Combining UCT and Constraint Satisfaction Problems for Minesweeper
Combining UCT and Constraint Satisfaction Problems for MinesweeperCombining UCT and Constraint Satisfaction Problems for Minesweeper
Combining UCT and Constraint Satisfaction Problems for Minesweeper
 
Theories of continuous optimization
Theories of continuous optimizationTheories of continuous optimization
Theories of continuous optimization
 
3slides
3slides3slides
3slides
 
Dynamic Optimization without Markov Assumptions: application to power systems
Dynamic Optimization without Markov Assumptions: application to power systemsDynamic Optimization without Markov Assumptions: application to power systems
Dynamic Optimization without Markov Assumptions: application to power systems
 

Similar to Power systems and uncertainties revisited through games

mcp-bandits.pptx
mcp-bandits.pptxmcp-bandits.pptx
mcp-bandits.pptxBlackrider9
 
CMU Lecture on Hadoop Performance
CMU Lecture on Hadoop PerformanceCMU Lecture on Hadoop Performance
CMU Lecture on Hadoop PerformanceMapR Technologies
 
pptx - Psuedo Random Generator for Halfspaces
pptx - Psuedo Random Generator for Halfspacespptx - Psuedo Random Generator for Halfspaces
pptx - Psuedo Random Generator for Halfspacesbutest
 
pptx - Psuedo Random Generator for Halfspaces
pptx - Psuedo Random Generator for Halfspacespptx - Psuedo Random Generator for Halfspaces
pptx - Psuedo Random Generator for Halfspacesbutest
 
Randomized Algorithm- Advanced Algorithm
Randomized Algorithm- Advanced AlgorithmRandomized Algorithm- Advanced Algorithm
Randomized Algorithm- Advanced AlgorithmMahbubur Rahman
 
STLtalk about statistical analysis and its application
STLtalk about statistical analysis and its applicationSTLtalk about statistical analysis and its application
STLtalk about statistical analysis and its applicationJulieDash5
 
Techniques in Deep Learning
Techniques in Deep LearningTechniques in Deep Learning
Techniques in Deep LearningSourya Dey
 
The Concurrent Constraint Programming Research Programmes -- Redux (part2)
The Concurrent Constraint Programming Research Programmes -- Redux (part2)The Concurrent Constraint Programming Research Programmes -- Redux (part2)
The Concurrent Constraint Programming Research Programmes -- Redux (part2)Pierre Schaus
 
Computer Vision: Feature matching with RANSAC Algorithm
Computer Vision: Feature matching with RANSAC AlgorithmComputer Vision: Feature matching with RANSAC Algorithm
Computer Vision: Feature matching with RANSAC Algorithmallyn joy calcaben
 
Parallelising Dynamic Programming
Parallelising Dynamic ProgrammingParallelising Dynamic Programming
Parallelising Dynamic ProgrammingRaphael Reitzig
 
The world of loss function
The world of loss functionThe world of loss function
The world of loss function홍배 김
 
Limits of Computation
Limits of ComputationLimits of Computation
Limits of ComputationJoshua Reuben
 
The Limits of Computation
The Limits of ComputationThe Limits of Computation
The Limits of ComputationJoshua Reuben
 
Aaron Roth, Associate Professor, University of Pennsylvania, at MLconf NYC 2017
Aaron Roth, Associate Professor, University of Pennsylvania, at MLconf NYC 2017Aaron Roth, Associate Professor, University of Pennsylvania, at MLconf NYC 2017
Aaron Roth, Associate Professor, University of Pennsylvania, at MLconf NYC 2017MLconf
 
机器学习Adaboost
机器学习Adaboost机器学习Adaboost
机器学习AdaboostShocky1
 
Introduction to conventional machine learning techniques
Introduction to conventional machine learning techniquesIntroduction to conventional machine learning techniques
Introduction to conventional machine learning techniquesXavier Rafael Palou
 
Machine Learning lecture4(logistic regression)
Machine Learning lecture4(logistic regression)Machine Learning lecture4(logistic regression)
Machine Learning lecture4(logistic regression)cairo university
 
44 randomized-algorithms
44 randomized-algorithms44 randomized-algorithms
44 randomized-algorithmsAjitSaraf1
 
Svm map reduce_slides
Svm map reduce_slidesSvm map reduce_slides
Svm map reduce_slidesSara Asher
 

Similar to Power systems and uncertainties revisited through games (20)

mcp-bandits.pptx
mcp-bandits.pptxmcp-bandits.pptx
mcp-bandits.pptx
 
CMU Lecture on Hadoop Performance
CMU Lecture on Hadoop PerformanceCMU Lecture on Hadoop Performance
CMU Lecture on Hadoop Performance
 
pptx - Psuedo Random Generator for Halfspaces
pptx - Psuedo Random Generator for Halfspacespptx - Psuedo Random Generator for Halfspaces
pptx - Psuedo Random Generator for Halfspaces
 
pptx - Psuedo Random Generator for Halfspaces
pptx - Psuedo Random Generator for Halfspacespptx - Psuedo Random Generator for Halfspaces
pptx - Psuedo Random Generator for Halfspaces
 
Randomized Algorithm- Advanced Algorithm
Randomized Algorithm- Advanced AlgorithmRandomized Algorithm- Advanced Algorithm
Randomized Algorithm- Advanced Algorithm
 
STLtalk about statistical analysis and its application
STLtalk about statistical analysis and its applicationSTLtalk about statistical analysis and its application
STLtalk about statistical analysis and its application
 
Techniques in Deep Learning
Techniques in Deep LearningTechniques in Deep Learning
Techniques in Deep Learning
 
The Concurrent Constraint Programming Research Programmes -- Redux (part2)
The Concurrent Constraint Programming Research Programmes -- Redux (part2)The Concurrent Constraint Programming Research Programmes -- Redux (part2)
The Concurrent Constraint Programming Research Programmes -- Redux (part2)
 
Direct policy search
Direct policy searchDirect policy search
Direct policy search
 
Computer Vision: Feature matching with RANSAC Algorithm
Computer Vision: Feature matching with RANSAC AlgorithmComputer Vision: Feature matching with RANSAC Algorithm
Computer Vision: Feature matching with RANSAC Algorithm
 
Parallelising Dynamic Programming
Parallelising Dynamic ProgrammingParallelising Dynamic Programming
Parallelising Dynamic Programming
 
The world of loss function
The world of loss functionThe world of loss function
The world of loss function
 
Limits of Computation
Limits of ComputationLimits of Computation
Limits of Computation
 
The Limits of Computation
The Limits of ComputationThe Limits of Computation
The Limits of Computation
 
Aaron Roth, Associate Professor, University of Pennsylvania, at MLconf NYC 2017
Aaron Roth, Associate Professor, University of Pennsylvania, at MLconf NYC 2017Aaron Roth, Associate Professor, University of Pennsylvania, at MLconf NYC 2017
Aaron Roth, Associate Professor, University of Pennsylvania, at MLconf NYC 2017
 
机器学习Adaboost
机器学习Adaboost机器学习Adaboost
机器学习Adaboost
 
Introduction to conventional machine learning techniques
Introduction to conventional machine learning techniquesIntroduction to conventional machine learning techniques
Introduction to conventional machine learning techniques
 
Machine Learning lecture4(logistic regression)
Machine Learning lecture4(logistic regression)Machine Learning lecture4(logistic regression)
Machine Learning lecture4(logistic regression)
 
44 randomized-algorithms
44 randomized-algorithms44 randomized-algorithms
44 randomized-algorithms
 
Svm map reduce_slides
Svm map reduce_slidesSvm map reduce_slides
Svm map reduce_slides
 

Recently uploaded

Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...srsj9000
 
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝soniya singh
 
Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...VICTOR MAESTRE RAMIREZ
 
microprocessor 8085 and its interfacing
microprocessor 8085  and its interfacingmicroprocessor 8085  and its interfacing
microprocessor 8085 and its interfacingjaychoudhary37
 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionDr.Costas Sachpazis
 
GDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSCAESB
 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxwendy cai
 
HARMONY IN THE HUMAN BEING - Unit-II UHV-2
HARMONY IN THE HUMAN BEING - Unit-II UHV-2HARMONY IN THE HUMAN BEING - Unit-II UHV-2
HARMONY IN THE HUMAN BEING - Unit-II UHV-2RajaP95
 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVRajaP95
 
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130Suhani Kapoor
 
Microscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxMicroscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxpurnimasatapathy1234
 
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSAPPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSKurinjimalarL3
 
Heart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptxHeart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptxPoojaBan
 
power system scada applications and uses
power system scada applications and usespower system scada applications and uses
power system scada applications and usesDevarapalliHaritha
 
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130Suhani Kapoor
 
Past, Present and Future of Generative AI
Past, Present and Future of Generative AIPast, Present and Future of Generative AI
Past, Present and Future of Generative AIabhishek36461
 
Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.eptoze12
 

Recently uploaded (20)

Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
 
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
Model Call Girl in Narela Delhi reach out to us at 🔝8264348440🔝
 
Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...Software and Systems Engineering Standards: Verification and Validation of Sy...
Software and Systems Engineering Standards: Verification and Validation of Sy...
 
🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
 
microprocessor 8085 and its interfacing
microprocessor 8085  and its interfacingmicroprocessor 8085  and its interfacing
microprocessor 8085 and its interfacing
 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
 
GDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentation
 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptx
 
HARMONY IN THE HUMAN BEING - Unit-II UHV-2
HARMONY IN THE HUMAN BEING - Unit-II UHV-2HARMONY IN THE HUMAN BEING - Unit-II UHV-2
HARMONY IN THE HUMAN BEING - Unit-II UHV-2
 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
 
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
 
Microscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxMicroscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptx
 
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSAPPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
 
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCRCall Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
 
Heart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptxHeart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptx
 
power system scada applications and uses
power system scada applications and usespower system scada applications and uses
power system scada applications and uses
 
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
 
Past, Present and Future of Generative AI
Past, Present and Future of Generative AIPast, Present and Future of Generative AI
Past, Present and Future of Generative AI
 
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
 
Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.
 

Power systems and uncertainties revisited through games

  • 1. Power systems and uncertainties: revisited through games Intro: uncertainties in PS Power systems. Bias, bias correction: by resampling. Non-stochastic uncertainties: Nash method. If time enough, portfolio methods (?).
  • 2. 2 Power systems : key issues ● Real time: Network topology optimization ● Short term: When to use to hydroelectricity ? ● Long term : – Should we build big connections France-XXX ? – Should we build big wind farms ? ==> with X billions, what should I do ? Goal: build tools and answer these questions
  • 3. 3 Power systems : old and new ● Old: production → transmission → distribution ● New: all mixed + more uncertainties + renewable + storage - CO 2 ? + distributed + demand-side + HVDC ? ? ?
  • 4. 4 Power systems Argmax U argmax E reward investment techn. policy weather geopol.
  • 5. 5 Power systems Argmax U argmax E reward investment techn. policy weather geopol.
  • 6. 6 Power systems Argmax U argmax E reward investment techn. policy weather geopol. (+ load, faults, ...)
  • 7. 7 Power systems Argmax U argmax E reward investment techn. policy weather geopol.
  • 8. 8 Power systems Argmax U argmax E reward investment techn. policy weather geopol.
  • 9. 9 Power systems Argmax U argmax E reward investment techn. policy weather geopol.
  • 10. 10 Power systems Argmax U argmax E reward investment techn. policy weather geopol. Paral. stochastic optim.
  • 11. 11 Power systems Argmax U argmax E reward investment techn. policy weather geopol. Wald, Savage, Nash Paral. stochastic optim.
  • 12. 12 Power systems Argmax U argmax E reward investment techn. policy weather geopol. Wald, Savage, Nash Paral. stochastic optim. Discrete time control
  • 13. 13 Power systems Argmax U argmax E reward investment techn. policy weather geopol. Wald, Savage, Nash Paral. stochastic optim. Discrete time control Bias control
  • 14. 14 Power systems Argmax U argmax E reward investment techn. policy weather geopol. Wald, Savage, Nash Paral. stochastic optim. Discrete time control Bias control High dimensional M-estimation Model-selection ``derandomization''
  • 15. 15 Power systems Argmax U argmax E reward investment techn. policy weather geopol. Wald, Savage, Nash Paral. stochastic optim. Discrete time control Bias control High dimensional M-estimation Model-selection ``derandomization'' ``very'' open issue. 10 years ok. What about 40 years ?
  • 16. 16 Various uncertainties ● Data unavailable ● Stochastic – Weather <== average on data is (≈) ok – Faults <== average on data is not ok (otherwise many lines have fault probability zero) ● Non stochastic (truth hidden or partly adversarial) – Technology – Gas curtailment – CO2 penalization
  • 17. Power systems and uncertainties Bias correction Non stochastic uncertainties Noisy optimization When average on data; uncertainty = coverage When we have a model of stoc. uncertainties Oh my god how to deal with that ?
  • 18. Bias correction Maximum height, from a finite sample: The maximum height of 13 randomly drawn humans is less than the maximum height of all humans.
  • 19. Bias correction, more surprising 95% quantile of height, from a finite sample: The .95 quantile of height of 13 randomly drawn humans is less than the .95 quantile of all humans. R=[];for i=1:1000;x=randn(10000,1);q=empirical_inv(.95,x);aq=empirical_inv(.95,x(1:100));R=[R;q,aq];end; mean(R(:,1)>R(:,2))
  • 20. Bias correction, capacity ● Optimal capacity ? Optimal capacities = argmin E cost(x,random) approx. capacities = argmin ( cost(x,r1) + … + cost(x,rn) ) ● Also biased! Usually, approx. capacities << opt. capacities. ● The optimal capacity on average over a sample is less than the “real” optimal capacity. Surprising ?
  • 21. Bias correction by bootstrap, ideas Definitions: ● x0 = Real optimum, against the real (unknown) distribution ● x1 = Optimum against the sample ● x2 = Optimum against a resample of the sample Idea: ● x2 is to x1, what x1 is to x0. (Efron) ● So E(x2-x1) = E(x1-x0) (approx) ● So Ex0 = 2Ex1 - Ex2 (approx) ● So let us use 2x1 – Ex2 (expectation over resamplings) N among N
  • 22. Bias correction by bootstrap Standard approximate minimizes: Bootstrap correction: Classical approximation (twice!) Bootstrap Version.
  • 23. Bias correction by Jackknife (often better, for bias correction) Standard approximate minimizes: Jackknife correction: Optimum on all but one
  • 24. Cross validation ● For example, 10 scenarios (possible winters) ● You have an algorithm for choosing the optimal capacity. ● How to check that it works ? ● Bad solution: apply your algorithm on the 10 scenarios. ● Better solution: apply your algorithm on 9 scenarios, check on the 10th (and then rotate) ==> 100% mandatory in machine learning research ==> not widely used in capacity expansion planning
  • 25. Bias correction: many tricks ● Apply both the classical estimate, the new one, variants of the new one, and choose between them by cross-validation <== excellent, but expensive ● Average the bias correction over multiple homogeneous capacities ● Prefer the correction only if crossval predicts a clear success (at least 10%) <== safe Good trick when you want a new tool to be accepted by users
  • 27. Bias & bias correction Main take-home messages: – The cost of stochastic systems is usually underestimated (bias 1) – Optimal capacities are usually underestimated (bias 2) A simple alternate method: check by cross-validation if +1%, +2%, +4%, +8%, +16% are better than +0%
  • 28. Power systems and uncertainties Non stochastic uncertainties: Wald, Savage and Nash
  • 29. The problem: non probabilistic uncertainties We have plenty of options, there are plenty of possible scenarios. No probability + computing one entry takes time. What should we decide ? SCENARIOS OPTIONS
  • 30. Non stochastic uncertainties ● Uncertainties: – Stochastic: weather, load – Not stochastic: CO2 penalization, gas curtailment... (typically, scenario) ● Model: R(k,s) = reward if option k, scenario s ● How to deal with it ? – Scenario based: human checks all R(k,s), make a decision :-) – Wald: choose k maximizing min R(k,.) <== worst case – Savage: ● R'(k,s)= max R(.,s) – R(k,s) ● k minimizing max R'(k,.) <== best “regret” In both cases, this is optimal when nature decides after us and knows what we have chosen: ● Wald: Nature knows and hates us ● Savage: Nature wants us to have regrets
  • 31. Doesn't it look like a game ? We are ok for “Nature” to be adversarial, but not an opponent who knows our decision ==> Nature decides first, privately. 0 1 -1 -1 0 1 1 -1 0 Rock / paper/scissor Rock/paper/scissor
  • 32. Non stochastic uncertainties a (hard to read) proposal Nash: s (probabilistic) such that max E R(.,s) minimal eq. k (probabilistic!) such that min E R(k,.) maximal ==> worst case, for a Nature who does not know our decision ==> slightly philosophical, do you prefer Wald, Savage, or Nash, or a expert decision ?
  • 33. Non stochastic uncertainties clearer proposal (hopefully) Given: ● A list of “s” scenarios (possibly huge) ● A list of “k” options (possibly huge) ● A function R(.,.) Output: ● a sublist of scenarios (approx. Nash optimal), ● a sublist of options (approx. Nash optimal), ● probabilities on them ==> cheaper than Wald / Savage, and does selection Comp. Time ~ m log(m) where m= #K + #S
  • 34. K policies, S scenarios (no probabilities), how to make a decision ?
  • 35. Short version Intuitively cool: we naturally select a subset of scenarios / options, and Nature decides privately, before us. Practically cool: fast. Problem: output = stochastic decision (you decide nuclear power plants with a dice ?) SCENARIOS OPTIONS Selected subset
  • 36. Power systems and uncertainties WHEN UNCERTAINTIES ARE A STOCHASTIC MODEL Our experience: - noisy optimization is a nightmare - hard to find the best algorithm - don't choose: combine ==> portfolio
  • 37. Noisy optimization ● Two families of optimization principles – Optimizing in front of a (corrected) archive of possibilities – Optimizing in front of a stochastic model ● Pros/cons: – Archive = real data – Model = can deal with extreme events and new scenarios <== faults in networks (the death of the N-1 criterion ?) ==> this part, **robust** noisy optimization, thanks to portfolio methods
  • 38. Portfolio methods in noisy optimization I. Introduction: portfolios II.Portfolio methods III. Portfolio NOT in noisy optimization IV. Portfolio methods in noisy optimization
  • 39. Who is the best ? ==> portfolio methods (portfolio = pissing contest...) Alg. 1 Alg. 2 Alg. 3 Alg. 4 Games MC (Monte Carlo) MCTS Alphabeta DPS (direct policy search) Control PID MCTS Fuzzy Bellman-style Supervised ML on R^d NeuralNet SVM Fuzzy Decision trees Linear reg. Gen. linear regression ... Feature selection Feature ranking Embedded approaches Optimization Bandits UCB variants Uniform SR (successive reject) ==> The best algorithm is not always the same, by far. How to design robust algorithms ? FPS
  • 40. Who is the best ? (pissing contest...) Alg. 1 Alg. 2 Alg. 3 Alg. 4 Games MC (Monte Carlo) MCTS Alphabeta DPS (direct policy search) Control PID MCTS Fuzzy Bellman-style Supervised ML on R^d NeuralNet SVM Fuzzy Decision trees Linear reg. Gen. linear regression ... Feature selection Feature ranking Embedded approaches Optimization Bandits UCB variants Uniform SR (successive reject) ==> The best algorithm is not always the same, by far. How to design robust algorithms ? Chess
  • 41. Who is the best ? (pissing contest...) Alg. 1 Alg. 2 Alg. 3 Alg. 4 Games MC (Monte Carlo) MCTS Alphabeta DPS (direct policy search) Control PID MCTS Fuzzy Bellman-style Supervised ML on R^d NeuralNet SVM Fuzzy Decision trees Linear reg. Gen. linear regression ... Feature selection Feature ranking Embedded approaches Optimization Bandits UCB variants Uniform SR (successive reject) ==> The best algorithm is not always the same, by far. How to design robust algorithms ? Go, Hex
  • 42. Who is the best ? (pissing contest...) Alg. 1 Alg. 2 Alg. 3 Alg. 4 Games MC (Monte Carlo) MCTS Alphabeta DPS (direct policy search) Control PID MCTS Fuzzy Bellman-style Supervised ML on R^d NeuralNet SVM Fuzzy Decision trees Linear reg. Gen. linear regression ... Feature selection Feature ranking Embedded approaches Optimization Bandits UCB variants Uniform SR (successive reject) ==> The best algorithm is not always the same, by far. How to design robust algorithms ? Fog of war
  • 43. Who is the best ? (pissing contest...) Alg. 1 Alg. 2 Alg. 3 Alg. 4 Games MC (Monte Carlo) MCTS Alphabeta DPS (direct policy search) Control PID MCTS Fuzzy Bellman-style Supervised ML on R^d NeuralNet SVM Fuzzy Decision trees Linear reg. Gen. linear regression ... Feature selection Feature ranking Embedded approaches Optimization Bandits UCB variants Uniform SR (successive reject) ==> The best algorithm is not always the same, by far. How to design robust algorithms ? For publishing
  • 44. Who is the best ? (pissing contest...) Alg. 1 Alg. 2 Alg. 3 Alg. 4 Games MC (Monte Carlo) MCTS Alphabeta DPS (direct policy search) Control PID MCTS Fuzzy Bellman-style Supervised ML on R^d NeuralNet SVM Fuzzy Decision trees Linear reg. Gen. linear regression ... Feature selection Feature ranking Embedded approaches Optimization Bandits UCB variants Uniform SR (successive reject) ==> The best algorithm is not always the same, by far. How to design robust algorithms ? When it must work
  • 45. The design of robust algorithms ● Do not run only one algorithm. ● Run several algorithms, concurrently.
  • 46. The design of robust algorithms ● Do not run only one algorithm. ● Run several algorithms, concurrently. ● Pick up the best response.
  • 47. The design of robust algorithms ● Do not run only one algorithm. ● Run several algorithms, concurrently. ● Pick up the best response. if for all problems, out of 11 algorithms: ten of the algorithms need >= 40 hours to complete, but one algorithm completes in 10 minutes (not always the same)
  • 48. The design of robust algorithms ● Do not run only one algorithm. ● Run several algorithms, concurrently. ● Pick up the best response. if for all problems, out of 11 algorithms: ten of the algorithms need >= 40 hours to complete, but one algorithm completes in 10 minutes (not always the same) Then: ● Average time =110 minutes (10 minutes, multiplied by 11 for concur.) + no variance
  • 49. The design of robust algorithms ● Do not run only one algorithm. ● Run several algorithms, concurrently. ● Pick up the best response. if for all problems, out of 11 algorithms: ten of the algorithms need >= 40 hours to complete, but one algorithm completes in 10 minutes (not always the same) Then: ● Average time =110 minutes (10 minutes, multiplied by 11 for concur.) + no variance ● Whereas with sequential runs a.t. = 200 hours+10minutes + variance
  • 50-51. Portfolio methods: you already do it
Usually, we compare algorithms offline, on large datasets. Possible improvements:
● online comparison ?
● problem-specific comparison ?
● hybridization ?
● more budget for the better solvers ?
==> this is a portfolio.
  • 52. Portfolio methods
I. Introduction: ML and portfolios
II. Portfolio methods: basics
III. Portfolios NOT in noisy optimization
IV. Portfolio methods in noisy optimization
  • 53-54. Glossary (1)
● Offline: we decide on the best algorithm in advance (from related datasets).
● Online: we decide dynamically, while solving.
● Fair: all algorithms use the same budget.
● Unfair: more budget for the (seemingly) best.
  • 55-58. Glossary (2)
● Sharing: algorithms can provide information to each other.
● Chaining: the current best predictor / controller is used by all algorithms (implies that all algorithms have related representations).
● Parallel: each solver runs on (at least) one core.
● Anytime: the algorithm performs well in spite of not knowing its computation budget in advance.
  • 59. Competence map
● Competence map = mapping from (problem features PF, algorithm features AF) to CM(PF, AF) = predicted performance.
● Chosen algorithm = argmax over algorithms of CM(PF, .).
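As an illustration, a minimal competence-map sketch (assuming scikit-learn is available; the features, the performance data and the choice of a random forest as CM are placeholders, not the method advocated here):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical offline benchmark: for each recorded run we know the problem
# features, which algorithm was used, and the observed performance.
rng = np.random.default_rng(0)
n_runs, n_prob_feats, n_algos = 500, 5, 4
problem_features = rng.normal(size=(n_runs, n_prob_feats))
algo_ids = rng.integers(0, n_algos, size=n_runs)
performance = rng.normal(size=n_runs)          # placeholder measurements

# Competence map CM(PF, AF) -> predicted performance, learned by regression
# on (problem features, one-hot algorithm features).
X = np.hstack([problem_features, np.eye(n_algos)[algo_ids]])
cm = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, performance)

def choose_algorithm(pf):
    """Return argmax over algorithms of CM(pf, .)."""
    candidates = np.hstack([np.tile(pf, (n_algos, 1)), np.eye(n_algos)])
    return int(np.argmax(cm.predict(candidates)))

print("chosen algorithm:", choose_algorithm(rng.normal(size=n_prob_feats)))
```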
  • 60. Portfolio methods
I. Introduction: ML and portfolios
II. Portfolio methods
III. Portfolios NOT in noisy optimization
IV. Portfolio methods in noisy optimization
  • 61-62. Most classical portfolio: combinatorial optimization
● Why combinatorial optimization ?
– Huge differences between the performances of algorithms (problem-dependent).
– Checking who is the best is easy.
● Results:
– Stable, robust, competitive even against the specialized tools of each category.
– Routinely wins SAT competitions (cf. http://www.cs.ubc.ca/labs/beta/Projects/SATzilla/).
==> something has moved in AI :-) (not only deep networks :-) )
  • 63-64. What does the state of the art tell us about portfolios ? (1-2)
No free lunch [Wolpert & many]: all algorithms perform equally on average over all optimization problems ==> so we cannot find "one" optimal algorithm. Does this justify portfolios ?
● Yes: there is no optimal algorithm (so ==> portfolio...).
● No: under the same NFL assumptions, the portfolio is itself an algorithm, so it is not better either :-)
  • 65-66. What does the state of the art tell us about portfolios ? (3)
Samulowitz and Memisevic: << use orthogonal solvers in your portfolio >>, i.e. try not to have similar solvers in your portfolio.
==> we will see a proof of this later.
  • 67. What does the state of the art tell us about portfolios ? (4)
What if all solvers are in fact one solver with various parametrizations ? Then you have a structure, a metric, on your solvers:
==> this is parameter tuning, i.e. bilevel optimization, rather than really a "portfolio".
==> Here, we consider an unstructured family of solvers.
==> See "portfolio / parameter tuning" in [Kotthoff 2012].
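For contrast, a toy sketch of the "parameter tuning / bilevel" setting mentioned above (the solver, its single parameter and the grid are all hypothetical): the outer level exploits the structure of the parameter space, which a portfolio of unrelated solvers does not have.

```python
import random

def run_solver(step_size, seed):
    """Hypothetical noisy solver: its expected quality depends smoothly on
    step_size (best around 0.3), with some run-to-run noise."""
    rnd = random.Random(seed)
    return -(step_size - 0.3) ** 2 + 0.05 * rnd.random()

# Bilevel optimization: the outer level tunes the parameter (plain grid
# search here), the inner level runs the solver a few times per value.
grid = [i / 20 for i in range(1, 20)]
mean_score = {s: sum(run_solver(s, seed) for seed in range(30)) / 30 for s in grid}
print("tuned step size:", max(mean_score, key=mean_score.get))
# A portfolio, by contrast, has no metric between its members: it just runs
# a handful of unrelated solvers and keeps the best response.
```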
  • 68-71. What does the state of the art tell us about portfolios ? (5)
Unfair distribution of budget:
● 50% for the best, 25% for the second, 12.5% for the third, etc. [Gagliolo, Schmidhuber '05-'06]
● A finite period with a fair budget, then only the best. [Pulina, Tacchella '09 + detailed comparison]
● 90% for the offline best, 10% shared among all the others (just in case!) [Kadioglu et al. '11]
● Best first, even with a fair budget (if not parallel).
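A minimal sketch of the first ("geometric") allocation rule in the list above, with a hypothetical interface where solvers are already ranked by current performance:

```python
def unfair_budgets(ranked_solvers, total_budget):
    """50% of the weight to the best-ranked solver, 25% to the second,
    12.5% to the third, etc., then normalize so the whole budget is used."""
    weights = [0.5 ** (rank + 1) for rank in range(len(ranked_solvers))]
    total_weight = sum(weights)
    return {solver: round(total_budget * w / total_weight)
            for solver, w in zip(ranked_solvers, weights)}

print(unfair_budgets(["best", "second", "third", "fourth"], 1000))
# -> {'best': 533, 'second': 267, 'third': 133, 'fourth': 67}
```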
  • 72. What does the state of the art tell us about portfolios ? (6)
Parallelism is great & easy! Many experiments in [Hamadi 2013].
  • 73. What does the state of the art tell us about portfolios ? (7)
Chaining & sharing: tricky, but can work... we did not succeed much with that in ML :-( [see Vassilevska et al. '06]
  • 74. Portfolio methods
I. Introduction: ML and portfolios
II. Portfolio methods
III. Portfolios NOT in ML (optim, bio)
IV. Portfolio methods in noisy optimization
  • 75. Portfolio in noisy optimization
Key difference: checking performance takes time & is uncertain:
– hold-out + test set (supervised learning on a dataset),
– or random generation of new test cases (control).
Comparing two candidates might take more time than generating them!
  • 76. Waow! How can comparing take more time than solving ? (black-box case)
Example: parametric controller.
● Black-box f(x, w): tests my controller
– with parameter x,
– with random processes = w,
– and returns the performance.
● Randomized black-box f(x):
– with parameter x,
– with w randomly drawn,
– and returns the performance.
Black-box noisy optimization:
● I request f(x1), f(x2), ..., f(xn);
● I propose x;
● the real optimum is x*;
● my regret is Rn = E f(x) - E f(x*).
In many cases (papers by Fabian, Chen, Shamir) I get E Rn = O(1/n), i.e. budget = 1/precision.
Whereas comparing xA and xB:
● E f(xA) is estimated with std O(1/sqrt(n));
● E f(xB) is estimated with std O(1/sqrt(n));
==> budget = 1/precision² ! ! !
Why ? It should be easier! No, because in the optimization case we sample "far" from x* and "learn" the shape of E f.
  • 77. Waow! How can comparing take more time than solving ? (white-box case)
Example: parametric controller.
● White-box f(x, w): tests my controller
– with parameter x,
– with random processes = w,
– and returns the performance + the gradient.
● Randomized black-box f(x):
– with parameter x,
– with w randomly drawn,
– and returns the performance + the gradient.
Noisy optimization with gradients:
● I request f(x1), f(x2), ..., f(xn) + gradients;
● I propose x;
● the real optimum is x*;
● my regret is Rn = E f(x) - E f(x*).
==> similar rates!
Whereas comparing xA and xB:
● E f(xA) and E f(xB) are still estimated with std O(1/sqrt(n));
==> budget = 1/precision² ! ! ! Still true!
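A quick Monte Carlo illustration of the "budget = 1/precision²" side of the argument (a toy objective with unit-variance noise; nothing power-system specific): the standard deviation of the estimated gap between two candidates only shrinks like 1/sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.01                              # true gap between E f(xA) and E f(xB)

def noisy_f(mean, n):
    """n noisy evaluations of a toy objective with E f = mean, noise std = 1."""
    return mean + rng.normal(size=n)

for n in (10**2, 10**4, 10**6):
    gap_estimate = noisy_f(1.0 + eps, n).mean() - noisy_f(1.0, n).mean()
    # std of the estimated gap ~ sqrt(2/n): the sign of the comparison only
    # becomes reliable once n is of the order of 1/eps**2 evaluations.
    print(f"n = {n:>7}: estimated gap = {gap_estimate:+.4f} (true gap = {eps})")
```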
  • 78. Key point
● There are many problems for which the natural criterion is noisy and leads to rates of the form regret = O(1/n^α).
● And the comparison precision is O(1/n^β).
==> Noisy optimization, with or without gradients, or just with comparisons;
==> or many maximum likelihood / ERM / SRM rates.
OK, not always, but often :-) So let us consider this case.
  • 79-82. s(n), r(n) and lag(u)
There are many ways of writing a portfolio. In all cases, you roughly have these parameters:
● How frequently you compare: r(n) = #evals at the n-th comparison.
● How precisely you compare: s(n) = budget of the n-th comparison. Adaptive ? Maybe, but no "race" (or you might spend a huge amount of time).
● More surprisingly, whom you compare: old outputs (lag(u) << u, where u is the index of the compared outputs), because old outputs are cheaper to compare! Drawback: fair budget.
==> Lazy evaluation: do evaluations only when they are needed,
● for a comparison (cheap because lag(u) << u),
● or for the output (cheap because only a few algorithms are concerned – the optimal ones).
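A rough sketch of such a portfolio (this only illustrates the ingredients r(n), s(n) and lag(u); it is not the exact NOPA/INOPA pseudo-code, and the toy objective, the "solvers" and the schedules are all placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_eval(x, budget):
    """Average of `budget` noisy evaluations of E f(x) = ||x||^2 (to minimize)."""
    return float(np.sum(x ** 2) + rng.normal(size=budget).mean())

class ToySolver:
    """Placeholder noisy optimizer: random search at a given scale, keeping
    the recommendation made after each iteration (needed for the lag)."""
    def __init__(self, sigma):
        self.sigma, self.best_x, self.best_val = sigma, np.zeros(2), np.inf
        self.recommendations = [self.best_x]
    def step(self):
        x = rng.normal(scale=self.sigma, size=2)
        val = noisy_eval(x, 10)
        if val < self.best_val:
            self.best_x, self.best_val = x, val
        self.recommendations.append(self.best_x)
    def recommend(self, iteration):
        return self.recommendations[min(iteration, len(self.recommendations) - 1)]

solvers = [ToySolver(sigma) for sigma in (0.1, 1.0, 10.0)]

def r(n): return 2 ** n              # iterations granted between comparisons
def s(n): return 10 * n + 10         # evaluation budget of the n-th comparison
def lag(u): return max(1, u // 2)    # compare old (cheaper, stabler) outputs

iterations_done, best = 0, 0
for n in range(1, 8):
    for solver in solvers:           # fair budget for the optimization runs
        for _ in range(r(n)):
            solver.step()
    iterations_done += r(n)
    # Lazy comparison: only lagged recommendations are re-evaluated, s(n) times.
    scores = [noisy_eval(slv.recommend(lag(iterations_done)), s(n)) for slv in solvers]
    best = int(np.argmin(scores))

print("selected solver:", best, "recommendation:", solvers[best].recommend(10**9))
```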
  • 83. Almost readable theorem
If solver i has regret (C(i) + o(1)) / n^α(i), and the parameters r(n), s(n), lag(u) are suitably chosen, then:
● NOPA (noisy optimization portfolio algorithm) has the log(m) shift;
● and INOPA (improved NOPA) has the log(m') shift.
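In formulas, one way to read this statement (a paraphrase of the log-log picture on the next slides, with the constants of the actual NOPA/INOPA theorems omitted; i* denotes a solver with the best asymptotic rate):

```latex
% Assumption: solver i alone satisfies E[Regret_i(n)] = (C_i + o(1)) / n^{\alpha_i}.
\[
\textbf{NOPA:}\quad
\mathbb{E}\big[\mathrm{Regret}_{\mathrm{portfolio}}(n)\big]
\;\approx\; \mathbb{E}\big[\mathrm{Regret}_{i^*}(n/m)\big]
\;\Longrightarrow\; \text{a shift of } \log m \text{ along the } \log(\text{time}) \text{ axis,}
\]
\[
\textbf{INOPA:}\quad \text{the same with } m \text{ replaced by }
m' = \#\{\, i : \alpha_i = \alpha_{i^*} \,\}
\quad\text{(number of asymptotically optimal solvers).}
\]
```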
  • 84. Readable version: NOPA
There exist universal parameters such that:
● if the rates are f(xn) - f(x*) ~ 1/n^α,
● then NOPA just has a log(m) shift (m = #solvers).
[Plot: log(regret) vs log(time); the portfolio's curve is the best solver's curve shifted by log(m).]
  • 85-86. Readable version: INOPA
There exist universal parameters such that:
● if the rates are f(xn) - f(x*) ~ 1/n^α,
● then INOPA has a log(m') shift (m' = #optimal solvers).
==> ORTHOGONALITY ! ! ! The shift only depends on the number of (asymptotically) optimal solvers.
[Plot: log(regret) vs log(time); the portfolio's curve is the best solver's curve shifted by log(m').]
  • 87. Conclusions
● Portfolios = a revolution in combinatorial optimization:
– simple,
– compliant with parallelism,
– robust (even against bugs!),
– winning many competitions.
● In noisy optimization:
– it can be more tricky, because comparing is expensive;
– but great for robustness, and ML results are so variable!
  • 88. Power systems and uncertainties
● Bias correction: a different path to robustness; easy to apply, but expensive (parallelizable); at least, check your bias (cross-validation).
● Scenarios: Nash = more complicated, but natural, and it selects scenarios. But randomized recommendations :-)
● Noisy optimization: a new application for portfolio methods.
  • 91. Power systems / energy transition
● The public debate is very dirty.
● Claims are based on proprietary codes/studies (or no study at all...).
● It is so easy to cheat and get the numbers you want.
==> Our duty: prefer open source / open data.