
NIPS 2016. BayesOpt workshop invited talk.


Multiobjective Bayesian optimization, aka surrogate modeling, etc. What we did back in 2005/6, and what we are up to now, with encouragement to the audience to take part. Cheers!


  1. Multiobjective Bayesian Optimization. Joshua Knowles, j.knowles@cs.bham.ac.uk. University of Birmingham, UK; University of Manchester, UK (Honorary)
  2. Boo Brexit!
  3. Minsky: Do your PhD on a topic no one else is working on
  4. Minsky: Do your PhD on a topic no one else is working on. My topic: (Pareto) multiobjective optimization; not many had done much in 1997. By 2005/6, many people were working on stochastic search for multiobjective problems. So I looked at “Bayesian” approaches for scalar optimization and adapted them -> ParEGO. I also had a need...
  5. Motivation: automation of science experiments. Mass spectrometers optimized by ParEGO were used in the HUSERMET project, a large study of human blood serum in health and disease with over 800 patient subjects, performed in collaboration with GlaxoSmithKline, AstraZeneca, Stockport NHS Trust and others (see References).
  6. EVE - University of Manchester. King, Ross D., et al. "Functional genomic hypothesis generation and experimentation by a robot scientist." Nature 427.6971 (2004): 247-252.
  7. Further motivation. Not the best car on the grid any more, but when it was, it was down to aerodynamics optimized in a wind tunnel.
  8. Multiobjective optimization
  9. Darwin Updated: Pareto solutions in design space. Adapted species lie in low-dimensional manifolds in feature space! Visualization of such patterns aids designers and engineers (cf. Deb). Figures: from Shoval et al., Science 336, 2012.
  10. ParEGO (Knowles, 2005; 2006). • A simple adaptation of Jones et al.’s seminal* EGO method (1998) • Developed rapidly for real applications • One DACE model and scalarization • Several weaknesses • But nevertheless quite popular and used in applications. *Mockus and Zilinskas had had similar ideas considerably earlier than Jones et al., but EGO put it all together.
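Since the slide compresses the method, here is a minimal sketch of the scalarization step at the heart of ParEGO: at each iteration the objectives are collapsed to a single cost via the augmented Tchebycheff function under a freshly drawn weight vector, so that a single DACE/GP model can be fit and expected improvement maximized as in EGO. The function name and the Dirichlet draw are illustrative assumptions; ParEGO itself draws from a finite, evenly spaced set of weight vectors and uses rho = 0.05.

    import numpy as np

    def parego_scalarize(F, rho=0.05, rng=None):
        # F: (n_points, n_objectives) array of raw objective values
        # (minimization). Returns one scalar cost per point.
        rng = np.random.default_rng() if rng is None else rng
        _, k = F.shape
        # Normalize each objective to [0, 1] over the points seen so far.
        F = (F - F.min(axis=0)) / (np.ptp(F, axis=0) + 1e-12)
        # Illustrative: random simplex weights (ParEGO proper draws from
        # a finite, evenly spaced set of weight vectors).
        lam = rng.dirichlet(np.ones(k))
        # Augmented Tchebycheff: worst weighted objective plus a small
        # weighted sum, which penalizes weakly Pareto-optimal points.
        return np.max(lam * F, axis=1) + rho * np.sum(lam * F, axis=1)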
  11. Antenna optimization with ParEGO
  12. The State of the Art in MCDM. A swarm optimiser (say), which the DM interacts with and steers during the search. WHY? Where is the EVIDENCE?
  13. What’s new since 2006? • Handling of noisy samples (Hughes & Knowles, 2007) • Ephemeral resource constraints (Allmendinger & Knowles, 2010) • Decision-making during search (Hakanen & Knowles, 2017) • Machine decision makers (Lopez-Ibanez & Knowles, 2015) • Many-objective, robust optimization (Purshouse et al.; forthcoming) • Benchmarks for all the above (working group at the 2016 Lorentz Center workshop; forthcoming)
  14. Ephemeral resource constraints. In experimental work (c. 2008), we discovered a new kind of constraint that we call ephemeral resource constraints. Richard’s whole PhD was about handling these things, because no one else was doing this! (Minsky again.) Allmendinger, Richard, and Joshua Knowles. "On handling ephemeral resource constraints in evolutionary search." Evolutionary Computation 21.3 (2013): 497-531. Allmendinger, Richard, and Joshua Knowles. "Ephemeral resource constraints in optimization and their effects on evolutionary search." Technical Report MLO-20042010, University of Manchester, 2010. Allmendinger, Richard, and Joshua Knowles. "On-line purchasing strategies for an evolutionary algorithm performing resource-constrained optimization." International Conference on Parallel Problem Solving from Nature. Springer, 2010. Allmendinger, Richard, and Joshua Knowles. "Policy learning in resource-constrained optimization." Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation (GECCO). ACM, 2011.
  15. Overview: Ephemeral-Resource-Constrained Optimization Problem (ERCOP) [diagram]
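To make the setting concrete, here is a minimal sketch of what an ephemeral resource constraint does to evaluation; the interface is hypothetical, not the authors' code. A candidate can only be evaluated if the resources it needs are in stock at the current time step, so the optimizer sometimes spends a time step and gets nothing back, and must plan around that.

    def evaluate_with_erc(x, objective, in_stock, t):
        # Ephemeral resource constraint: availability varies with time t,
        # so the same candidate x may be evaluable now but not later.
        if in_stock(x, t):
            return objective(x)  # normal (expensive) evaluation
        return None              # non-evaluable at this time step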
  16. 40 Years Earlier... Schwefel optimized jet nozzles experimentally (1970). Conic rings were not always available in the sizes demanded by the evolution strategy. Low-tech solution: order rings and wait ‘idly’ until arrival.
  17. Ephemeral resource constraints. We have not been Bayesian about this at all so far. We tried some reinforcement learning approaches (tedious to train, but we found good generalization) and some other heuristics! We think this could be a rich vein, however.
  18. Benchmarking Requirements. Tests for multiobjective surrogate-assisted methods and Bayesian optimization. NB: The following slides are edits of slides originally written jointly by Tea Tusar, Ilya Loshchilov, Boris Naujoks, Daniel Horn, Dimo Brockhoff and Joshua Knowles, as part of a seminar presentation at the Lorentz Center, Leiden, NL, in March 2016.
  19. Compared to what? When do we expect Bayesian optimization methods to be uncompetitive? How do we select the right methods to benchmark against? There have been some nice collaborative benchmarking initiatives in recent years; one of the best known is BBOB, the Black-Box Optimization Benchmarking framework.
  20. Benchmarking purpose. Benchmarking = Functions + Settings + Performance measures + Implementation issues. Q: How can we extend current benchmarks to be useful for surrogate-assisted and MO development? A: Focus on “settings” for the first time.
  21. Benchmarking framework (BBOB). 24 continuous functions in 5 categories (separable, moderate, ill-conditioned, and multimodal with/without global structure), 15 instances per function. Next BBOB: bi-objective, with 55 functions, 5 instances per function, mixing the classes described above. Anytime performance (from 1 to millions of function evaluations), measured with hypervolume.
  22. Anytime, multi-objective hypervolume [figure]
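Since hypervolume is the anytime performance measure here, a minimal sketch of the bi-objective case may help: the hypervolume is the area dominated by the current front and bounded by a reference point, so it rewards both convergence and spread and can be recomputed after every evaluation. This sweep-based version is illustrative only, not the benchmark's own implementation.

    import numpy as np

    def hypervolume_2d(points, ref):
        # Area dominated by a 2-objective minimization front, bounded
        # above by the reference point ref = (r1, r2).
        pts = np.asarray([p for p in points
                          if p[0] < ref[0] and p[1] < ref[1]])
        if len(pts) == 0:
            return 0.0
        pts = pts[np.argsort(pts[:, 0])]  # sweep in increasing f1
        hv, prev_f2 = 0.0, ref[1]
        for f1, f2 in pts:
            if f2 < prev_f2:  # point is nondominated in the sweep
                hv += (ref[0] - f1) * (prev_f2 - f2)
                prev_f2 = f2
        return hv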
  23. Proposed New Settings. Temporal aspects; real-world benchmarks; starting from and improving existing solutions; Pareto front prediction (without solutions in decision space); mixed-integer variables; noise on objective values? (not new to BBOB); constraints; and reporting runtime (wall clock).
  24. Temporal Aspects. Parallel evaluation (aka batch) at different fixed budgets; heterogeneous evaluation times (per objective). Optimizers that may be used: • Large batch size: DoE designs, Latin hypercube, space-filling, random search; these are non-adaptive • Flexible batch size: EAs, multipoint surrogates • Sequential algorithms: EGO, Bayesian optimization
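A toy illustration of the temporal accounting, under the assumptions of unit evaluation time and fully parallel batches: with batch size b, a budget of N evaluations costs only ceil(N/b) time steps, which is why non-adaptive large-batch designs, flexible-batch EAs, and strictly sequential EGO-style methods should be compared at fixed wall-clock budgets as well as at fixed evaluation counts.

    import math

    def wall_clock_steps(n_evals, batch_size, eval_time=1.0):
        # Time to perform n_evals evaluations when up to batch_size
        # of them can run in parallel, each taking eval_time.
        return math.ceil(n_evals / batch_size) * eval_time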
  25. Improving Existing Solutions. Motivation: practitioners often start from existing solutions, provided by an extrinsic source, whereas in EMO we often start from scratch. Implementation: we provide some initial sub-optimal solutions. Research questions: How much do methods differ in their ability to improve solutions quickly? How does this differ with the type of solutions provided, e.g. local optima vs. well-spread solutions?
  26. Pareto front prediction. Motivation: finding bounds is a classical optimization goal; in MCDM, the decision maker is interested in the potential for improvement, which can be used for interactively steering the search; and it can provide stopping criteria (particularly important in expensive settings). Implementation: the optimizer must provide a prediction of the Pareto front as a fixed number of points (at any time). Inspired by prediction of PFs by Mickael Binois.
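One way to read the implementation requirement is that, at any time, the optimizer must report a fixed number of objective-space points as its current prediction of the Pareto front. The sketch below simply filters and subsamples the evaluated archive; this is an illustrative baseline of my own, and real predictors (as in Binois's work) can extrapolate beyond the evaluated points.

    import numpy as np

    def predict_front(archive, k):
        # archive: (n, m) evaluated objective vectors (minimization),
        # assumed non-empty. Returns exactly k points as the prediction.
        A = np.asarray(archive)
        nondom = [p for p in A
                  if not any(np.all(q <= p) and np.any(q < p) for q in A)]
        nondom = np.array(sorted(nondom, key=lambda p: p[0]))
        # k evenly spaced indices along the sorted front (repeats pad
        # the prediction when fewer than k nondominated points exist).
        idx = np.linspace(0, len(nondom) - 1, k).round().astype(int)
        return nondom[idx]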
  27. Conclusions. Claim: benchmarking frameworks such as BBOB stimulate large-scale comparison studies that improve the understanding and development of methods. We have identified settings that we believe will usefully extend MO benchmarking for developers and practitioners of Bayesian optimization (expensive MO optimization). LOOK OUT for our forthcoming EMO paper ;-)
  28. Thanks. Thanks for your attention! Thanks very much to the organizers, and to those who moved their talks for me. Thanks to a long list of collaborators and forerunners, who can be found on my webpages and, of course, cited in papers. http://www.cs.bham.ac.uk/~jdk
