NIPS 2016 BayesOpt Workshop, invited talk
TRANSCRIPT
Multiobjective Bayesian Optimization
Joshua Knowles ([email protected])
University of Birmingham, UK
University of Manchester, UK (Honorary)
Boo Brexit!
Minsky: Do your PhD on a topic no one else is working on
My topic: (Pareto) multiobjective optimization; not many people had done much on it in 1997
By 2005/6: Many people were working on stochastic search for multiobjective problems. So, I looked at “Bayesian” approaches for scalar optimization and adapted them -> ParEGO. I also had a need...
Motivation: automation of science experiments
Mass spectrometers optimized by ParEGO were used in the HUSERMET project, a large study of human blood serum in health and disease with over 800 patient subjects, performed in collaboration with GlaxoSmithKline, AstraZeneca, Stockport NHS Trust and others (see References)
EVE, University of Manchester
King, Ross D., et al. "Functional genomic hypothesis generation and experimentation by a robot scientist." Nature 427.6971 (2004): 247-252.
Further motivation
Not the best car on the grid any more. But when it was, it was down to aerodynamics optimized in a wind-tunnel.
Multiobjective optimization
Darwin Updated: Pareto solutions in design space
Adapted species lie in low-dimensional manifolds in feature space!!
Visualization of such patterns aids designers and engineers (cf. Deb)
Figures: from Shoval et al., Science 336, 2012
ParEGO (Knowles, 2005; 2006)
• A simple adaptation of Jones et al.'s seminal* EGO method (1998)
• Developed rapidly for real applications
• One DACE model and scalarization (the scalarization is sketched below)
• Several weaknesses
• But nevertheless quite popular and used in applications
*Mockus and Zilinskas had had similar ideas considerably earlier than Jones et al., but EGO put it all together
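As a concrete illustration of the "one DACE model and scalarization" idea, here is a minimal Python sketch of ParEGO's augmented Tchebycheff scalarization (rho = 0.05 follows Knowles, 2006). The Dirichlet draw is a simple stand-in for ParEGO's fixed discrete weight set; the surrogate fitting and expected-improvement search are not shown.

```python
# A minimal sketch of ParEGO's scalarization step (Knowles, 2006).
# Objectives are assumed already normalized to [0, 1]; rho = 0.05
# follows the paper. GP fitting and EI maximization are not shown.
import numpy as np

def augmented_tchebycheff(F, lam, rho=0.05):
    """Scalarize an (n_points, n_obj) array of objective vectors F
    using weight vector lam (non-negative, summing to 1)."""
    weighted = F * lam                          # elementwise lambda_j * f_j(x)
    return weighted.max(axis=1) + rho * weighted.sum(axis=1)

def random_weight(n_obj, rng):
    """Uniform draw from the simplex; a stand-in for ParEGO's
    fixed discrete set of weight vectors."""
    return rng.dirichlet(np.ones(n_obj))

# One ParEGO iteration: draw weights, scalarize all observations,
# then fit a single DACE/GP model to y and maximize EI (omitted).
rng = np.random.default_rng(0)
F = rng.random((20, 2))                         # pretend observed objective values
lam = random_weight(2, rng)
y = augmented_tchebycheff(F, lam)               # targets for the single surrogate
```

Because each iteration draws a fresh weight vector, the single scalar model is steered toward different trade-offs over time, approximating the whole Pareto front with one EGO-style loop.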
Antenna optimization with ParEGO
The State of the Art in MCDM
Swarm optimiser (say)
DM interacts with and steers search. WHY? EVIDENCE????
What’s new since 2006?
• Handling of noisy samples (Hughes & Knowles, 2007)
• Ephemeral resource constraints (Allmendinger & Knowles, 2010)
• Decision-making during search (Hakanen & Knowles, 2017)
• Machine decision makers (Lopez-Ibanez & Knowles, 2015)
• Many-objective, robust optimization (Purshouse et al; forthcoming)
• Benchmarks for all the above (Working group at 2016 Lorentz centre workshop; forthcoming)
Ephemeral resource constraints
In experimental work (c. 2008), we discovered a new kind of constraint that we call:
ephemeral resource constraints
Richard’s whole PhD was about handling these things, because no one else was doing this! (Minsky again)

Allmendinger, Richard, and Joshua Knowles. "On handling ephemeral resource constraints in evolutionary search." Evolutionary Computation 21.3 (2013): 497-531.
Allmendinger, Richard, and Joshua Knowles. Ephemeral resource constraints in optimization and their effects on evolutionary search. Technical Report MLO-20042010, University of Manchester, 2010.
Allmendinger, Richard, and Joshua Knowles. "On-line purchasing strategies for an evolutionary algorithm performing resource-constrained optimization." International Conference on Parallel Problem Solving from Nature. Springer Berlin Heidelberg, 2010.
Allmendinger, Richard, and Joshua Knowles. "Policy learning in resource-constrained optimization." Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation. ACM, 2011.
Overview
[Figure: schematic of an Ephemeral-Resource-Constrained Optimization Problem (ERCOP)]
40 Years Earlier...
Schwefel optimized jet nozzles experimentally (1970). Conic rings were not always available in the size demanded by the Evolution Strategy.
Low-tech solution: order rings and wait ‘idly’ until arrival
Ephemeral resource constraints
We have not been Bayesian about this at all so far. We tried some reinforcement learning approaches (tedious to train, but we found good generalization), and some other heuristics!
We think this could be a rich vein, however.
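To make the setting concrete, here is a toy sketch of an optimization loop under an ephemeral resource constraint. The availability schedule and resource rule are entirely hypothetical illustrations, not taken from our papers; Schwefel's "wait idly" policy and the skip policy shown are two of the simplest options.

```python
# A toy sketch of search under an ephemeral resource constraint (ERC):
# proposals that need a currently unavailable resource cannot be
# evaluated at this time step. Availability and resource rules here
# are made up purely for illustration.
import numpy as np

def resource_available(t):
    """Toy ERC: the resource is available two time steps in every three."""
    return t % 3 != 0

def uses_resource(x):
    """Toy rule: solutions in the upper half of the space need the resource."""
    return x[1] > 0.5

rng = np.random.default_rng(3)
evaluated = []
for t in range(12):
    x = rng.random(2)                    # stand-in for the optimizer's proposal
    if uses_resource(x) and not resource_available(t):
        continue                         # "skip" policy; could also wait or repair
    evaluated.append((t, x))             # evaluate f(x) here in a real loop
```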
Benchmarking Requirements
Tests for multiobjective surrogate-assisted methods and Bayesian optimization
NB: The following slides are edits of slides jointly written originally by Tea Tusar, Ilya Loschilov, Boris Naujoks, Daniel Horn, Dimo Brockhoff and Joshua Knowles, as part of a seminar presentation at the Lorentz Center, Leiden, NL, in March 2016
Compared to what?
When do we expect Bayesian optimization methods to be uncompetitive?
How do we select the right method to benchmark against?
There have been some nice collaborative benchmarking initiatives in recent years. One of the best known is BBOB, the Black-Box Optimization Benchmarking framework.
Benchmarking purpose
Benchmarking = Functions + Settings + Performance measures + Implementation issues
Q: How can we extend current benchmarks to be useful for surrogate-assisted and MO development? A: Focus on “settings” for the first time
Benchmarking framework (BBOB)
24 continuous functions in 5 different categories, 15 instances per function
Separable, moderate, ill-conditioned, multimodal (with / without global structure)
Next BBOB: bi-objective
55 functions, 5 instances per function
Mixture of classes described above
Anytime performance (from 1 to millions of f.e.), measured with hypervolume (a 2-D sketch follows below)
[Figure: anytime, multi-objective performance; hypervolume plotted against function evaluations]
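For reference, here is a minimal sketch (my own, not the official BBOB code) of the 2-D hypervolume indicator used to measure anytime performance: the area dominated by a point set, bounded by a reference point, assuming minimization.

```python
# A minimal 2-D hypervolume sketch (minimization): the area of
# objective space dominated by a point set, bounded by a reference
# point that every counted point must dominate.
import numpy as np

def hypervolume_2d(points, ref):
    """points: (n, 2) objective vectors; ref: reference point.
    Returns the dominated area via a left-to-right sweep."""
    P = np.asarray(points, dtype=float)
    P = P[(P < ref).all(axis=1)]         # keep points inside the reference box
    P = P[np.argsort(P[:, 0])]           # sort by first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in P:
        if f2 < prev_f2:                 # non-dominated point adds a slab
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

print(hypervolume_2d([[0.2, 0.8], [0.5, 0.4], [0.8, 0.1]], ref=(1.0, 1.0)))  # 0.42
```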
Proposed New Settings
Temporal aspects
On real-world benchmarks
Starting from and improving existing solutions
Pareto front prediction (without solutions in decision space)
Mixed-integer
Noise on objective values? (Not new to BBOB)
Constraints.
Report on runtime (wall clock)
Temporal Aspects
Parallel evaluation (aka batch)
At different fixed budgets
Heterogeneous evaluation time (per objective)
Optimizers that may be used
• Large batch size: DoE designs, Latin hypercube, space-filling, random search. These are non-adaptive (see the sketch after this list)
• Flexible batch size: EAs, multipoint surrogates
• Sequential algorithms: EGO, Bayesian optimization
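As a sketch of the first, non-adaptive category: a Latin hypercube design generates its whole batch up front, with no dependence on earlier evaluations. A minimal implementation, assuming a unit-cube search space:

```python
# A minimal non-adaptive, large-batch design: Latin hypercube sampling
# places one random point in each axis-aligned stratum per dimension.
import numpy as np

def latin_hypercube(n_points, n_dims, rng):
    """Return n_points samples in [0, 1)^n_dims, stratified per axis."""
    u = rng.random((n_points, n_dims))   # jitter within each cell
    perms = np.column_stack([rng.permutation(n_points) for _ in range(n_dims)])
    return (perms + u) / n_points

rng = np.random.default_rng(1)
X = latin_hypercube(16, 3, rng)          # a batch of 16 points in [0, 1)^3
# A sequential method (EGO / Bayesian optimization) would instead pick
# each point only after seeing the objective values of all previous ones.
```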
Improving Existing Solutions
Motivation: Practitioners often start from existing solutions, provided by an extrinsic source, whereas in EMO we often start from scratch
Implementation: We provide some initial sub-optimal solutions (a sketch of the wiring follows below)
Research questions:
How much do methods differ in their ability to improve solutions quickly?
How does this differ with the type of solution provided, e.g. local optima, well-spread solutions?
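A minimal sketch of how the warm-start setting might be wired up (the function name and interface are hypothetical, not part of any benchmark): externally provided, already-evaluated solutions replace part of the usual space-filling initial design.

```python
# A hypothetical warm-start helper: pad externally provided solutions
# with random space-filling points to form the initial design.
import numpy as np

def warm_start_design(provided_X, n_total, n_dims, rng):
    """Combine provided solutions with random points so the initial
    design has n_total points in [0, 1)^n_dims."""
    n_extra = max(0, n_total - len(provided_X))
    X_extra = rng.random((n_extra, n_dims))   # these still need evaluating
    return np.vstack([provided_X, X_extra])

rng = np.random.default_rng(2)
given_X = rng.random((5, 2))                  # e.g. known local optima
X_init = warm_start_design(given_X, 10, 2, rng)
```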
Pareto front prediction
Motivation:
Finding bounds is a classical optimization goal
In MCDM, the decision maker is interested in the potential for improvement. This can be used for interactive steering
It can provide stopping criteria (particularly important in expensive settings)
Implementation:
Optimizer must provide a prediction of the Pareto front: a fixed number of points (at any time); see the sketch below
Inspired by prediction of PFs by Mickael Binois
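A minimal sketch of one way an optimizer could meet this requirement (my own construction; a method in the spirit of Binois's work would use the surrogate posterior more carefully): filter model-predicted objective vectors to the non-dominated set, then subsample to the fixed size k.

```python
# A minimal sketch of reporting a fixed-size predicted Pareto front:
# filter predicted objective vectors to the non-dominated set and
# subsample. F_pred stands in for GP posterior means over candidates.
import numpy as np

def nondominated(F):
    """Boolean mask of the non-dominated rows of F (minimization)."""
    F = np.asarray(F)
    keep = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        for j in range(len(F)):
            if i != j and (F[j] <= F[i]).all() and (F[j] < F[i]).any():
                keep[i] = False
                break
    return keep

def predicted_front(F_pred, k, rng):
    """Return up to k points approximating the predicted Pareto front."""
    front = F_pred[nondominated(F_pred)]
    if len(front) <= k:
        return front
    return front[rng.choice(len(front), size=k, replace=False)]

rng = np.random.default_rng(4)
F_pred = rng.random((200, 2))            # stand-in for model predictions
front = predicted_front(F_pred, 20, rng)
```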
Conclusions
Claim: Benchmarking frameworks such as BBOB stimulate large-scale comparison studies that improve understanding and development of methods
We have identified settings we believe will extend MO benchmarking usefully for Bayesian optimization (expensive MO optimization) developers and practitioners
LOOK OUT for our forthcoming EMO paper ;-)
Thanks
Thanks for your attention!
Thanks very much to the organizers, and those who moved their talks for me
Thanks to a long list of collaborators and forerunners, who can be found on my webpages, and of course cited in papers
http://www.cs.bham.ac.uk/~jdk