NIPS 2016 BayesOpt Workshop Invited Talk

Page 1:

Multiobjective Bayesian Optimization

Joshua Knowles
University of Birmingham, UK
University of Manchester, UK (Honorary)

Page 2:

Boo Brexit!

Page 3:

Minsky: Do your PhD on a topic no one else is working on

Page 4:

Minsky: Do your PhD on a topic no one else is working on

My topic: (Pareto) multiobjective optimization; few people had done much on it by 1997

By 2005/6: Many people were working on stochastic search for multiobjective problems. So, I looked at “Bayesian” approaches for scalar optimization and adapted them -> ParEGO. I also had a need...

Page 5:

Motivation: automation of science experiments

Mass spectrometers optimized by ParEGO were used in the HUSERMET project, a large study of human blood serum in health and disease with over 800 patient subjects, performed in collaboration with GlaxoSmithKline, AstraZeneca, Stockport NHS Trust and others (see References)

Page 6:

EVE - University of Manchester

King, Ross D., et al. "Functional genomic hypothesis generation and experimentation by a robot scientist." Nature 427.6971 (2004): 247-252.

Page 7:

Further motivation

Not the best car on the grid any more. But when it was, it was down to aerodynamics optimized in a wind-tunnel.

Page 8:

Multiobjective optimization

Page 9:

Darwin Updated: Pareto solutions in design space

Adapted species lie in low-dimensional manifolds in feature space!!

Visualization of such patterns aids designers and engineers (cf. Deb)

Figures: from Shoval et al., Science 336, 2012

Page 10:

ParEGO (Knowles, 2005; 2006)

• A simple adaptation of Jones et al.'s seminal* EGO method (1998)
• Developed rapidly for real applications
• One DACE model and scalarization
• Several weaknesses
• But nevertheless quite popular and used in applications

*Mockus and Zilinskas had had similar ideas considerably earlier than Jones et al., but EGO put it all together
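To make the "one DACE model and scalarization" step concrete, here is a minimal sketch of a single ParEGO-style iteration. It assumes objectives already normalized to [0, 1] and selects from a fixed candidate set (ParEGO itself maximizes expected improvement with an internal evolutionary search, and uses a DACE model rather than scikit-learn's GP), so treat it as an illustration, not the reference implementation.

```python
# Sketch of one ParEGO-style iteration (illustrative; not the original code).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

RHO = 0.05  # small positive constant in the augmented Tchebycheff function

def tchebycheff(Y, w, rho=RHO):
    """Augmented Tchebycheff scalarization of normalized objectives Y (n x k)."""
    wY = w * Y
    return wY.max(axis=1) + rho * wY.sum(axis=1)

def expected_improvement(mu, sigma, best):
    """EI for minimization, from the GP posterior mean/std and incumbent best."""
    sigma = np.maximum(sigma, 1e-12)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def parego_step(X, Y, candidates, rng):
    """Random weights -> scalarize -> fit one GP -> pick the EI maximizer."""
    k = Y.shape[1]
    # ParEGO draws weights from a discretized simplex; Dirichlet(1), which is
    # uniform on the simplex, is a simple stand-in here.
    w = rng.dirichlet(np.ones(k))
    s = tchebycheff(Y, w)  # one scalar cost per evaluated point
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, s)
    mu, sigma = gp.predict(candidates, return_std=True)
    return candidates[np.argmax(expected_improvement(mu, sigma, s.min()))]
```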


Page 12:

Antenna optimization with ParEGO

Page 13:

The State of the Art in MCDM

Swarm Optimiser (say)

DM interacts with and steers search. WHY? EVIDENCE????

Page 14:

What's new since 2006?

• Handling of noisy samples (Hughes & Knowles, 2007)

• Ephemeral resource constraints (Allmendinger & Knowles, 2010)

• Decision-making during search (Hakanen & Knowles, 2017)

• Machine decision makers (Lopez-Ibanez & Knowles, 2015)

• Many-objective, robust optimization (Purshouse et al; forthcoming)

• Benchmarks for all the above (Working group at 2016 Lorentz Center workshop; forthcoming)


Page 16:

Ephemeral resource constraints

In experimental work (c. 2008), we discovered a new kind of constraint that we call:

ephemeral resource constraints

Richard's whole PhD was about handling these things, because no one else was doing this! (Minsky again)

Allmendinger, Richard, and Joshua Knowles. "On handling ephemeral resource constraints in evolutionary search." Evolutionary Computation 21.3 (2013): 497-531.

Allmendinger, Richard, and Joshua Knowles. "Ephemeral resource constraints in optimization and their effects on evolutionary search." Technical Report MLO-20042010, University of Manchester, 2010.

Allmendinger, Richard, and Joshua Knowles. "On-line purchasing strategies for an evolutionary algorithm performing resource-constrained optimization." International Conference on Parallel Problem Solving from Nature. Springer Berlin Heidelberg, 2010.

Allmendinger, Richard, and Joshua Knowles. "Policy learning in resource-constrained optimization." Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation. ACM, 2011.

Page 17:

Overview

[Figure: diagram defining the Ephemeral-Resource-Constrained Optimization Problem (ERCOP): a standard optimization problem plus ephemeral resource constraints]

Page 18:

40 Years Earlier...

Conic rings were not always available in the size demanded by the Evolution Strategy. Low-tech solution: order rings and wait 'idly' until arrival.

Schwefel optimized jet nozzles experimentally (1970)

Page 19:

Ephemeral resource constraints

We have not been Bayesian about this at all so far. We tried some reinforcement learning approaches (tedious to train, but we found good generalization), and some other heuristics!

We think this could be a rich vein, however.
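To make the setting concrete, here is a toy simulation, entirely my own illustration and not the method from the papers above: candidates that commit to a temporarily unavailable resource simply cannot be evaluated at that time step, mimicking Schwefel's 'wait idly' policy.

```python
# Toy ephemeral-resource-constrained loop (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def resource_available(t):
    """Toy ephemeral resource: only available in the second half of each
    10-step 'delivery cycle' (a stand-in for, e.g., waiting on parts)."""
    return t % 10 >= 5

def needs_resource(x):
    """Toy commitment: candidates in this half of the space need the resource."""
    return x[0] > 0.5

def f(x):
    return float(np.sum((x - 0.3) ** 2))  # toy objective (minimize)

best_x, best_y = None, np.inf
for t in range(100):
    x = rng.random(2)  # stand-in for a real proposal mechanism
    if needs_resource(x) and not resource_available(t):
        continue  # evaluation blocked at this time step: 'wait idly'
    y = f(x)
    if y < best_y:
        best_x, best_y = x, y
```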

Page 20:

Benchmarking Requirements

Tests for multiobjective surrogate-assisted methods and Bayesian optimization

NB: The following slides are edited versions of slides originally written jointly by Tea Tusar, Ilya Loshchilov, Boris Naujoks, Daniel Horn, Dimo Brockhoff and Joshua Knowles, as part of a seminar presentation at the Lorentz Center, Leiden, NL, in March 2016

Page 21:

Compared to what?

When do we expect Bayesian optimization methods to be uncompetitive?

How do we select the right method to benchmark against?

There have been some nice collaborative benchmarking initiatives in recent years. One of the well-known ones is BBOB – the Black-Box Optimization Benchmarking framework.

Page 22:

Benchmarking purpose

Benchmarking = Functions + Settings + Performance measures + Implementation issues

Q: How can we extend current benchmarks to be useful for surrogate-assisted and MO development?
A: Focus on "settings" for the first time.

Page 23:

Benchmarking framework (BBOB)

24 continuous functions in 5 different categories, 15 instances per function

Separable, moderate, ill-conditioned, multimodal (with / without global structure)

Next BBOB: bi-objective
55 functions, 5 instances per function
Mixture of the classes described above
Anytime performance (from 1 to millions of function evaluations), measured with hypervolume
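As an illustration of the hypervolume measure, here is a minimal sort-and-sweep computation for the bi-objective (minimization) case with a user-chosen reference point; this is my own sketch, not the BBOB/COCO implementation.

```python
import numpy as np

def hypervolume_2d(points, ref):
    """Area dominated by a set of bi-objective points (minimization),
    bounded by the reference point `ref`. Simple sort-and-sweep."""
    pts = np.asarray(points, dtype=float)
    pts = pts[np.all(pts < ref, axis=1)]  # keep points that dominate ref
    if len(pts) == 0:
        return 0.0
    pts = pts[np.argsort(pts[:, 0])]      # sweep in increasing f1
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                  # nondominated so far in the sweep
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

# Example: the front {(1,3), (2,2), (3,1)} with reference point (4,4)
print(hypervolume_2d([(1, 3), (2, 2), (3, 1)], ref=(4, 4)))  # -> 6.0
```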

Page 24:

Anytime, multi-objective

[Figure: anytime performance plot; y-axis: hypervolume]

Page 25:

Proposed New Settings

Temporal aspects

On real-world benchmarks

Starting from and improving existing solutions

Pareto front prediction (without solutions in decision space)

Mixed-integer

Noise on objective values? (Not new to BBOB)

Constraints.

Report on runtime (wall clock)

Page 26:

Temporal Aspects

Parallel evaluation (aka batch)

At different fixed budgets

Heterogeneous evaluation time (per objective)

Optimizers that may be used:
• Large batch size: DoE designs, Latin hypercube, space-filling, random search. These are non-adaptive (a sketch follows this list)
• Flexible batch size: EAs, multipoint surrogates
• Sequential algorithms: EGO, Bayesian optimization
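As a sketch of the first, non-adaptive category, here is a minimal Latin hypercube design: all n points are chosen up front, so the whole batch can be evaluated in parallel. This is my own illustration, with hypothetical function names.

```python
import numpy as np

def latin_hypercube(n, d, rng=None):
    """Non-adaptive batch design: n points in [0, 1]^d with exactly one
    point in each of the n equal-width bins along every dimension."""
    if rng is None:
        rng = np.random.default_rng()
    # An independent shuffle of the bin indices per dimension, plus a
    # uniform random offset within each bin.
    bins = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    return (bins + rng.random((n, d))) / n

# A batch of 8 design points in 3 dimensions, evaluable fully in parallel.
X = latin_hypercube(8, 3)
```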

Page 27:

Improving Existing Solutions

Motivation. Practitioners often start from existing solutions, provided by an extrinsic source, whereas in EMO we often start from scratch.

Implementation. We provide some initial sub-optimal solutions.

Research questions:
How much do methods differ in their ability to improve solutions quickly?
How does this differ with the type of solutions provided, e.g. local optima, well-spread solutions?

Page 28:

Pareto front prediction

Motivation:
Finding bounds is a classical optimization goal
In MCDM, the decision maker is interested in the potential for improvement; this can be used to steer the search interactively
It can provide stopping criteria (particularly important in expensive settings)

Implementation:
The optimizer must provide a prediction of the Pareto front – a fixed number of points (at any time); a minimal baseline is sketched below

Inspired by prediction of Pareto fronts by Mickael Binois
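One minimal baseline for such a fixed-size prediction, my own illustration rather than Binois's GP-based method: report the nondominated subset of the points seen so far, subsampled to the required size.

```python
import numpy as np

def nondominated(Y):
    """Pareto-nondominated rows of an (n x k) objective array (minimization)."""
    Y = np.asarray(Y, dtype=float)
    keep = np.ones(len(Y), dtype=bool)
    for i, y in enumerate(Y):
        if keep[i]:
            # Remove points weakly worse everywhere and strictly worse somewhere.
            keep &= ~(np.all(Y >= y, axis=1) & np.any(Y > y, axis=1))
    return Y[keep]

def predict_front(Y, n_points, rng=None):
    """Fixed-size front 'prediction': subsample the current nondominated set."""
    if rng is None:
        rng = np.random.default_rng()
    front = nondominated(Y)
    idx = rng.choice(len(front), size=min(n_points, len(front)), replace=False)
    return front[np.sort(idx)]
```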

Page 29:

Conclusions

Claim: Benchmarking frameworks such as BBOB stimulate large-scale comparison studies that improve understanding and development of methods

We have identified settings we believe will extend MO benchmarking usefully for Bayesian optimization (expensive MO optimization) developers and practitioners

LOOK OUT for our forthcoming EMO paper ;-(

Page 30:

Thanks

Thanks for your attention!

Thanks very much to the organizers, and those who moved their talks for me

Thanks to a long list of collaborators and forerunners, who can be found on my webpages, and of course cited in papers

http://www.cs.bham.ac.uk/~jdk