Bayes Comp at MCMSki V: Lenzerheide, January 4-7, 2016



This booklet contains the abstracts of the 6th IMS-ISBA joint meeting, Bayes Comp at MCMSki V, held in Lenzerheide, January 4-7, 2016.

The booklet is organized as follows:
• Plenary speakers
• Tutorials
• Invited sessions
• Contributed sessions
• Breaking News!
• Posters (in alphabetical order)

In the case of multiple authorships, the speakers are highlighted in boldface. We hope you will enjoy the scientific as well as the entertainment parts of the program.

The Organizing Committee
Brad Carlin, Antonietta Mira, and Christian Robert (chairs)
Cecilia Aquila, Federica Bianchi, Alberto Caimo, Chiara Legnazzi, Merrill Liechty, and Filippo Macaluso

Free WIFI:
Network: FreeHotelSchweizerhof
Username: 2585660918
Password: 6421

In the conference room there is a second WIFI (no username/password required):
Network: Zyxel


PROGRAM

Monday, Jan. 4
14:00 - 17:00  Registration - Schweizerhof lobby
17:00 - 18:00  Plenary talk - Plenum: Steve Scott, "Cloudy with a Chance of Bayes"
18:00 - 19:00  Welcoming reception at the Aufenthaltsraum, offered by CSCS
19:00 - 20:00  Round table discussion - Plenum: Thomas Schulthess, "Data Science in the Next 50 Years". Panelists: Michael Jordan, D. Draper, H. Künsch, A. Lee, C. Robert, G. Roberts, A. Sepe, M. Troyer
20:00 - 20:30  Outdoor activities presentation
20:45 - 22:00  Guided hike in the woods

Tuesday, Jan. 5
08:00 - 09:00  Registration - Schweizerhof lobby
08:30 - 09:15  Plenary talk - Plenum: Michael Jordan, "On the Computational Complexity of High-dimensional Bayesian Variable Selection"
09:15 - 10:45  Sessions in parallel (3 speakers each):
               Invited session n. 4 - Plenum: "Bayesian Molecular Biology" (A. Frigessi and C. Di Serio)
               Contributed session n. 8 - Activityraum: "Modeling and Computing with Latent Feature Models and Repulsive Point Processes" (Peter Mueller)
10:45 - 11:00  Coffee break
11:00 - 13:00  Tutorial - Plenum: Mike Betancourt, "Stan"
13:00 - 15:30  Lunch break
15:30 - 16:15  Plenary talk - Plenum: Tony Lelièvre, "Computational Challenges in Molecular Dynamics"
16:15 - 17:45  Sessions in parallel (4 speakers each):
               Invited session n. 1 - Plenum: "Hamiltonian Monte Carlo" (Michael Betancourt)
               Contributed session n. 4 - Activityraum: "Bayesian Computation for Spatiotemporal Models" (Galin Jones)
17:45 - 18:15  Coffee break
18:15 - 19:45  Session and Breaking News in parallel (6 and 4 speakers):
               Breaking News! - Plenum: 6 Breaking News talks (L. Bornn, J. Cockayne, G. Fort, M. Gutmann, J.M. Marin, A. Norets)
               Contributed session n. 5 - Activityraum: "Recent Developments in Markov Chain Monte Carlo Methodology" (James Flegal)
21:00 - 23:30  Posters - Aufenthaltsraum: from Aquila to Norets


Wednesday, Jan. 6
08:00 - 09:00  Registration - Schweizerhof lobby
08:30 - 09:15  Plenary talk - Plenum: David Dunson, "Is MCMC Dead?"
09:15 - 10:45  Sessions in parallel (3 speakers each):
               Invited session n. 6 - Plenum: "High-Dimensional MCMC" (Gareth Roberts)
               Contributed session n. 10 - Activityraum: "Model Selection and Advanced Scientific Computation" (Donatello Telesca)
10:45 - 11:00  Coffee break
11:00 - 15:30  Tweedie Ski Cup - Dietshen ski run (15 min walk from hotel). Starting time: noon. At the end of the ski race: mulled wine and a taste of local food
15:30 - 17:00  Sessions in parallel (3 speakers each):
               Invited session n. 5 - Plenum: "Algorithms for Intractable Problems" (N. Friel, K. Mengersen)
               Invited session n. 7 - Activityraum: "Uncertainty Quantification in Mathematical Models" (Patrick R. Conrad)
17:00 - 17:30  Coffee break
17:30 - 19:00  Sessions in parallel (4 speakers each):
               Invited session n. 3 - Plenum: "Bayesian Nonparametrics" (T. Broderick, I. Pruenster)
               Contributed session n. 1 - Activityraum: "Exact Techniques in Monte Carlo Sampling and Inference" (Krys Latuszynski)
19:00 - 19:45  Breaking News sessions in parallel:
               Breaking News! - Plenum: 3 Breaking News talks (M. Pollack, M. Rabinovich, Y. Wang)
               Breaking News! - Activityraum: 3 Breaking News talks (R. Steorts, A. Terenin, G. Zanella)
21:00 - 23:30  Posters - Aufenthaltsraum: from Pollack to Zanella


Thursday, Jan. 7
08:30 - 09:15  Plenary talk - Plenum: Krys Latuszynski, "Exact Inference for Diffusion Models and Related MCMC Methodology"
09:15 - 10:45  Sessions in parallel (4 speakers each):
               Contributed session n. 3 - Plenum: "Bayesian Inference for Big Environmental Data" (Dorit Hammerling)
               Contributed session n. 12 - Activityraum: "Advances in Adaptive MCMC" (Radu Craiu)
10:45 - 11:00  Coffee break
11:00 - 13:00  Tutorial - Activityraum: Art Owen, "QMC"
13:00 - 14:45  Lunch break
14:45 - 16:15  Sessions in parallel (4 speakers each):
               Contributed session n. 7 - Plenum: "Recent Approximate MCMC Algorithms" (P. Jenkins and A. Johansen)
               Contributed session n. 6 - Activityraum: "Recent Advances in Sequential Monte Carlo" (Anthony Lee)
16:15 - 16:45  Coffee break
16:45 - 18:15  Sessions in parallel (4 speakers each):
               Contributed session n. 9 - Plenum: "Computational Aspects in Bayesian Nonparametrics" (Antonio Lijoi)
               Invited session n. 2 - Activityraum: "QMC" (Nicolas Chopin)
18:15 - 19:45  Sessions in parallel (4 speakers each):
               Contributed session n. 11 - Plenum: "Recent Advances in Variational Bayesian Methods" (Tamara Broderick)
               Contributed session n. 2 - Activityraum: "Probabilistic Numerics: Integrating Inference With Integration" (M. Osborne, C. Oates, F. Briol)
20:30 - 22:00  Buffet at the Schweizerhof (50 CHF) - Plenum
22:00 - 24:00+ Cabaret at the Schweizerhof (free) - Plenum


Contents

On the Computational Complexity of High-Dimensional Bayesian Variable Selection (Michael Jordan)
Is MCMC Dead? (David Dunson)
Cloudy with a Chance of Bayes (Steven L. Scott)
Exact Inference for Diffusion Models and Related MCMC Methodology (Krys Latuszynski)
Computational Challenges in Molecular Dynamics (Tony Lelièvre)
QMC Tutorial (Art B. Owen)
Stan Tutorial (Michael Betancourt)
On the Geometric Ergodicity of Hamiltonian Monte Carlo (Sam Livingstone)
Mix & Match Hamiltonian Monte Carlo (Elena Akhmatskaya & Tijana Radivojevic)
Large Scale Bayesian Inference in Cosmology (Jens Jasche)
Barn Swallow Post-fledging Survival: Using Stan to Fit a Hierarchical Ecological Model (Fränzi Korner-Nievergelt, Beat Naef-Daenzer & Martin Gruebler)
Quasi-Monte Carlo Sampling: Beyond the Unit Cube (Art Owen)
Improving Simulated Annealing through Derandomization (Mathieu Gerber & Luke Bornn)
Measuring Sample Quality with Stein's Method (Lester Mackey)
Comparing MCMC to Variational Approaches in a Model for Bayesian Ordination of Multitable, Discrete Data (Sergio Bacallado)
Bayesian Nonparametric Sparse Graph Models (François Caron)
Posterior Contraction of the Latent Population Polytope in Admixture Models (XuanLong Nguyen)
Posteriors, Conjugacy, and Exponential Families for Completely Random Measures (Tamara Broderick)
Modeling the Neutral Evolution of Bacterial Genomes (Jukka Corander)
Informative Selection Priors in Risk Prediction with Molecular Data (Manuela Zucknick)
Bayesian Approaches for Complex Biological Networks (Francesco Stingo)
Bayesian Parametric Bootstrap for Models with Intractable Likelihoods (Brenda Vo)
Variance Reduction for Doubly Intractable Likelihood Problems (Chris Oates)
On Consistency of Approximate Bayesian Computation (Gael Martin)
Markov Chain Monte Carlo in High-dimension with Heavy-tailed Target Probability Distributions (Kengo Kamatani)
Scaling Limits of Non-Reversible Metropolis-Hastings Chains (Joris Bierkens)
Proximal MCMC Methods and the Confidence in Image Processing with Convex Models (Marcelo Pereyra)
Forecast and Parameter Uncertainty of Chaotic Dynamic Systems (Heikki Haario)
On the Low-dimensional Structure of Bayesian Inference with Transport Maps (Alessio Spantini)
Fitting Lateral Transfer: MCMC for a Phylogenetic Likelihood Obtained from a Sequence of Massive Linear Systems of ODE Initial Value Problems (Geoff Nicholls & Luke Kelly)
Exact Simulation of the Wright-Fisher Diffusion (Paul Jenkins)
Perfect Simulation from the Stationary Distribution of a Uniformly Ergodic Markov Chain with a Proper Atom (Anthony Lee)
A Study in Scarlet and Other Shades of Red (Chang-han Rhee)
Practical Unbiased Monte Carlo for Intractable Models (Sebastian Vollmer)
Probabilistic Numerics: Treating Numerical Computation as Learning (Roman Garnett)
On the Relation between Bayesian and Classical Quadratures (Simo Sarkka)
Obtaining Probabilistic Integration Rules from Monte Carlo-based Methods (François-Xavier Briol)
Multi-resolution Approximations for Big Spatial Data (Matthias Katzfuss)
Identifying Trends in the Spatial Errors of a Regional Climate Model via Clustering (Veronica Berrocal)
Information from Cosmology Experiments (Adam Amara)
Observation-based Blended Projections from Ensembles of Regional Climate Models (Dorit Hammerling)
On Nearest-Neighbor Gaussian Process Models for High-Dimensional Spatiotemporal Datasets (Sudipto Banerjee)
Toward Efficient MCMC for Some High-dimensional Latent Variable Models (Murali Haran)
Calibrating an Ice Sheet Model Using High-dimensional Binary Spatial Data (Won Chang)
Fast, Fully Bayesian Spatiotemporal Inference for fMRI Data (John Hughes)
On Convergence Diagnostics for Adaptive MCMC (Winfried Barta)
Geometric Convergence of Gibbs Samplers for Bayesian Scale-Usage Models (Andrew N. Olsen and Radu Herbei)
A Practical Sequential Stopping Rule for High-Dimensional MCMC (James M. Flegal)
A Comparison Theorem for Data Augmentation Algorithms with Applications (Hee Min Choi & James P. Hobert)
Continuous-Time Importance Sampling (Paul Fearnhead)
The Hierarchical Particle Filter (Adam Johansen)
Fluctuations and Stability of Distributed Particle Filters with Local Exchange (Kari Heine)
Pseudo-Marginal Monte Carlo Optimisation (Axel Finke)
An Overview of Noisy MCMC and SMC (Richard Everitt)
Stability of Noisy Metropolis-Hastings (Felipe Medina Aguayo)
Pseudo-likelihood Accelerated Pseudo-marginal Metropolis-Hastings (Jere Koskela)
On Markov Chain Monte Carlo for Tall Data (Rémi Bardenet)
Linear Response Methods for Accurate Covariance Estimates from Mean Field Variational Bayes (Ryan Giordano)
Scalable Inference for Nonparametric Latent Feature Models (Sinead Williamson)
Determinantal Point Process Priors for Latent Biologic Structure - Modeling and Posterior Simulation (Yanxun Xu)
Modeling and Inference with Feature Allocation Models (Peter Müller, Juhee Lee, Yanxun Xu & Yuan Ji)
Effective Bayesian Nonparametric Inference of Variants in Repetitive Time Series (Wesley Tansey)
A Moment-matching Ferguson & Klass Algorithm (Julyan Arbel and Igor Prünster)
Hazard Mixture Models for the Analysis of Clustered Time-to-event Data (Bernardo Nipoti, Alejandro Jara and Michele Guindani)
ABC for High-dimensional Selection of Irregular Models (David Rossell)
High-performance Computing for High-dimensional Bayesian Model Selection (Marc Suchard)
Bayesian Model Selection Beyond Linear Models (Donatello Telesca)
Trust-Region Updates for Streaming Variational Inference (Matt Hoffman)
Automatic Variational Inference in Stan (Alp Kucukelbir)
Variational Approximations for Gaussian Process Models (James Hensman)
Streaming, Distributed Bayesian Nonparametric Inference via Component Identification (Trevor Campbell)
Adaptive, Delayed-Acceptance MCMC for Targets with Expensive Likelihoods (Chris Sherlock)
Adapting to Model Structure (Jim Griffin)
Adaptive MCMC For Everyone (Jeff Rosenthal)
Adaptation within Exact Approximations of MCMC (Matti Vihola)
Asynchronous Distributed Gibbs Sampling (Alexander Terenin, Daniel Simpson, and David Draper)
MCMC for a Class of Infinite Dimensional Models (Andriy Norets)
Quantum Computing and MCMC (Yazhen Wang)
Informed MCMC Proposals in Discrete Spaces (Giacomo Zanella)
Probabilistic Numerical Methods for the Solution of Nonlinear Partial Differential Equations (Jon Cockayne, Chris Oates, Tim Sullivan and Mark Girolami)
ABC Parameter Estimation: the One Problem One Forest Approach (Jean-Michel Marin, Pierre Pudlo, Louis Raynal, Mathieu Ribatet & Christian P. Robert)
Beyond Worst-case Mixing Times for Markov Chains (Maxim Rabinovich, Aaditya Ramdas, Michael I. Jordan & Martin J. Wainwright)
Fast Likelihood-free Inference via Bayesian Optimization (Michael U. Gutmann and Jukka Corander)
An Unbiased and Scalable Monte Carlo Method for Bayesian Inference for Big Data (Murray Pollock, Paul Fearnhead, Adam M. Johansen & Gareth O. Roberts)
Coupling Monte Carlo Approximation and Optimization Methods for Challenging Inference Problems (Gersende Fort)
Moment Conditions and Bayesian Nonparametrics (Luke Bornn, Neil Shephard & Reza Solgi)
The Small Clustering Problem: When Cluster Sizes Don't Grow with the Number of Data Points (Rebecca C. Steorts)
How to Shape Risk Appetite in Presence of Franchise Value? (Cecilia Aquila and Giovanni Barone-Adesi)
A Comparison of MCMC for Big Data (Jack Baker, Paul Fearnhead, Emily Fox and Christopher Nemeth)
Accelerating Metropolis-Hastings Algorithms by Delayed Acceptance (Marco Banterle, Clara Grazian, Anthony Lee and Christian P. Robert)
Bayesian Spatiotemporal Boundary Detection for Diagnosing Progression of Glaucoma Using Visual Field Data (Samuel I. Berchuck, Joshua L. Warren and Amy H. Herring)
Probabilistic Integration with Theoretical Guarantees (François-Xavier Briol)
Bayesian Approach to CO2 Retrievals for the OCO-2 Instrument Using a Surrogate Forward-Model (Jenny Brynjarsdottir, Amy Braverman and Jonathan Hobbs)
Weighted Particle Tempering (Marcos Carzolio and Scotland Leman)
Adaptive Gibbs Sampler (Cyril Chimisov, Krys Latuszynski and Gareth O. Roberts)
A Conservative Variance Estimation Method for Multivariate MCMC (Ning Dai)
Approximate Bayesian Computation for Semi-Parametric Problems (Clara Grazian and Brunero Liseo)
On the Identifiability of Transmission Dynamic Models for Infectious Diseases (Jarno Lintusaari, Michael U. Gutmann, Samuel Kaski and Jukka Corander)
Block Hyper-G Priors in Bayesian Regression (Christopher M. Hans, Agniva Som and Steven N. MacEachern)
Bridging Between Variational Bayes and True Posterior Via MCMC (Daniel Hernandez-Stumpfhauser, David B. Dunson and Amy H. Herring)
Next-Generation Gibbs-Type Samplers: Combining Strategies to Boost Efficiency (Xiyun Jiao and David A. van Dyk)
Non-informative Reparameterisations for Location-Scale Mixtures (Kaniav Kamary, Kate Lee and Christian P. Robert)
Investigating Lateral Transfer on Phylogenetic Trees - Exact Inference Using Massive Systems of Differential Equations (Luke Kelly and Geoff Nicholls)
Pseudo-Marginal Metropolis Light Transport (Joel Kronander, Thomas B. Schön and Jonas Unger)
A Bayesian Estimate of the Pricing Kernel (Giovanni Barone-Adesi, Chiara Legnazzi and Antonietta Mira)
On the Asymptotic Behaviour of ABC (Wentao Li and Paul Fearnhead)
Baby Reversible Jump for Model Choice (John C. Liechty, Merrill W. Liechty, Murali Haran and Ephraim Hanks)
Bayesian Predictive Modeling for Personalized Treatment Selection in Oncology (Junsheng Ma, Francesco Stingo, and Brian P. Hobbs)
How to Sample from a Distribution When Only the Moments Are Known with an Application to Affine Models (Filippo Macaluso, Antonietta Mira and Paul Schneider)
Regularized Supervised Topic Models for High-Dimensional Multi-Class Regression (Måns Magnusson, Leif Jonsson and Mattias Villani)
Adaptive Incremental Mixture Markov Chain Monte Carlo (Florian Maire, Nial Friel, Antonietta Mira and Adrian Raftery)
On Approximately Simulating Conditioned Diffusions (Sean Malory and Chris Sherlock)
Cheeger Inequalities for the Mixing Times of Hamiltonian MCMC (Oren Mangoubi and Natesh Pillai)
Accelerating Bayes Inference for Evolutionary Biology Models (Xavier Meyer, Bastien Chopard and Nicolas Salamin)
Optimal Scaling of Particle and Pseudo-Marginal Metropolis-Adjusted Langevin Algorithms (Chris Nemeth)
Variational Consensus Monte Carlo (Maxim Rabinovich, Elaine Angelino and Michael I. Jordan)
A Gaussian Process Latent Variable Model for Single Cell Pseudotime Estimation (John Reid and Lorenz Wernisch)
Pseudo-Marginal MCMC for Parameter Estimation in α-Stable Distributions (Marina Riabiz, Fredrik Lindsten and Simon Godsill)
Ensemble Kalman Particle Filter For Convective Scale Data Assimilation (Sylvain Robert and Hans R. Künsch)
From Data to Models in Conveying HIV/STD Prevalence Information to the Public: How Multi-Level Models Can Be Used To Improve Inference And Better Ensure the Anonymity of Released Information (Cody T. Ross and Karl J. Frost)
Gradient Importance Sampling (Ingmar Schuster)
Some Contributions to Sequential Monte Carlo Methods for Option Pricing (Deborshee Sen)
A Bayesian Nonparametric Approach to the Analysis of High Dimensional Longitudinal Data Sets (Kan Shang and Cavan Reilly)
Bayes Estimates of the Diversification of International Markets with Hierarchical Copulas and Vines (Alexander Knyazev, Oleg Lepekhin and Arkady Shemyakin)
Increased Levels of Co-Infection Revealed with an Approximate Bayesian Computation Approach (Jukka Sirén, Benoit Barrès and Anna-Liisa Laine)
Inferring Smart Home User Status with Particle MCMC (Jonathan Steinhart)
Using Bayesian Computing to Solve a Complex Problem in Astrophysics (David C. Stenning, Rachel Wagner-Kaiser, David A. van Dyk, Ted von Hippel, Nathan Stein, Elliot Robinson and Ata Sarajedini)
Gradient-Free Hamiltonian Monte Carlo with Efficient Kernel Exponential Families (Heiko Strathmann, Dino Sejdinovic, Samuel Livingstone, Zoltan Szabo and Arthur Gretton)
Improving the Efficiency of the Parallel Tempering Algorithm (Nicholas Tawn and Gareth Roberts)
Accuracy and Validity of Posterior Quantiles in Bayesian Inference Using Empirical Likelihoods (Laura Turbatu and Elvezio Ronchetti)
Spatio-temporal Species Distribution Model to Detect Outbreaks of Coral Consuming Crown-of-Thorns Starfish in the Great Barrier Reef (Jarno Vanhatalo, Geoff Hosack and Hugh Sweatman)
Multivariate Output Analysis for Markov Chain Monte Carlo (Dootika Vats, James M. Flegal, and Galin L. Jones)
A Comparison of Different Strategies for a Particle Markov Chain Monte Carlo Algorithm: Application to Plant Growth Models (Gautier Viaud and Paul-Henry Cournède)
Likelihood-Free Methods for Stochastic Models of Collective Cell Spreading (Brenda N. Vo)
On the Poisson Equation for Metropolis-Hastings Chains (Aleksandar Mijatović and Jure Vogrinc)
Self-Tuning Metropolis-Hastings Moves (Christopher Sherlock and Lianting Xue)
Assessing Monte Carlo Standard Error in Diffusion Magnetic Resonance Imaging (Yang Yang)

Author Index


PLENARY SPEAKERS

On the Computational Complexity of High-Dimensional Bayesian Variable Selection
Michael Jordan

(University of California, Berkeley)

We study the computational complexity of Markov chain Monte Carlo (MCMC) methods for high-dimensional Bayesian linear regression under sparsity constraints. We first show that a Bayesian approach can achieve variable-selection consistency under relatively mild conditions on the design matrix. We then demonstrate that the statistical criterion of posterior concentration need not imply the computational desideratum of rapid mixing of the MCMC algorithm. By introducing a truncated sparsity prior for variable selection, we provide a set of conditions that guarantee both variable-selection consistency and rapid mixing of a particular Metropolis-Hastings algorithm. The mixing time is linear in the number of covariates up to a logarithmic factor. Our proof controls the spectral gap of the Markov chain by constructing a canonical path ensemble that is inspired by the steps taken by greedy algorithms for variable selection. [Joint work with Yun Yang and Martin Wainwright.]
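
To make the setting concrete, here is a minimal Python sketch of Metropolis-Hastings over inclusion vectors for sparse linear regression. It uses a generic single-flip proposal and a simple penalized least-squares score standing in for the log posterior; it is not the truncated-sparsity-prior sampler analyzed in the talk, and all names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: sparse linear model, y = X @ beta + noise.
n, p, s = 100, 20, 3
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 2.0
y = X @ beta + rng.standard_normal(n)

def log_post(gamma, lam=4.0, sigma2=1.0):
    """Unnormalised log posterior score of an inclusion vector gamma:
    Gaussian log likelihood with the selected columns fitted by least
    squares, minus a complexity penalty of lam per included covariate."""
    idx = np.flatnonzero(gamma)
    if idx.size == 0:
        rss = y @ y
    else:
        coef, *_ = np.linalg.lstsq(X[:, idx], y, rcond=None)
        rss = np.sum((y - X[:, idx] @ coef) ** 2)
    return -0.5 * rss / sigma2 - lam * idx.size

gamma = np.zeros(p, dtype=int)               # start from the empty model
lp = log_post(gamma)
for _ in range(5000):
    j = rng.integers(p)                      # flip one inclusion indicator
    prop = gamma.copy()
    prop[j] ^= 1
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:  # symmetric proposal: Metropolis
        gamma, lp = prop, lp_prop

print("included covariates:", np.flatnonzero(gamma))
```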

Is MCMC Dead?
David Dunson

(Duke University, USA)

MCMC methods and related sampling algorithms represented a revolution in Bayesian statistics starting in the early 1990s and continuing into the 21st century. However, we are currently living in the age of “big data” in which machine learning, scalable algorithms, and optimization have increasingly dominated practice. This raises a natural question about whether Bayesians need to put away their old tools, such as MCMC, and be properly trained in modern optimization and computation. It is now routine to design algorithms within the MapReduce framework to exploit distributed computing platforms and large clusters. Classical MCMC algorithms seem poorly equipped for this new paradigm. However, existing optimization-based approaches, such as variational Bayes, can't hold a candle to MCMC in terms of accurate uncertainty quantification. With this motivation, I propose several new approaches to resuscitate MCMC, including embarrassingly parallel MCMC (EP-MCMC) and approximate MCMC (aMCMC). I briefly highlight strong theoretical support for these broad classes of scalable algorithms, and illustrate their performance through several examples including up to 100s of millions of observations.


Cloudy with a Chance of Bayes
Steven L. Scott

(Google)

Monte Carlo based Bayesian inference has historically been CPU constrained. The massive data sets of the “Big Data” era further impose memory and disk constraints. Cloud computing offers a cheap way to bring hundreds, thousands, or even tens of thousands of machines to bear on a problem at minimal cost (currently as low as 1.3 cents per hour per core). Cloud computing offers effectively infinite CPU, memory, and disk, but it brings its own challenges. Communicating between machines is expensive, and machines can either fail or (especially at low price points) be preempted by higher priority jobs. Effective Bayesian inference in a shared cloud computing environment requires that the different machines be able to run asynchronously, with minimal communication.

The consensus Monte Carlo algorithm (CMC) attacks the challenges of cloud computing by partitioning the data among worker machines. Each machine produces a full Monte Carlo sample from its posterior distribution given its data, and then a “consensus” approximation to the full-data posterior is formed by combining the draws across worker machines. For continuous unimodal posteriors determined by a few moments, simple weighted averages of draws are an effective and reliable method of forming consensus. For discrete valued or multi-modal posteriors more elaborate combination methods are needed.
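
The weighted-average combination step can be sketched in a few lines of Python. This is a minimal illustration under the Gaussian assumption mentioned above, using precision (inverse sample covariance) weights; the worker draws, dimensions, and constants below are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def consensus_draws(worker_draws):
    """Combine per-worker posterior draws by precision-weighted averaging.
    worker_draws: list of (G, d) arrays of draws, one array per worker.
    Each worker is weighted by the inverse of its empirical sub-posterior
    covariance, the exact combination rule when every sub-posterior is
    Gaussian."""
    weights = [np.linalg.inv(np.cov(d, rowvar=False)) for d in worker_draws]
    pooled = np.linalg.inv(np.sum(weights, axis=0))
    out = np.empty_like(worker_draws[0])
    for g in range(out.shape[0]):
        out[g] = pooled @ sum(W @ d[g] for W, d in zip(weights, worker_draws))
    return out

# Toy check: three "workers" draw from Gaussians with different means.
workers = [rng.multivariate_normal(mean=m, cov=np.eye(2), size=1000)
           for m in ([0.0, 0.0], [1.0, 1.0], [2.0, 0.5])]
print(consensus_draws(workers).mean(axis=0))
```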

Exact Inference for Diffusion Models and Related MCMC Methodology
Krys Latuszynski

(University of Warwick, UK)

Inference for stochastic differential equations relies traditionally on Euler-type approximations that introduce systematic bias to the likelihood functions. The approach results in inference errors that are difficult to quantify and in computational inefficiency due to balancing bias and variance. Exact Algorithms (Beskos, Papaspiliopoulos, Roberts) allow for sampling without discretisation error for a class of diffusion processes and can be used as a vehicle for exact inference, where the only source of error is Monte Carlo (and not a systematic approximation error). I will discuss the design of several such Monte Carlo algorithms aimed at inference in complex models, including Markov switching diffusions, jump diffusions, or diffusion-driven Cox processes, and highlight the methodological challenges that arise in this intractable likelihood context and lead to Bernoulli Factories or optimal scaling of Barker's algorithm. (This is joint work with Flavio Goncalves, Jan Palczewski, Omiros Papaspiliopoulos, Gareth Roberts.)

Computational Challenges in Molecular Dynamics
Tony Lelièvre

(Ecole des Ponts ParisTech)

I will present some sampling problems raised by molecular dynamics simulations. The first part of the talk will be devoted to free energy calculations and adaptive biasing techniques. The second part will focus on problems raised by the efficient sampling of trajectories. In both cases, the difficulties are related to the metastability of the stochastic processes which are used to model matter at the atomistic level. This metastability is linked to the multimodality of the underlying statistical ensemble. We will review some numerical techniques which have been proposed to overcome these difficulties, as well as some mathematical developments which underpin these algorithms.


TUTORIALS

QMC Tutorial
Art B. Owen

(Stanford University, USA)

Quasi-Monte Carlo (QMC) sampling is an alternative to plain Monte Carlo (MC) sampling. In their basic forms, both use simple averages of function values at a set of input points. Where MC takes random points, QMC takes strategically arranged points designed to be more uniform than random points are. Where MC attains an RMSE of O(n^{-1/2}), QMC can attain an error rate of O(n^{-1+ε}).

This survey is a somewhat personal, statistician's view of QMC. It presents QMC as stratification taken to the limits of what is possible mathematically. Foundations include discrepancy theory and the Koksma-Hlawka theorem. There are digital constructions from van der Corput through to the recent higher order nets of Josef Dick, as well as integration lattices.

Plain QMC lacks the easy error estimation methods of MC. A hybrid, randomized QMC (RQMC), allows error estimation while preserving the QMC properties. Surprisingly, RQMC can reduce the RMSE to O(n^{-3/2+ε}) for smooth enough problems. The talk will describe some recent work of Chopin and Gerber on applying QMC ideas to particle sampling, as well as approaches to employing QMC in Markov chain Monte Carlo.

The main forum for QMC research is the MCQMC series of conferences. MCQMC 2016 will be held at Stanford University, August 14-19, 2016.
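
As a quick numerical illustration of the MC versus RQMC error rates, the sketch below compares plain Monte Carlo with a scrambled Sobol' sequence on a smooth integrand with a known answer. It assumes SciPy 1.7+ for scipy.stats.qmc; the integrand, dimension, and sample sizes are arbitrary choices for the demo.

```python
import numpy as np
from scipy.stats import qmc

d, n, reps = 4, 1024, 50
true = (np.e - 1.0) ** d          # integral of exp(x1+...+xd) over [0,1]^d
rng = np.random.default_rng(0)

def f(x):
    return np.exp(x.sum(axis=1))

mc_err, rqmc_err = [], []
for r in range(reps):
    x = rng.random((n, d))                       # plain Monte Carlo points
    mc_err.append(f(x).mean() - true)
    sobol = qmc.Sobol(d, scramble=True, seed=r)  # randomized (scrambled) QMC
    rqmc_err.append(f(sobol.random(n)).mean() - true)

print("MC   RMSE:", np.sqrt(np.mean(np.square(mc_err))))
print("RQMC RMSE:", np.sqrt(np.mean(np.square(rqmc_err))))
```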

Stan Tutorial
Michael Betancourt

(University of Warwick, UK)

This tutorial will aim to provide an interactive introduction to the use of Stan in R with RStan and ShinyStan. We'll write and fit a few models, analyze those fits using numerical and visual diagnostics, and investigate potential solutions for poor fits. Anyone interested in attending is encouraged to download the latest versions of RStan and ShinyStan before the conference. For help with installation issues please consult the Stan Users' List.

RStan 2.8.2: https://cran.r-project.org/web/packages/rstan/index.html
ShinyStan 2.0.1: https://cran.r-project.org/web/packages/shinystan/index.html
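
The tutorial itself works in R with RStan. For readers following along in Python, a roughly equivalent compile-sample-summarize loop looks like the sketch below, assuming CmdStanPy and CmdStan are installed (these are current tools, not the 2015-era versions linked above); the Bernoulli model and data are the stock introductory example, written out here so the snippet is self-contained.

```python
from pathlib import Path
from cmdstanpy import CmdStanModel

# A minimal Stan program, written to disk so the example is self-contained.
Path("bernoulli.stan").write_text("""
data { int<lower=0> N; array[N] int<lower=0, upper=1> y; }
parameters { real<lower=0, upper=1> theta; }
model { theta ~ beta(1, 1); y ~ bernoulli(theta); }
""")

model = CmdStanModel(stan_file="bernoulli.stan")   # compiles the program
fit = model.sample(data={"N": 10, "y": [0, 1, 0, 0, 0, 0, 0, 0, 0, 1]},
                   chains=4, iter_sampling=1000)   # runs the NUTS sampler
print(fit.summary())      # posterior means, MCSE, ESS, R-hat diagnostics
```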


INVITED SESSIONS

Hamiltonian Monte Carlo
Organizer: Michael Betancourt

On the Geometric Ergodicity of Hamiltonian Monte Carlo
Sam Livingstone

(University College London, UK)

Hamiltonian Monte Carlo (HMC) has proven to be an empirical success, but there are still many open questions when it comes to understanding the method rigorously. From the Markov chains perspective, the method is only known to produce an irreducible chain in the case where the target density π(x) is bounded below by some positive constant, and there is little discussion of rates of convergence. Here we consider ergodicity properties of HMC. After exploring the issue, we present some general and some specific results isolating when the method will and will not produce a geometric Markov chain, at times restricting attention to the one-dimensional class of targets π(x) ∝ exp(−|x|^β) for β > 0. We show that in this case, if the integration time parameter T is fixed then geometric ergodicity will occur essentially if 1 ≤ β ≤ 2, as in the case of the Metropolis-adjusted Langevin algorithm. Intriguingly, however, if the integration time T is allowed to depend on position, then a version of the method can be designed which will produce a geometrically ergodic chain for any 0 < β ≤ 2.
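
For reference, here is a bare-bones HMC sampler for the one-dimensional target family above, with fixed integration time T = εL. This is a minimal sketch, not the position-dependent integration-time variant the abstract describes; the step size, path length, and β are arbitrary choices (the gradient formula assumes β > 1 so that it is defined at zero).

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 1.5                      # target pi(x) proportional to exp(-|x|^beta)

def U(x):                       # potential energy = -log target
    return abs(x) ** beta

def dU(x):                      # gradient of U (well behaved for beta > 1)
    return beta * np.sign(x) * abs(x) ** (beta - 1)

def hmc_step(x, eps=0.2, L=20):
    """One HMC transition: leapfrog integration for fixed time T = eps * L,
    followed by a Metropolis accept/reject correction."""
    p0 = rng.standard_normal()
    xn, pn = x, p0 - 0.5 * eps * dU(x)          # initial half step in momentum
    for i in range(L):
        xn = xn + eps * pn                       # full step in position
        pn = pn - (0.5 if i == L - 1 else 1.0) * eps * dU(xn)
    log_acc = U(x) - U(xn) + 0.5 * (p0 ** 2 - pn ** 2)
    return xn if np.log(rng.random()) < log_acc else x

x, draws = 1.0, []
for _ in range(5000):
    x = hmc_step(x)
    draws.append(x)
print("sample mean and variance:", np.mean(draws), np.var(draws))
```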

Mix & Match Hamiltonian Monte Carlo
Elena Akhmatskaya & Tijana Radivojevic

(Basque Center for Applied Mathematics, Ikerbasque, Spain)

The Hamiltonian (Hybrid) Monte Carlo (HMC) method, initially proposed in High Energy Physics, is becoming a popular tool for solving complex and intractable problems of statistical inference. We introduce a number of modifications to the original formulation of HMC in order to enhance sampling from high-dimensional or strongly correlated target densities. The new features include a modified Metropolis test, an updated momentum refreshment step, and alternative integration and annealing schemes. All alterations have been formulated and implemented within the Generalized Shadow Hybrid Monte Carlo framework, earlier proposed by the authors for simulation of molecular systems.


Large Scale Bayesian Inference in Cosmology
Jens Jasche

(Technical University Munich, Germany)

Presently proposed and designed future cosmological probes and surveys permit us to anticipate the upcoming avalanche of cosmological information during the next decades. The increase of valuable observations needs to be accompanied by the development of efficient and accurate information processing technology in order to analyse and interpret this data. Besides traditional systematics and uncertainties such as survey geometries and observational noise, modern data analysis needs to account for the complex statistical properties of gravitationally evolved matter fields and also has to provide corresponding uncertainty quantification. The analysis of the structure and evolution of our inhomogeneous Universe therefore requires solving non-linear statistical inference problems in very high dimensional parameter spaces, involving on the order of 10^7 or more parameters. For these reasons, in this talk I will address the problem of high dimensional Bayesian inference from cosmological data sets via the recently proposed BORG algorithm. This method couples an approximate model of structure formation to a Hybrid Monte Carlo algorithm, providing a fully probabilistic, physical model of the non-linearly evolved density field as probed by galaxy surveys. Besides highly accurate and detailed measurements of three dimensional cosmic density and velocity fields, this methodology also infers plausible formation histories for the observed large scale structure. In this talk I will give an overview of this promising path towards Bayesian chrono-cosmography, the subject of inferring the four dimensional state of our Universe from observations.

Barn Swallow Post-fledging Survival: Using Stan to Fit a Hierarchical Ecological Model
Fränzi Korner-Nievergelt, Beat Naef-Daenzer & Martin Gruebler
(Swiss Ornithological Institute Sempach, Switzerland)

For juvenile birds, the first days after leaving the nest are a crucial phase in their life history: during this period they gradually become independent of their parents. However, less than 50% of the fledglings achieve independence, i.e., survive the first month after fledging. Post-fledging survival strongly depends on individual characteristics such as body condition, and on parental decisions such as the timing of breeding. However, most studies quantifying post-fledging survival neglect that survival of chicks may be family-specific and do not consider dependencies in survival within families. This is mostly because, before the development of Stan, algorithms for fitting hierarchical ecological models had not been implemented in user-friendly software.

We followed barn swallow Hirundo rustica fledglings during the first three weeks after fledging using radio-telemetry and investigated factors that influence their survival. We modelled daily survival at the family and population levels. The model took into account that detection probability differed between families and between types of radio transmitter. The hierarchical Cormack-Jolly-Seber model was fitted using Hamiltonian Monte Carlo in Stan.

Between-family variance in survival was substantial and affected the conclusions drawn when it was ignored. We further found that the longer parents cared for their fledglings, the higher was the proportion that survived until independence. The study illustrates that including random family effects in analyses of post-fledging survival can yield more detailed insight into the causal mechanisms affecting the number of offspring achieving independence. The program Stan gives scope for efficient modelling of complex hierarchical biological relationships.


QMC
Organizer: Nicolas Chopin

Quasi-Monte Carlo Sampling: Beyond the Unit Cube
Art Owen
(Stanford University)

The vast majority of QMC work has been devoted to sampling the unit hypercube, often in high dimensional settings. The traditional approach to sampling other spaces such as the sphere or simplex has been to employ a measure preserving transformation from the unit cube to the domain of interest. The result is a QMC integration problem using the composition of that transformation and the integrand of interest. The composition is usually not smooth enough to fully benefit from QMC. In this work we present QMC methods built inside those other spaces, especially the triangle. One construction mimics the van der Corput sequence using a recursive partition of the space. Another resembles the Kronecker sequence. The first construction extends to Cartesian products of spaces via digital nets, and scrambling those nets increases accuracy for smooth enough integrands. (This is joint work with Kinjal Basu.)
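
For readers unfamiliar with the classical construction being generalized here: the base-b van der Corput sequence on [0, 1) reverses the base-b digits of 0, 1, 2, ... about the radix point, and the talk's triangle analogue mimics this via recursive partitioning. A minimal sketch of the unit-interval version:

```python
def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput sequence on [0, 1):
    the radical inverse of the integers 0, 1, 2, ... in the given base."""
    points = []
    for k in range(n):
        x, denom = 0.0, 1.0
        while k > 0:
            k, digit = divmod(k, base)  # peel off the lowest base-b digit
            denom *= base
            x += digit / denom          # place it after the radix point
        points.append(x)
    return points

print(van_der_corput(8))
# [0.0, 0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875]
```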

Improving Simulated Annealing through Derandomization
Mathieu Gerber & Luke Bornn
(Harvard University, USA)

We propose and study a quasi-Monte Carlo (QMC) version of simulated annealing (SA) on continuous state spaces. The convergence of this new deterministic optimization method, which we refer to as QMC-SA, is proved both in the case where the same Markov kernel is used throughout the course of the algorithm and in the case where it shrinks over time to improve local exploration. The theoretical guarantees for QMC-SA are stronger than those for classical SA, for example requiring no objective-dependent conditions on the algorithm's cooling schedule and allowing for convergence results even with time-varying Markov kernels (which, for Monte Carlo SA, only exist for convergence in probability). We further explain how our results in fact apply to a broader class of optimization methods, including for example threshold accepting, for which to our knowledge no convergence results currently exist, and show how randomness can be re-introduced to get a stochastic version of QMC-SA which exhibits (almost surely) the good theoretical properties of the deterministic algorithm. We finally illustrate the superiority of QMC-SA over SA algorithms in a numerical study, notably on a non-differentiable and high dimensional optimization problem borrowed from the spatial statistics literature.

Measuring Sample Quality with Stein's Method
Lester Mackey
(Stanford University)

To improve the efficiency of Monte Carlo estimation, practitioners are turning to biased Markov chain Monte Carlo procedures that trade off asymptotic exactness for computational speed. The reasoning is sound: a reduction in variance due to more rapid sampling can outweigh the bias introduced. However, the inexactness creates new challenges for sampler and parameter selection, since standard measures of sample quality like effective sample size do not account for asymptotic bias. To address these challenges, we introduce a new computable quality measure based on Stein's method that bounds the discrepancy between sample and target expectations over a large class of test functions. We use our tool to compare exact, biased, and deterministic sample sequences and illustrate applications to hyperparameter selection, convergence rate assessment, and quantifying bias-variance tradeoffs in posterior inference.


Bayesian Nonparametrics
Organizers: Tamara Broderick and Igor Pruenster

Comparing MCMC to Variational Approaches in a Model for Bayesian Ordination of Multitable, Discrete Data
Sergio Bacallado
(Stanford University)

Our model is motivated by the analysis of species sampling from many environments. The prior on the distribution of species in each environment is a well-known normalized random measure, and the correlations between environments are determined by factors which can be latent (random effects) or observed (fixed effects). We compare several Gibbs sampling approaches to a variational method for approximating the posterior distribution of these factors. This is joint work with Boyu Ren, Stefano Favaro, and Lorenzo Trippa.

Bayesian Nonparametric Sparse Graph Models
François Caron

(University of Oxford, UK)

In this talk, I will present a Bayesian nonparametric model of graphs using exchangeable random measures on the plane. The construction builds on the framework of completely random measures (CRM). For certain classes of CRMs, the associated graphs are sparse with power-law degree distributions, with a single parameter tuning the sparsity of the graph. Posterior inference of the parameters of the graph can be carried out through a Markov chain Monte Carlo algorithm that alternates between Hamiltonian and Metropolis-Hastings updates. I then explore network properties in a range of real datasets, including Facebook social circles, a political blogosphere, protein networks, citation networks, and world wide web networks, including networks with hundreds of thousands of nodes and millions of edges.

Posterior Contraction of the Latent Population Polytope in Admixture Models
XuanLong Nguyen

(University of Michigan, USA)

We study the posterior contraction behavior of the latent population structure that arises in admixture models as the amount of data increases. We adopt the geometric view of admixture models – alternatively known as topic models – as a data generating mechanism for points randomly sampled from the interior of a (convex) population polytope, whose extreme points correspond to the population structure variables of interest. Rates of posterior contraction are established with respect to the Hausdorff metric and a minimum matching Euclidean metric defined on polytopes. Tools developed include posterior asymptotics of hierarchical models and arguments from convex geometry. We also present experiments with simulated and real data, which demonstrate the contraction behavior of the posterior distributions obtained by an MCMC algorithm.


Posteriors, Conjugacy, and Exponential Families for Completely Random Measures
Tamara Broderick

(MIT, USA)

We demonstrate how to calculate posteriors for general Bayesian nonparametric priors and likelihoods based on completely random measures (CRMs). We further show how to represent Bayesian nonparametric priors as a sequence of finite draws using a size-biasing approach, and how to represent full Bayesian nonparametric models via finite marginals. Motivated by conjugate priors based on exponential family representations of likelihoods, we introduce a notion of exponential families for CRMs, which we call exponential CRMs. This construction allows us to specify automatic Bayesian nonparametric conjugate priors for exponential CRM likelihoods. We demonstrate that our exponential CRMs allow particularly straightforward recipes for size-biased and marginal representations of Bayesian nonparametric models. Along the way, we prove that the gamma process is a conjugate prior for the Poisson likelihood process and the beta prime process is a conjugate prior for a process we call the odds Bernoulli process. We deliver a size-biased representation of the gamma process and a marginal representation of the gamma process coupled with a Poisson likelihood process.

Bayesian Molecular Biology
Organizers: Clelia Di Serio and Arnoldo Frigessi

Modeling the Neutral Evolution of Bacterial Genomes
Jukka Corander
(University of Helsinki, Finland)

Microbiologists agree that a bacterial species should be ‘genomically coherent’, even though there is no consensus on how to define this. The comparison of genomes from the same named species has shown we can divide genomes into a core of ubiquitous genes, and accessory genes that may or may not be present. The ecological significance of this variation in gene content, and how it is compatible with previous models of diversification and speciation that emphasize the unifying force of homologous recombination, remains unclear. We use a parsimonious model combining diversification in both the core and accessory genome. Despite its conceptual simplicity, our model has an intractable likelihood, motivating the use of ABC to infer the parameters. The model fits well to a systematic population sample of 616 pneumococcal genomes, capturing the major features of the population structure with parameter values that agree well with previous empirical estimates. The model does not include explicit selection on individual genes, suggesting that crude comparisons of gene content may be a poor predictor of ecological function. Moreover, comparing with empirical observations, we find a clearly divergent subpopulation of pneumococci that are inconsistent with the model and may be considered genomically incoherent with the rest of the population. These strains have a distinct disease tropism and may be rationally defined as a separate species.


Informative Selection Priors in Risk Prediction with Molecular Data
Manuela Zucknick

(University of Helsinki, Finland)

In oncological clinical trials, genome-wide data for multiple molecular data types covering transcriptomics (such as gene expression), epigenomics (e.g. CpG methylation) and genomics (copy number variation etc.) is routinely collected, while the number of patients is usually restricted by the trial design. In this context, Bayesian hierarchical models for regression can be used for prediction of therapy response or patient survival using high-dimensional molecular data as input data. Automatic variable selection can be achieved by including adaptive shrinkage priors (e.g. Bayesian lasso) or selection indicators (spike-and-slab priors). Integration of data from several molecular sources via informative selection priors can improve the performance of prediction models and the efficiency of variable selection, lead to new insights into the disease biology, and improve computational efficiency by effectively restricting the model space.

As an example, I will introduce a logistic model for the integration of (epi-)genomic data (e.g. copy number variation) and gene expression data. Particular emphasis is put on the careful choice of flexible prior distributions, and I will demonstrate the effects of these choices with respect to performance in prediction, variable selection, estimation and computational efficiency. In this context, the Lindley-Bartlett paradox is important, which implies that while a small slab variance shrinks the selected coefficients to the prior mean, increasing the slab variance will result in the posterior selection probabilities of the covariates tending to zero. This is further aggravated in logistic regression, where a large prior variance may induce an implausible prior distribution on the response variable. Current proposals for hyper-priors of this variance may not be optimal for binary outcomes, and I will explore various approaches in our context of models with high-dimensional input data and informative selection priors.

This is joint work with Manuel Wiesenfarth and Ana Corberán-Vallet.
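
The slab-variance effect described above is easy to see numerically in the simplest possible case: one covariate, a Gaussian linear model, and an exact-zero spike versus a N(0, v) slab, where both marginal likelihoods are available in closed form. The sketch below (illustrative data and constants, not the talk's logistic model) shows the posterior inclusion probability collapsing as the slab variance v grows, even though the covariate is truly active:

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

rng = np.random.default_rng(0)
n, sigma2 = 50, 1.0
x = rng.standard_normal(n)
y = 0.5 * x + rng.standard_normal(n) * np.sqrt(sigma2)   # covariate is active

def inclusion_prob(v, prior_incl=0.5):
    """Posterior probability that the covariate is included, under a
    N(0, v) slab on its coefficient versus an exact-zero spike."""
    log_m1 = mvn.logpdf(y, mean=np.zeros(n),
                        cov=sigma2 * np.eye(n) + v * np.outer(x, x))
    log_m0 = mvn.logpdf(y, mean=np.zeros(n), cov=sigma2 * np.eye(n))
    log_bf = log_m1 - log_m0                 # Bayes factor: slab vs spike
    return 1.0 / (1.0 + (1 - prior_incl) / prior_incl * np.exp(-log_bf))

for v in [0.01, 1.0, 100.0, 1e4, 1e8]:
    print(f"slab variance {v:>10g}: P(include) = {inclusion_prob(v):.3f}")
```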

Bayesian Approaches for Complex Biological Networks
Francesco Stingo

(The University of Texas MD Anderson Cancer Center, USA)

Multi-dimensional data constituted by measurements along multiple axes have emerged across many scientific areas such as genomics and cancer surveillance. Traditional multivariate approaches are unsuitable for such highly structured data due to inefficiency, loss of power and lack of interpretability. I will illustrate a novel class of multi-dimensional graphical models that includes both directed and undirected graphs, as well as arbitrary combinations of these.


Algorithms for Intractable Problems
Organizers: Nial Friel and Kerrie Mengersen

Bayesian Parametric Bootstrap for Models with Intractable Likelihoods

Brenda Vo
(Queensland University of Technology, Brisbane, Australia)

In this talk it is demonstrated how the Bayesian parametric bootstrap of Efron (2012) can be adapted to models with intractable likelihoods. The approach is most appealing when the semi-automatic approximate Bayesian computation (ABC) summary statistics (estimates of posterior means from regressions) of Fearnhead and Prangle (2012) are selected. After a pilot run of ABC, the likelihood-free parametric bootstrap approach requires very few model simulations to produce an approximate posterior, which can be a useful approximation in its own right. An alternative is to use this approximation as a proposal distribution in other ABC algorithms to make them more efficient. Our work is motivated by estimating the parameters of melanoma cell colony expansion. This is joint work with Chris Drovandi and Tony Pettitt of QUT, Brisbane, Australia.
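
For orientation, basic rejection ABC (the expensive building block that the bootstrap approach above seeks to make cheaper) can be written in a few lines. This toy sketch infers the rate of an exponential model from its sample mean; all constants and the tolerance rule are arbitrary, and it is not the semi-automatic or bootstrap method of the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: infer the rate of an exponential model from its sample mean.
data = rng.exponential(scale=1 / 2.0, size=100)       # true rate = 2
s_obs = data.mean()                                   # observed summary

def simulated_summary(rate, n=100):
    return rng.exponential(scale=1 / rate, size=n).mean()

# Rejection ABC: draw from the prior, keep draws whose simulated summary
# lands within a tolerance eps of the observed summary.
prior = rng.uniform(0.1, 10.0, size=50_000)
dist = np.abs(np.array([simulated_summary(r) for r in prior]) - s_obs)
eps = np.quantile(dist, 0.01)                         # keep the closest 1%
posterior = prior[dist <= eps]
print(f"ABC posterior mean rate: {posterior.mean():.2f} (truth: 2.0)")
```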

Variance Reduction for Doubly Intractable Likelihood Problems
Chris Oates
(University of Technology, Sydney)

Many popular statistical models for complex phenomena are “doubly” intractable, in the sense that the likelihood function involves an intractable normalising constant. Samples from the posterior can in principle be generated by constructing a Markov chain on an augmented state space (as in e.g. the exchange algorithm and pseudo-marginal MCMC). However, a bigger state space requires more time for a Markov chain to explore, and this can lead to unreasonably high estimator variance, motivating the development of variance reduction techniques for doubly intractable problems. Here we describe novel control variates that can dramatically reduce estimator variance in this challenging setting. Examples will be presented in social network analysis, physics-based lattice models and stochastic differential equations.
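
The control-variate idea itself is simple to state in code: subtract a correlated quantity with known expectation, scaled by the variance-minimising coefficient. The generic sketch below (toy integrand, not the talk's construction for doubly intractable targets) illustrates the variance drop:

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate mu = E[f(X)] for X ~ N(0, 1), using h(X) = X, whose mean is
# known (zero), as a control variate: f_cv = f - c * (h - E[h]).
x = rng.standard_normal(100_000)
f = np.exp(x / 2)                      # E[f] = exp(1/8), known for checking
h = x
c = np.cov(f, h)[0, 1] / np.var(h)     # variance-minimising coefficient
f_cv = f - c * (h - 0.0)

print(f"plain MC:        mean {f.mean():.4f}, variance {f.var():.4f}")
print(f"control variate: mean {f_cv.mean():.4f}, variance {f_cv.var():.4f}")
print("true value:", np.exp(1 / 8))
```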

On Consistency of Approximate Bayesian Computation
Gael Martin
(Monash University)

Approximate Bayesian computation (ABC) methods have become increasingly prevalent of late, facilitating as they do the analysis of intractable, or challenging, statistical problems. With the initial focus being primarily on the practical import of ABC, exploration of its formal statistical properties has begun to attract more attention. The aim of this paper is to establish general conditions under which ABC methods are Bayesian consistent, in the sense of producing draws that yield a degenerate posterior distribution at the true parameter (vector) asymptotically (in the sample size). We derive conditions under which arbitrary summary statistics yield consistent inference, with these conditions linked to the identification of the true parameters. Using simple illustrative examples that have featured in the literature, we demonstrate that identification, and hence consistency, is unlikely to be achieved in many cases, and propose a simple diagnostic procedure that can indicate the presence of this problem. We also touch upon the link between consistency and the use of auxiliary models within ABC, and illustrate the subsequent results in a simple Lotka-Volterra predator-prey model. Lastly, we explore the relationship between consistency and the use of marginalization to obviate the curse of dimensionality.

Joint work with David T. Frazier and Christian P. Robert.


High-Dimensional MCMC
Organizer: Gareth Roberts

Markov Chain Monte Carlo in High Dimension with Heavy-tailed Target Probability Distributions

Kengo Kamatani (Osaka University)

High-dimensional asymptotic theory of Markov chain Monte Carlo methods is well understood for a class of light-tailed target distributions. On the other hand, despite its practical importance, theoretical analysis for the heavy-tailed case is still in its infancy. We review recent results for the study of heavy-tailed target distributions and show some counterintuitive results. Practical implementation issues will also be addressed.

Scaling Limits of Non-Reversible Metropolis-Hastings Chains

Joris Bierkens (University of Warwick, UK)

Non-reversible chains may have significantly improved convergence properties in comparison to reversible chains. Recently, several algorithms have been suggested that are able to construct non-reversible chains for general target distributions. These innovations bring the use of such chains for MCMC sampling within reach. However, to employ the new class of non-reversible MCMC algorithms it is of vital importance to understand the behaviour of the obtained non-reversible Markov chains. Historically, the use of scaling limits in understanding the behaviour of Markov chains in high dimension has contributed enormously to our understanding of MCMC algorithms. In this talk the latest developments in the identification of scaling limits of non-reversible MCMC algorithms will be presented. Implications for effective usage of such algorithms will be discussed.

Proximal MCMC Methods and Confidence in Image Processing with Convex Models

Marcelo Pereyra (Bristol University, UK)

Modern image processing methods rely very heavily on statistical theory and methodology to solve problems; for example, they use statistical models to represent the data observation process and the prior knowledge available, and they obtain solutions by performing statistical inference. Most methods use “non-smooth convex” models, i.e. Bayesian models with very high-dimensional posterior distributions that are log-concave and not continuously differentiable. These models deliver accurate results and are computationally very attractive because maximum-a-posteriori estimation can be performed almost instantaneously by using proximal optimisation algorithms based on convex calculus. However, they are not well addressed by high-dimensional MCMC methods such as MALA and HMC, which are based on differential analysis and often require some degree of smoothness to perform well. In this talk we consider the important problem of reporting confidence measures for these image processing models, such as joint credible regions and pixel-wise marginal confidence intervals. To estimate these credible sets we use “proximal” MCMC algorithms, a new type of MALA that uses convex analysis and techniques from proximal optimisation to explore the parameter space efficiently. The proposed methodology is demonstrated on several challenging image processing problems and real datasets.


Uncertainty Quantification in Mathematical Models
Organizer: Patrick R. Conrad

Forecast and Parameter Uncertainty of Chaotic Dynamic Systems

Heikki Haario (Lappeenranta University of Technology, Finland)

We discuss methods for quantifying the forecast uncertainty of chaotic systems, such as those used in numerical weather prediction, by ensemble runs with model parameter variations as well as stochastic physics. For proper parameter estimation, an approach based on fractal dimension concepts is presented. The examples include ODE systems for electrical circuits, and an application to the classification of stochastic pattern formation using the 2D FitzHugh-Nagumo model.

On the Low-dimensional Structure of Bayesian Inference with Transport Maps

Alessio Spantini (MIT, USA)

A recent approach to Bayesian computation in continuous non-Gaussian settings seeks a deterministic transport map that pushes forward a reference density to the posterior. In this talk, we address the computation of the transport map in high dimensions. In particular, we show how the Markov structure of the posterior density induces low-dimensional parameterizations of the transport map. Topics include the sparsity of inverse triangular transports, the ordering problem, the decomposition of transports, and other related ideas in dimensionality reduction for Bayesian inversion.

Fitting Lateral Transfer: MCMC for a Phylogenetic Likelihood Obtained from a Sequence of Massive Linear Systems of ODE Initial Value Problems

Geoff Nicholls & Luke Kelly(University of Oxford, UK)

We give Bayesian sample-based inference for reconstructing the phylogenies of taxa from observations of binary-valued evolutionary traits, treating the case where trait diversification involves lateral transfer of traits across the tree. In trait models of lateral transfer, the ancestries of individual evolutionary traits are tree-like, but those trees may conflict with one another and with the overall “species”-level phylogeny. Previous work has yielded models and approximation methods, but stopped short of a full Bayesian analysis. We propose new models extending the stochastic Dollo model of Nicholls and Gray (2008) and integrate exactly over all possible lateral transfer events on a tree. Likelihood evaluation is computationally difficult for trees with more than a few leaves. We show that the likelihood for a tree with L leaves is determined by the solution of a sequence of initial value problems given by a sequence of sparse linear systems of ODEs of dimension up to 2^L - 1. Some key applications have around L = 25 leaves. We propose an exact approach exploiting symmetries in Green's-function solutions for small L, and MCMC methods for intractable likelihoods and approximate schemes for larger values of L.
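
For intuition on the computational kernel, here is a hedged Python sketch of propagating an initial condition through dp/dt = Qp for a large sparse Q via the action of the matrix exponential; the tridiagonal Q below is a toy stand-in, not the lateral-transfer rate matrix.

    # Solve the IVP dp/dt = Q p, i.e. p(t) = exp(tQ) p0, without ever forming
    # the dense matrix exponential, using Krylov-type action of exp(tQ).
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import expm_multiply

    n = 2**10                                    # toy stand-in for 2^L - 1
    main = -np.ones(n)
    off = 0.5 * np.ones(n - 1)
    Q = sp.diags([off, main, off], [-1, 0, 1], format="csr")  # toy generator
    p0 = np.zeros(n); p0[0] = 1.0                # initial condition
    p_t = expm_multiply(1.5 * Q, p0)             # p(1.5), sparse throughout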


CONTRIBUTED SESSIONS

Exact Techniques in Monte Carlo Sampling and Inference
Organizer: Krys Latuszynski

Exact Simulation of the Wright-Fisher Diffusion

Paul Jenkins (University of Warwick, UK)

The Wright-Fisher family of diffusion processes is a class of evolutionary models widely used in population genetics, with applications also in finance and Bayesian statistics. Simulation and inference from these diffusions is therefore of widespread interest. However, simulating a Wright-Fisher diffusion is difficult because there is no known closed-form formula for its transition function. In this talk I show how it is possible to simulate exactly from the scalar Wright-Fisher diffusion with general drift, extending ideas based on retrospective simulation. The key idea is to exploit an eigenfunction expansion representation of the transition function. This approach also yields methods for exact simulation from several processes related to the Wright-Fisher diffusion: (i) its moment dual, the ancestral process of an infinite-leaf Kingman coalescent tree; (ii) its infinite-dimensional counterpart, the Fleming-Viot process; and (iii) its bridges. This is joint work with Dario Spano.

Perfect Simulation from the Stationary Distribution of a Uniformly Ergodic Markov Chain with a Proper Atom

Anthony Lee (University of Warwick, UK)

We present simple and efficient strategies for simulating from the stationary distribution of a uniformly ergodic Markov chain with a proper atom but otherwise defined on a general state space. In particular, if there is a known lower bound on the probability of reaching the atom from any state, then one can implement perfect simulation in a variety of ways by using a Bernoulli factory. This is joint work with A. Doucet and K. Latuszynski and is contained in http://arxiv.org/abs/1407.5770.

A Study in Scarlet and Other Shades of Red

Chang-han Rhee (Georgia Institute of Technology, USA)

We introduce a new class of Monte Carlo methods, which we call exact estimation algorithms. The new algorithms provide unbiased estimators for equilibrium expectations associated with real-valued functionals defined on a Markov chain. We provide general algorithms for the class of positive Harris recurrent Markov chains, and for chains that are contracting on average. For positive Harris recurrent Markov chains, we also prove a variant of the Glivenko-Cantelli theorem for our estimator. We further argue that our new approach provides a significant theoretical relaxation relative to exact sampling methods.
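
A minimal sketch of the randomized-truncation device that underlies such unbiased estimators, under the assumption that problem-specific coupled increments delta(n), whose expected sum equals the equilibrium expectation of interest, are available:

    # Randomized truncation: if sum_n E[Delta_n] equals the target quantity,
    # then sum_{n<=N} Delta_n / P(N >= n), with N an independent random
    # truncation level, is an unbiased estimator. delta(n) is user-supplied
    # and problem-specific (typically built from coupled chains).
    import numpy as np

    def unbiased_estimate(delta, p_geq, sample_N):
        N = sample_N()
        return sum(delta(n) / p_geq(n) for n in range(N + 1))

    # Example truncation law: P(N >= n) = (1 - r)^n with r = 0.1.
    r = 0.1
    rng = np.random.default_rng(0)
    sample_N = lambda: rng.geometric(r) - 1     # support {0, 1, 2, ...}
    p_geq = lambda n: (1 - r)**n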


Practical Unbiased Monte Carlo for Intractable Models

Sebastian Vollmer (University of Oxford, UK)

We will consider the problem of unbiased estimation of expectations with respect to measures which are only available as limits of distributions. We will build on recent work by Peter Glynn and Chang-han Rhee. In particular, we will first discuss how it is possible to remove the bias introduced by discretization when computing expectations with respect to Gaussian measures in function space. Then, we will discuss how to remove the bias due to burn-in when computing expectations with respect to limiting distributions of Markov chains. If time permits, we will also hint at how to combine the two in order to perform estimation of expectations with respect to limiting distributions of Markov chains in function space which is unbiased with respect to both discretization and burn-in. This is joint work with Sergios Agapiou and Gareth Roberts and is contained in arXiv:1411.7713.

Probabilistic Numerics: Integrating Inference with Integration
Organizers: Mike Osborne, Chris Oates and François-Xavier Briol

Probabilistic Numerics: Treating Numerical Computation as Learning

Roman Garnett (Washington University)

This talk will introduce the probabilistic numerics framework. Probabilistic numerics interprets numerical procedures (e.g. optimisation, linear algebra, integration) as demanding Bayesian inference. This interpretation allows: uncertainty management at all levels of an algorithm; the benefits of structure in numerical tasks to be realised; and no more costly computation to be allocated to any constituent numerical algorithm than is necessary to achieve our overall goals. The talk will particularly focus on recent work in probabilistic approaches to numerical integration: Bayesian quadrature, a robust alternative to MCMC methods. Applications of the techniques will be demonstrated in domains including astrometry and sensor networks, illustrating the superior wall-clock performance of probabilistic numeric techniques.
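
As a concrete miniature of Bayesian quadrature (assuming, for illustration, a squared-exponential kernel and a standard normal integration measure, for which the kernel means are available in closed form), the following Python sketch returns a posterior mean and variance for an integral:

    # One-dimensional Bayesian quadrature: GP prior on the integrand f, target
    # is E[f(X)] with X ~ N(0,1); z(x) is the closed-form kernel mean.
    import numpy as np

    ell = 0.8                                           # kernel lengthscale
    k = lambda x, y: np.exp(-(x[:, None] - y[None, :])**2 / (2 * ell**2))
    z = lambda x: ell / np.sqrt(ell**2 + 1) * np.exp(-x**2 / (2 * (ell**2 + 1)))

    f = np.cos                                          # example integrand
    x = np.linspace(-3, 3, 15)                          # evaluation nodes
    K = k(x, x) + 1e-10 * np.eye(len(x))                # jitter for stability
    w = np.linalg.solve(K, z(x))                        # BQ weights
    post_mean = w @ f(x)                                # estimate of E[f(X)]
    post_var = ell / np.sqrt(ell**2 + 2) - w @ z(x)     # posterior variance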

On the Relation between Bayesian and Classical Quadratures

Simo Sarkka (Aalto University, Finland)

Bayesian quadratures can be seen as generalizations of classical quadratures. In this talk we show how classical multivariate numerical integration methods, such as spherically symmetric cubature methods and multivariate Gaussian quadratures, can be constructed as Bayesian quadratures with specific probabilistic models. We also discuss different, including probabilistic, criteria for selecting the evaluation locations: exactness for multivariate polynomials up to a given order, minimum average error, and quasi-random point sets. Additionally, we discuss some interesting theoretical results arising from this connection.


Obtaining Probabilistic Integration Rules from Monte Carlo-based Methods

François-Xavier Briol (University of Warwick, UK)

The field of probabilistic numerics focuses on the study of numerical problems from the point of view of inference. This is often done through a Bayesian framework where the incorporation of prior information, such as the smoothness of the integrand, can allow for improved performance over other methods. In the case of quadrature, this means we obtain estimators for the value of integrals together with a measure of our uncertainty over the result, which takes the form of a posterior variance. This quantification of uncertainty allows the user to allocate computational resources appropriately and to decide when to stop obtaining new samples to improve the approximation of the integral. This talk will discuss how many Monte Carlo-based methods, such as Markov chain Monte Carlo, quasi-Monte Carlo or Hamiltonian Monte Carlo, can be modified in order to obtain probabilistic integration rules. It will also provide theoretical results proving the procedure improves the rates of convergence and will discuss how those results affect the contraction rates of the corresponding probabilistic integration rules. The talk will also discuss how some of these constructions extend to intractable distributions and provide an alternative to MCMC.

Bayesian Inference for Big Environmental Data
Organizer: Dorit Hammerling

Multi-resolution Approximations for Big Spatial Data

Matthias Katzfuss (Texas A&M University)

(Texas A&M University)Remote-sensing instruments have enabled the collection of big spatial data over large spatial domainssuch as entire continents or the globe. Basis-function representations are well suited to big spatial data,as they can enable fast computations for large datasets and they provide flexibility to deal with the com-plicated dependence structures often encountered over large domains. We propose two related multi-resolution approximations (MRAs) that use basis functions atmultiple resolutions to achieve fast Bayesianinference and that can (approximately) represent any covariance structure. The first MRA results in amulti-resolution taper that can deal with large datasets. The second MRA is based on a multi-resolutionpartitioning of the spatial domain and can deal with truly massive datasets, as it is highly scalable andamenable to parallel computations on modern distributed computing systems.


Identifying Trends in the Spatial Errors of a Regional Climate Model via Clustering

Veronica Berrocal (University of Michigan)

Since their introduction in 1990, regional climate models (RCMs) have been widely used to study the impact of climate change on human health, ecology, and epidemiology. To ensure that the conclusions of impact studies are well founded, it is necessary to assess the uncertainty in RCMs. This is not an easy task, since two major sources of uncertainty can undermine an RCM: uncertainty in the boundary conditions needed to initialize the model and uncertainty in the model itself. Building upon the work of Berrocal et al. (2012), in this paper we present a statistical modeling framework to assess an RCM driven by analyses. More specifically, our scientific interest here is determining whether there exist time periods during which the RCM in consideration displays the same type of spatial discrepancies from the observations. The proposed model can be seen as an exploratory tool for atmospheric modelers to identify time periods that require a further in-depth examination. Focusing on seasonal average temperature and seasonal maximum temperature, our model relates the corresponding observed seasonal fields to the RCM output via a hierarchical Bayesian statistical model that includes a spatio-temporal calibration term. The latter, which represents the spatial error of the RCM, is in turn provided with a Dirichlet process prior, enabling clustering of the errors in time. On the first level of the hierarchy, the model specifies a normal distribution for seasonal average temperature and a continuous GEV spatial process for seasonal maximum temperature. We apply our modeling framework to data from Southern Sweden spanning the period December 1, 1962 to November 30, 2007. Our analysis reveals intriguing tendencies with respect to the RCM spatial errors relative to seasonal average temperature; on the other hand, no systematic spatial error is detected for seasonal maximum temperature during the period 1963-2007.

Information from Cosmology Experiments

Adam Amara (ETH Zürich)

Bayesian statistical methods have become commonplace in cosmology, and numerous new experiments have reported posterior results on cosmological parameters. With all of these measurements we can ask basic questions, such as: How much have given experiments contributed to our knowledge of the Universe? And are the results from these different experiments consistent with each other? I will present a summary of relative entropy and how this powerful statistical tool can be used to condense complex results to address these important questions. To demonstrate this tool, I will show results from the CMB [1, 2], before moving to large scale structure measures, where we are now able to robustly rank-order the contributions to our knowledge of the Universe from all the latest measurements [3]. To conclude, I will focus on the Dark Energy Survey, for which we have recently published our first cosmology results [4].

[1] Seehars, S., Amara, A., Refregier, A., Paranjape, A., Akeret, J. Information gains from cosmic microwave background experiments. Physical Review D, Volume 90, Issue 2, 2014.
[2] Seehars, S., Grandis, S., Amara, A., Refregier, A. Quantifying Concordance. Submitted to PRD (arXiv:1510.08483), 2015.
[3] Grandis, S., Seehars, S., Refregier, A., Amara, A., Nicola, A. Information Gains from Cosmological Probes. Submitted to JCAP (arXiv:1510.06422), 2015.
[4] Dark Energy Survey (DES) Collaboration. Cosmology from Cosmic Shear with DES Science Verification Data. Submitted to PRD (arXiv:1507.05552), 2015.


Observation-based Blended Projections from Ensembles of Regional Climate Models

Dorit Hammerling (National Center for Atmospheric Research)

We consider the problem of projecting future climate from ensembles of regional climate model (RCM) simulations using results from the North American Regional Climate Change Assessment Program (NARCCAP). To this end, we develop a hierarchical Bayesian space-time model that quantifies the discrepancies between different members of an ensemble of RCMs corresponding to present-day conditions, and observational records. Discrepancies are then propagated into the future to obtain high-resolution blended projections of 21st century climate. In addition to blended projections, the proposed method provides location-dependent comparisons between the different simulations by estimating the different modes of spatial variability, and using the climate model-specific coefficients of the spatial factors for comparisons. We demonstrate the methodology with simulations from the Weather Research & Forecasting regional model (WRF) using three different boundary conditions. We use simulations for two time periods: current climate conditions, covering 1971 to 2000, and future climate conditions under the Special Report on Emissions Scenarios (SRES) A2 emissions scenario, covering 2041 to 2070. We investigate and project yearly mean summer and winter temperatures for a domain in the South West of the United States. This is joint work with Esther Salazar, Bruno Sanso, Xia Wang, Andrew Finley, and Linda Mearns.

Bayesian Computation for Spatiotemporal Models
Organizer: Galin Jones

On Nearest-Neighbor Gaussian Process Models for High-Dimensional Spatiotemporal Datasets

Sudipto Banerjee (University of California, Los Angeles)

With the growing capabilities of Geographical Information Systems (GIS) and user-friendly software, statisticians today routinely encounter geographically referenced datasets containing observations from a large number of spatial locations and time points. Over the last decade, hierarchical spatiotemporal process models have become widely deployed statistical tools for researchers seeking to better understand the complex nature of spatial and temporal variability. However, fitting hierarchical spatiotemporal models often involves expensive matrix decompositions whose computational complexity increases in cubic order with the number of spatial locations and temporal points. This renders such models infeasible for large datasets. In this talk, I will present two approaches for constructing well-defined spatiotemporal stochastic processes that accrue substantial computational savings. Both these processes can be used as “priors” for spatiotemporal random fields. The first approach constructs a low-rank process operating on a lower-dimensional subspace. The second approach constructs a Nearest-Neighbor Gaussian Process (NNGP) that can be exploited as a dimension-reducing prior embedded within a rich and flexible hierarchical modeling framework to deliver exact Bayesian inference. Both these approaches lead to Markov chain Monte Carlo algorithms with floating point operations (flops) that are linear in the number of spatial locations (per iteration). We compare these methods and demonstrate their use in inferring on the spatiotemporal distribution of forest biomass from the US Forest Inventory database spanning the continental US. Joint work with Abhirup Datta and Andrew O. Finley.
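
For intuition, a hedged Python sketch of the nearest-neighbour (Vecchia-type) conditioning device on which NNGP-style approximations build: each ordered observation is conditioned on at most m earlier neighbours, so the joint density factorizes into small-matrix terms. This is an illustration, not the authors' implementation.

    # Nearest-neighbour sequential conditioning: replaces one n x n solve with
    # n solves of size at most m, for roughly linear cost in n.
    import numpy as np

    def nn_log_density(y, locs, cov_fn, m=10):
        n, ll = len(y), 0.0
        for i in range(n):
            if i == 0:
                mu, var = 0.0, cov_fn(locs[:1], locs[:1])[0, 0]
            else:
                d = np.linalg.norm(locs[:i] - locs[i], axis=1)
                nb = np.argsort(d)[:m]                  # closest earlier points
                C = cov_fn(locs[nb], locs[nb])
                c = cov_fn(locs[i:i+1], locs[nb]).ravel()
                sol = np.linalg.solve(C, c)
                mu = sol @ y[nb]
                var = cov_fn(locs[i:i+1], locs[i:i+1])[0, 0] - sol @ c
            ll += -0.5 * (np.log(2 * np.pi * var) + (y[i] - mu)**2 / var)
        return ll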


Toward Efficient MCMC for Some High-dimensional Latent Variable Models

Murali Haran (Penn State University, USA)

Among the great successes of Markov chain Monte Carlo (MCMC) methods is their ability to fit latent variable models. In practice, however, if the number of latent variables is large, designing an efficient MCMC algorithm becomes very difficult and MCMC generally becomes prohibitively expensive. I will discuss some approaches for addressing this challenge in latent variable models for spatial data. Among the methods I will describe are reparameterization and dimension-reduction approaches as well as composite likelihood-based methods.

Calibrating an Ice Sheet Model Using High-dimensional Binary Spatial Data

Won Chang (University of Chicago)

Rapid retreat of ice in the Amundsen Sea sector of West Antarctica may cause drastic sea level rise, posing significant risks to populations in low-lying coastal regions. Calibration of computer models representing the behavior of the West Antarctic Ice Sheet is key for informative projections of future sea level rise. However, both the relevant observations and the model output are high-dimensional binary spatial data; existing computer model calibration methods are unable to handle such data. Here we present a novel calibration method for computer models whose output is in the form of a binary spatial pattern. To mitigate the computational and inferential challenges posed by our approach, we apply a generalized principal component based dimension reduction method. To demonstrate the utility of our method, we calibrate the PSU3D-ICE model by comparing the output from a 499-member perturbed-parameter ensemble with observations from the Amundsen Sea sector of the ice sheet. Our methods help rigorously characterize the parameter uncertainty even in the presence of systematic data-model discrepancies and dependence in the errors. Our method also helps inform environmental risk analyses by contributing to improved projections of sea level rise from the ice sheets.

Fast, Fully Bayesian Spatiotemporal Inference for fMRI Data

John Hughes (University of Minnesota, USA)

We propose a spatial Bayesian variable selection method for detecting BOLD activation in fMRI data. Typical fMRI experiments generate large datasets that exhibit complex spatial and temporal dependence. Fitting a full statistical model to such data can be so computationally burdensome that many practitioners resort to fitting oversimplified models, which can lead to lower quality inference. We develop a full statistical model that permits efficient computation. Our approach eases the computational burden in two ways. We partition the brain into three-dimensional parcels and fit our model to the parcels in parallel. Voxel-level activation within each parcel is modeled as regressions located on a lattice. Regressors represent the magnitude of change in blood oxygenation in response to a stimulus, while a latent indicator for each regressor represents whether the change is zero or nonzero. A sparse spatial generalized linear mixed model (SGLMM) captures the spatial dependence among indicator variables within a parcel and for a given stimulus. The sparse SGLMM permits considerably more efficient computation than does the spatial model typically employed in fMRI. Through simulations we show that our parcellation scheme performs well in various realistic scenarios. Importantly, indicator variables on the boundary between parcels do not exhibit edge effects. We conclude by applying our methodology to data from a task-based fMRI experiment.


Recent Developments in Markov Chain Monte Carlo Methodology
Organizer: James Flegal

On Convergence Diagnostics for Adaptive MCMC

Winfried Barta (George Washington University, USA)

Markov chain Monte Carlo (MCMC) methods are frequently used to approximately simulate high-dimensional, multimodal probability distributions. In adaptive MCMC, the transition kernel is changed "on the fly" in the hope of speeding up convergence. We study interacting tempering, an adaptive MCMC algorithm based on interacting Markov chains, which can be seen as a simplified version of the equi-energy sampler. Under easy-to-verify assumptions on the target distribution (on a finite space), we show that the interacting tempering process rapidly forgets its starting distribution. This holds true in many settings where the process is known to converge exponentially slowly to its limiting distribution. Consequently, we argue that convergence diagnostics based on demonstrating that the process has forgotten its starting distribution (such as the popular Gelman-Rubin diagnostic) might be of limited use for adaptive MCMC algorithms like interacting tempering.

Geometric Convergence of Gibbs Samplers for Bayesian Scale-Usage Models

Andrew N. Olsen1 and Radu Herbei2 (1Apple, Inc.; 2The Ohio State University, USA)

In many surveys, respondents differ fundamentally in the way they use the provided ratings scale, and thus it is important to account for this heterogeneity in analyzing such data. In this talk we consider a class of Bayesian scale-usage models and analyze the corresponding Gibbs samplers used to perform statistical inference. We show that for certain models, such algorithms enjoy a geometric rate of convergence. In addition, we study the practical performance of several scale-usage models and their corresponding sampling algorithms using a student satisfaction survey data set.

A Practical Sequential Stopping Rule for High-Dimensional MCMC

James M. Flegal (University of California, Riverside, USA)

A current challenge for many Bayesian analyses is determining when to terminate high-dimensional Markov chain Monte Carlo simulations. To this end, we propose using an automated sequential stopping procedure that terminates the simulation when the computational uncertainty is small relative to the posterior uncertainty. Further, we show this stopping rule is equivalent to stopping when the effective sample size is sufficiently large. Such a stopping rule has previously been shown to work well in settings with posteriors of moderate dimension. In this talk, we illustrate its utility in high-dimensional simulations while overcoming some current computational issues. As an example, we consider a Bayesian dynamic space-time model on weather station data. Our results show the sequential stopping rule is easy to implement, provides uncertainty estimates, and performs well in high-dimensional settings.
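
A univariate caricature of such a rule can be written in a few lines (the talk's version is effective-sample-size based and handles high dimensions); here the batch-means Monte Carlo standard error is compared with a fraction eps of the estimated posterior standard deviation:

    # Relative fixed-width stopping: terminate once the MCSE from batch means
    # is below eps times the estimated posterior standard deviation.
    import numpy as np

    def should_stop(chain, eps=0.05):
        n = len(chain)
        if n < 100:                                   # require a minimum run
            return False
        b = int(np.floor(np.sqrt(n)))                 # batch size ~ sqrt(n)
        nb = n // b
        means = np.asarray(chain[:nb * b]).reshape(nb, b).mean(axis=1)
        var_bm = b * np.var(means, ddof=1)            # batch-means variance
        mcse = np.sqrt(var_bm / n)
        return mcse < eps * np.std(chain, ddof=1)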


A Comparison Theorem for Data Augmentation Algorithms with Applications

Hee Min Choi1 & James P. Hobert2 (1University of California Davis, 2University of Florida)

The data augmentation (DA) algorithm is considered a useful Markov chain Monte Carlo algorithm that sometimes suffers from slow convergence. It is often possible to convert a DA algorithm into a sandwich algorithm that is computationally equivalent to the DA algorithm, but converges much faster. Theoretically, the reversible Markov chain that drives the sandwich algorithm is at least as good as the corresponding DA chain in terms of performance in the central limit theorem and in the operator norm sense. In this paper, we use the sandwich machinery to compare two DA algorithms. In particular, we provide conditions under which one DA chain can be represented as a sandwich version of the other. Our results are used to extend Hobert and Marchev's (2008) results on the Haar PX-DA algorithm and to improve the collapsing theorem of Liu, Wong and Kong (1994) and Liu (1994).

Recent Advances in Sequential Monte Carlo
Organizer: Anthony Lee

Continuous-Time Importance Sampling

Paul Fearnhead (Lancaster University, UK)

We will introduce a new framework for sequential Monte Carlo, based on evolving a set of weighted particles in continuous time. This framework can lead to novel versions of existing algorithms, such as Annealed Importance Sampling and the Exact Algorithm for diffusions, and can be used as an alternative to MALA for sampling from a target distribution of interest. These methods are amenable to the use of sub-sampling, which can greatly increase their computational efficiency for big-data applications, and can enable unbiased sampling from a much wider range of target distributions than existing approaches.

The Hierarchical Particle Filter

Adam Johansen (University of Warwick, UK)

We describe a new strategy for (exactly) approximating the “optimal” block-sampling particle filter by a hierarchical application of sequential Monte Carlo. In contrast to an algorithm previously developed by the authors, the described strategy avoids the use of two subsidiary particle filters per time-step and per top-level particle, achieving both lower computational cost and better performance. Illustrative simulation results will be presented. This is joint work with Arnaud Doucet.


Fluctuations and Stability of Distributed Particle Filters with Local Exchange

Kari Heine (UCL)

We study the distributed local exchange particle filter algorithm proposed by Bolic et al. (2005) that involves groups of particles of equal size. We establish a central limit theorem in the regime where the number of groups of fixed size tends to infinity. The expression for the asymptotic variance can be interpreted in terms of colliding Markov chains. This enables analytic and numerical study of the behaviour of the asymptotic variance over time and a comparison with a benchmark algorithm consisting of independent bootstrap particle filters. We also prove that, subject to regularity conditions, if the group size tends to infinity while the number of groups is kept fixed, the convergence is uniform in time. Our asymptotic variance formula also enables us to construct counter-examples showing that, in general, similar time-uniform convergence does not hold in the regime with fixed group size and increasing number of groups.

Pseudo-Marginal Monte Carlo Optimisation

Axel Finke (University of Cambridge)

We extend existing Monte Carlo schemes for performing optimisation in latent variable settings, i.e. in situations in which the objective function is intractable. This often occurs when performing (marginal) maximum-likelihood or (marginal) maximum-a-posteriori estimation. To this end, we present a flexible framework for combining the SAME algorithm of Doucet, Godsill & Robert (2002) with state-of-the-art MCMC kernels such as pseudo-marginal MCMC or conditional SMC kernels. We also construct population-based approaches by incorporating these kernels into SMC samplers. This is joint work with A. M. Johansen.

Recent Approximate MCMC Algorithms
Organizers: Paul Jenkins and Adam Johansen

An Overview of Noisy MCMC and SMC

Richard Everitt (University of Reading)

The development of exact approximate Monte Carlo methods, in which unbiased estimates of densities are used within Markov chain Monte Carlo (MCMC) or sequential Monte Carlo (SMC) algorithms without loss of exactness, is one of the most important recent innovations in the field. This talk gives an overview of work on inexact approximations or noisy methods, where (often low variance) alternatives to unbiased approximations are used instead. In all cases the exactness of the algorithm is lost, but in some cases this proves to be insignificant compared to computational savings or improved variance of estimates produced by finite runs. In a context where many applied researchers accept the use of other approximate methods (such as approximate Bayesian computation), further investigation of the use of noisy methods is warranted.
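
The exact/noisy distinction is easiest to see in code. In this hedged Python sketch, lhat is a hypothetical unbiased likelihood estimator and prior a density: the pseudo-marginal (exact) chain recycles the estimate at the current state, while the noisy Monte Carlo within Metropolis (MCWM) chain refreshes it at every iteration.

    # One Metropolis-Hastings step, exact (pseudo-marginal) vs noisy (MCWM).
    import numpy as np
    rng = np.random.default_rng(1)

    def step(theta, L_cur, lhat, prior, noisy=False):
        prop = theta + 0.5 * rng.standard_normal()
        if noisy:                                # MCWM: refresh both estimates
            L_cur = lhat(theta)
        L_prop = lhat(prop)
        a = (L_prop * prior(prop)) / (L_cur * prior(theta))
        if rng.random() < a:
            return prop, L_prop
        return theta, L_cur                      # exact: estimate is recycled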


Stability of Noisy Metropolis-Hastings

Felipe Medina Aguayo (University of Warwick, UK)

Pseudo-marginal Markov chain Monte Carlo methods for sampling from intractable distributions have gained recent interest and have been theoretically studied in considerable depth. Their main appeal is that they are exact, in the sense that they target marginally the correct invariant distribution. However, the pseudo-marginal Markov chain can exhibit poor mixing and slow convergence towards its target. As an alternative, a subtly different Markov chain can be simulated, where better mixing is possible but the exactness property is sacrificed. This is the noisy algorithm, initially conceptualised as Monte Carlo within Metropolis (MCWM), which has also been studied but to a lesser extent. In this talk we provide a further characterisation of the noisy algorithm, with a focus on fundamental stability properties like positive recurrence and geometric ergodicity. Sufficient conditions for inheriting geometric ergodicity from a standard Metropolis-Hastings chain are given, as well as convergence of the invariant distribution towards the true target distribution. This is joint work with Anthony Lee and Gareth Roberts.

Pseudo-likelihood Accelerated Pseudo-marginal Metropolis-Hastings

Jere Koskela (University of Warwick, UK)

The Metropolis-Hastings algorithm is a very successful and popular method for sampling posterior distributions with an intractable normalising constant. The pseudo-marginal Metropolis-Hastings algorithm is a further extension to cases where the unnormalised likelihood is also intractable. Likelihood evaluations are replaced with unbiased estimators, typically obtained by sequential Monte Carlo. Running a sequential Monte Carlo algorithm for each MCMC step is computationally expensive. To alleviate this cost, we introduce the pseudo-likelihood accelerated algorithm, where an approximate acceptance probability is first computed using a tractable pseudo-likelihood, and SMC estimators are only computed for proposals accepted by this first stage. We investigate the efficiency and accuracy of the resulting algorithm via an example from population genetics, where pseudo-likelihoods are readily available via the so-called Product of Approximate Conditionals approach. In particular, we compare the performance of the accelerated algorithm to the standard and noisy versions of pseudo-marginal Metropolis-Hastings.

On Markov Chain Monte Carlo for Tall Data

Rémi Bardenet (University of Oxford and University of Lille)

Markov chain Monte Carlo methods are often deemed far too computationally intensive to be of any practical use for big data applications, and in particular for inference on datasets containing a large number of individual datapoints, also known as tall datasets. In the case where the model assumes independence of the data, various approaches to scale up Metropolis-Hastings have recently been proposed in machine learning and statistics. These approaches can be grouped in two categories: subsampling-based algorithms and divide-and-conquer approaches. After quickly reviewing the existing literature, I will detail a subsampling-based approach which samples from a distribution provably close to the posterior distribution of interest, yet can require fewer than O(n) data point likelihood evaluations at each iteration for useful statistical models. The latter work is the sequel to (Bardenet, Doucet, Holmes, ICML'14). Joint work with Arnaud Doucet and Chris Holmes (Oxford).


Modeling and Computing with Latent Feature Models and Repulsive Point Processes
Organizer: Peter Müller

Linear Response Methods for Accurate Covariance Estimates from Mean Field Variational Bayes

Ryan Giordano (UC Berkeley)

Mean field variational Bayes (MFVB) is a popular posterior approximation method due to its fast runtime on large-scale data sets. However, it is well known that a major failing of MFVB is that it underestimates the uncertainty of model variables (sometimes severely) and provides no information about model variable covariance. We generalize linear response methods from statistical physics to deliver accurate uncertainty estimates for model variables, both for individual variables and coherently across variables. We call our method linear response variational Bayes (LRVB). When the MFVB posterior approximation is in the exponential family, LRVB has a simple, analytic form, even for non-conjugate models. Indeed, we make no assumptions about the form of the true posterior. We demonstrate the accuracy and scalability of our method on a range of models for both simulated and real data.

Scalable Inference for Nonparametric Latent Feature Models

Sinead Williamson (UT Austin)

There has been a lot of recent work on parallelizable inference algorithms for nonparametric mixture and admixture models; however, there has been relatively little work on parallel inference for nonparametric latent feature models such as the Indian buffet process and the beta-negative binomial process. Many existing parallelization techniques for nonparametric methods do not transfer to such models, due to difficulties in partitioning the model into simpler sub-models. We describe a general class of hybrid inference algorithms for Bayesian nonparametric models that combine collapsed and uncollapsed samplers. This class of algorithms can be trivially parallelized without introducing unwanted approximations, and is applicable to a wide range of nonparametric models including the Indian buffet process and the beta-negative binomial process.

Determinantal Point Process Priors for Latent Biologic Structure: Modeling and Posterior Simulation

Yanxun Xu (Johns Hopkins U. and UT Austin)

We discuss the use of the determinantal point process (DPP) as a prior for latent structure in biomedical models when the goal is to interpret latent features as biologically or clinically meaningful structure. A typical example is mixture models in which the terms of the mixture are meant to represent clinically meaningful subpopulations (of patients, genes, etc.). The discussion first focuses on mixture models, but is more general about repulsive priors for latent structure. As a second specific example we consider feature allocation models. We discuss three examples, including inference in mixture models for magnetic resonance images (fMRI) and for protein expression using reverse phase protein arrays (RPPA), and a feature allocation model for gene expression using TCGA data. An important part of our argument is efficient and straightforward posterior simulation methods. We implement reversible jump Markov chain Monte Carlo (RJ-MCMC) simulation for inference under the DPP prior through a variation of RJ algorithms, using a density with respect to the unit rate Poisson process.


Computational Aspects in Bayesian Nonparametrics
Organizer: Antonio Lijoi

Modeling and Inference with Feature Allocation Models

Peter Müller1, Juhee Lee2, Yanxun Xu3 & Yuan Ji4 (1U. of Texas, USA; 2UCSC, USA; 3Johns Hopkins University, USA; 4NorthShore Health System & U. Chicago, USA)

We discuss an application of feature allocation models to inference for tumor heterogeneity. We use a variation of Indian buffet process models to facilitate model-based imputation of hypothetical subpopulations of tumor cells, characterized by unique sets of somatic mutations and/or structural variants like copy number variations. Implementing posterior inference in this problem gives rise to several computational challenges. We discuss solutions based on fractional Bayes factors, MAD Bayes small variance asymptotics, and a reversible jump implementation for a determinantal point process.

Effective Bayesian Nonparametric Inference of Variants in Repetitive Time Series

Wesley Tansey (University of Texas at Austin, USA)

We often have sequences of data that exhibit shared, recurrent behaviors. For example, when weight lifting one typically performs multiple repetitions of the same exercise, or when performing a repetitive cognitive task in fMRI studies the same neural subnetworks will be functionally connected. To incorporate the presence of an unknown number of common variations between time series, we construct an infinite mixture of HMMs. Time series are clustered based on their transitions and emissions; each cluster has a distinct transition matrix and collection of emission distributions. The benefits of this are twofold: by using a hierarchical Bayesian model we obtain better predictive power by exploiting similarities between time series; and by partitioning our dataset we may learn interpretable sub-behaviors, such as distinct variants of tasks or exercises. We present an efficient collapsed Gibbs sampler and results demonstrating its superior performance and interpretability on both synthetic data and real-world exercises tracked by a wearable fitness device.

A Moment-matching Ferguson & Klass Algorithm

Julyan Arbel1 and Igor Prünster2 (1,2Collegio Carlo Alberto, 2University of Torino)

Completely random measures (CRMs) represent the key building block of a wide variety of popular stochastic models and play a pivotal role in modern Bayesian nonparametrics. A popular representation of CRMs as a random series with decreasing jumps is due to [Ferguson and Klass, Ann. Math. Stat. 43 (5) (1972) 1634-1643]. This can immediately be turned into an algorithm for sampling realizations of CRMs or more elaborate models involving transformed CRMs. However, concrete implementation requires truncating the random series at some threshold, resulting in an approximation error. I will show in this presentation how to quantify the quality of the approximation by a moment-matching criterion, which consists in evaluating a measure of discrepancy between actual moments and moments based on the simulation output. Seen as a function of the truncation level, the methodology can be used to determine the truncation level needed to reach a certain level of precision. The resulting moment-matching Ferguson and Klass algorithm is then implemented on several popular Bayesian nonparametric models.
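
A small Python sketch of the Ferguson and Klass construction for a gamma CRM, whose Levy intensity a s^(-1) e^(-s) ds has tail mass N(x) = a E1(x); the jumps are obtained by inverting N at standard Poisson arrival times, and the final line performs the kind of first-moment check that the moment-matching criterion formalizes.

    # Ferguson & Klass jumps of a gamma CRM, in decreasing order: the i-th
    # jump J_i solves a * E1(J_i) = G_i, with G_i Poisson process arrivals.
    import numpy as np
    from scipy.special import exp1
    from scipy.optimize import brentq

    def ferguson_klass_gamma(a=1.0, n_jumps=100, seed=0):
        rng = np.random.default_rng(seed)
        arrivals = np.cumsum(rng.exponential(size=n_jumps))
        jumps = [brentq(lambda x: a * exp1(x) - g, 1e-300, 50.0) for g in arrivals]
        return np.array(jumps)                          # decreasing jump sizes

    J = ferguson_klass_gamma(a=2.0, n_jumps=200)
    print(J.sum(), "vs exact first moment", 2.0)        # E[total mass] = a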


Hazard Mixture Models for the Analysis of Clustered Time-to-event Data

Bernardo Nipoti1, Alejandro Jara2 and Michele Guindani3 (1Collegio Carlo Alberto, Moncalieri, Italy; 2Pontificia Universidad Catolica de Chile, Santiago, Chile; 3The University of Texas MD Anderson Cancer Center, Houston, TX, USA)

A standard approach for dealing with correlated time-to-event data within the proportional hazards (PH) context has been the introduction of a random effect (frailty) common to subjects within the same cluster. PH models with shared random effects have been widely discussed because they provide useful summary information in the absence of estimates of a baseline survival distribution. However, when the frailty variables are assumed independent and identically distributed, a rather rigid marginal association structure for the clustered variables is induced, implying equal intra-cluster dependence as well as between-cluster heterogeneity. We propose an alternative Bayesian semiparametric model that naturally accommodates different degrees of association, allowing for covariate-dependent association structures within each cluster. The proposal is based on the introduction of cluster-dependent random hazard functions and on the use of mixture models induced by independent σ-stable completely random measures. The proposed model class has the appealing property of preserving marginally the PH structure, and its analytical tractability represents the key point for devising an efficient Markov chain Monte Carlo algorithm for possibly right-censored observations. The model is illustrated by an extensive simulation study as well as an application to a dataset of last survivor life insurance policies.

Model Selection and Advanced Scientific Computation
Organizer: Donatello Telesca

ABC for High-dimensional Selection of Irregular Models

David Rossell (University of Warwick, UK)

An important limiting factor in implementing high-dimensional model selection is the computational burden in evaluating integrated likelihoods. A situation that is particularly challenging is when the integral is high-dimensional and the integrand is irregular, e.g. non-differentiable or highly multi-modal. In these situations Laplace approximations fail and sampling-based estimates are often too noisy to be of practical use. We propose approximate Bayesian computation strategies that are asymptotically Bayes sufficient (leading to no loss of information) and lead to very simple implementation. The associated formulation is also interesting in providing connections to frequentist testing strategies. As illustrations we consider high-dimensional regression with flexible error distributions and mixture models. Our results show that our algorithm's precision and computational complexity scale well as dimensionality increases, even in these irregular models.


High-performance Computing for High-dimensional Bayesian Model Selection

Marc Suchard (University of California, Los Angeles)

High-dimensional model selection often requires numerical approximation of integrated likelihoods. These approximations rely on high-dimensional optimization and tricks from numerical analysis that are frequently unfamiliar to the Bayesian statistician whose training has traditionally focused on numerical integration. Two techniques that stand out are the majorization-minimization algorithm for regression models and efficient numerical evaluation of continued fractions for continuous-time Markov chain models. Both techniques importantly provide an opportunity for very fine-scale parallelization on advancing computing technologies, such as many-core processing. Several orders-of-magnitude improvement in compute speed on these technologies opens the door to fitting Bayesian models to massive data, including previously impractical applications in large-scale observational healthcare, drug safety and comparative effectiveness research.
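
As a minimal illustration of the majorization-minimization idea mentioned above (not the large-scale implementation discussed in the talk), logistic regression admits the classical fixed quadratic majorizer X'X/4, so each MM update reduces to a single linear solve with no step-size tuning:

    # MM for logistic regression: the negative log-likelihood Hessian is
    # dominated by X'X/4, giving a monotone fixed-curvature ascent scheme.
    import numpy as np

    def mm_logistic(X, y, iters=200):
        n, p = X.shape
        beta = np.zeros(p)
        H = X.T @ X / 4.0                          # fixed majorizing curvature
        for _ in range(iters):
            mu = 1.0 / (1.0 + np.exp(-X @ beta))   # fitted probabilities
            beta = beta + np.linalg.solve(H, X.T @ (y - mu))
        return beta

    rng = np.random.default_rng(4)
    X = rng.standard_normal((200, 3)); beta_true = np.array([1.0, -2.0, 0.5])
    y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ beta_true))).astype(float)
    print(mm_logistic(X, y))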

Bayesian Model Selection Beyond Linear Models

Donatello Telesca (University of California, Los Angeles)

Model selection is pervasive in statistics, and Bayesian inference is a naturally well-equipped inferential paradigm for it. However, the great majority of theoretical, methodological and computational advances have focused on linear models. We make an argument for moving beyond linear regression in large data settings and discuss the methodological implications associated with the presence of non-linearities, interactions and missing data. Specific attention will be devoted to tree-based models, and their application to independent and dependent data settings.

Recent Advances in Variational Bayesian Methods
Organizer: Tamara Broderick

Trust-Region Updates for Streaming Variational Inference

Matt Hoffman (Creative Technologies Laboratory, Adobe)

Stochastic variational inference allows for fast posterior inference in complex Bayesian models. However, the algorithm is prone to local optima, which can make the quality of the posterior approximation sensitive to the choice of hyperparameters and initialization. We address this problem by replacing the natural gradient step of stochastic variational inference with a trust-region update. We show that this leads to generally better results and reduced sensitivity to hyperparameters. We also describe a new strategy for variational inference on streaming data and show that here our trust-region method is crucial for getting good performance.


Automatic Variational Inference in Stan

Alp Kucukelbir (Columbia University)

Variational inference is a scalable technique for approximate Bayesian inference. Deriving variational inference algorithms requires tedious model-specific calculations; this makes it difficult to automate. We propose an automatic variational inference algorithm, automatic differentiation variational inference (ADVI). The user only provides a Bayesian model and a dataset; nothing else. We make no conjugacy assumptions and support a broad class of models. The algorithm automatically determines an appropriate variational family and optimizes the variational objective. We implement ADVI in Stan (code available now), a probabilistic programming framework. We compare ADVI to MCMC sampling across hierarchical generalized linear models, nonconjugate matrix factorization, and a mixture model. We train the mixture model on a quarter million images. With ADVI we can use variational inference on any model we write in Stan.

Variational Approximations for Gaussian Process Models

James Hensman (University of Sheffield, UK)

Gaussian process (GP) models provide a flexible modelling approach with a number of appealing properties. Methodological development of GP models has focussed on approximating the posterior function for non-Gaussian likelihoods, and approximating the expensive inversion of the covariance matrix when the data are large in number. In this talk, I'll present recent work which attacks both aspects, using both fixed-form variational approaches and asymptotically exact stochastic variational methods.

Streaming, Distributed Bayesian Nonparametric Inference via Component Identification

Trevor Campbell (MIT)

Most popular Bayesian nonparametrics (BNPs) are, at their core, streaming, distributed models. BNPs employ stochastic processes as their priors, thereby specifying that model growth should occur with increasing amounts of data; this captures our natural intuition in streaming data problems. BNPs also often possess exchangeability properties, and thus conditional independences, that make them naturally suited to distributed data processing. However, the vast majority of developments in posterior inference for BNPs have focused on the single batch setting, and only recently have the streaming or distributed settings been studied separately. This talk aims to bridge the gap between the settings, and will present a novel variational inference algorithm for a wide class of BNPs that is truly streaming, distributed, asynchronous, learning-rate-free, and truncation-free. The major challenge in the development of the algorithm, and the focus of the talk, is the combinatorial problem of component identification. Most BNP models contain some notion of a countably infinite set of “components” (e.g. clusters in a DP mixture model), and do not impose an inherent ordering on them. Thus, in order to combine information about the components from multiple processors, the correspondence between components must first be found. This talk will present an optimization problem to find this correspondence for a large class of BNP models, and an efficient solution technique based on Jensen's inequality and linear programming. The talk will conclude with an application of the algorithm to the Dirichlet process mixture model, with experimental results demonstrating its scalability and performance in practice.
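
A toy version of the matching step (assuming, for illustration only, equal numbers of components with Gaussian mean parameters on two processors) can be phrased as a linear assignment problem and solved with the Hungarian algorithm; the talk's formulation is more general:

    # Match components across two workers by minimizing the total distance
    # between their parameters (a linear assignment problem).
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def match_components(means_a, means_b):
        cost = np.linalg.norm(means_a[:, None, :] - means_b[None, :, :], axis=2)
        rows, cols = linear_sum_assignment(cost)   # minimum-cost matching
        return dict(zip(rows, cols))               # component i on A <-> cols[i] on B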


Advances in Adaptive MCMC
Organizer: Radu Craiu

Adaptive, Delayed-Acceptance MCMC for Targets with Expensive Likelihoods

Chris Sherlock (Lancaster University, UK)

When conducting Bayesian inference, delayed acceptance (DA) Metropolis-Hastings (MH) algorithms and DA pseudo-marginal MH algorithms can be applied when it is computationally expensive to calculate the true posterior or an unbiased estimate thereof, but a computationally cheap approximation is available. A first accept-reject stage is applied, with the cheap approximation substituted for the true posterior in the MH acceptance ratio. Only for those proposals which pass through the first stage is the computationally expensive true posterior (or unbiased estimate thereof) evaluated, with a second accept-reject stage ensuring that detailed balance is satisfied with respect to the intended true posterior. In some scenarios there is no obvious computationally cheap approximation. A weighted average of previous evaluations of the computationally expensive posterior provides a generic approximation to the posterior. If only the k-nearest neighbours have non-zero weights, then evaluation of the approximate posterior can be made computationally cheap provided that the points at which the posterior has been evaluated are stored in a multi-dimensional binary tree, known as a KD-tree. The contents of the KD-tree are potentially updated after every computationally intensive evaluation. The resulting adaptive, delayed-acceptance [pseudo-marginal] Metropolis-Hastings algorithm is justified both theoretically and empirically. Guidance on tuning parameters is provided, and the methodology is applied to a discretely observed Markov jump process characterising predator-prey interactions and an ODE system describing the dynamics of an autoregulatory gene network.
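
The two-stage mechanics described above fit in a few lines. In this hedged Python sketch, pi is the expensive (unnormalised) posterior and pi_hat the cheap approximation; the stage-two ratio restores detailed balance with respect to pi:

    # One delayed-acceptance MH step with a symmetric random-walk proposal.
    import numpy as np
    rng = np.random.default_rng(2)

    def da_step(x, pi, pi_hat, scale=0.5):
        y = x + scale * rng.standard_normal()
        if rng.random() >= min(1.0, pi_hat(y) / pi_hat(x)):
            return x                                  # rejected cheaply, stage 1
        a2 = (pi(y) * pi_hat(x)) / (pi(x) * pi_hat(y))
        return y if rng.random() < a2 else x          # expensive call, stage 2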

Adapting to Model Structure

Jim Griffin (University of Kent)

The importance of statistical methods which can adjust to the structure of a problem, such as variable selection methods in regression or mixture model-based methods for clustering, is well understood. There has been less work on how MCMC algorithms might adjust or adapt to the problem at hand. In this talk, I will discuss some strategies for adaptive MCMC algorithms in problems where the model structure is defined in a high-dimensional space. This involves the construction of proposals which can be easily sampled and which are able to express some features of the posterior distribution. Adaptive Monte Carlo methods will be used to tune these proposals.


Adaptive MCMC For Everyone
Jeff Rosenthal

(University of Toronto)
Markov chain Monte Carlo (MCMC) algorithms, such as the Metropolis Algorithm and the Gibbs Sampler, are an extremely useful and popular method of approximately sampling from complicated probability distributions. Adaptive MCMC attempts to automatically modify the algorithm while it runs, to improve its performance on the fly. However, such adaptation often destroys the ergodicity properties necessary for the algorithm to be valid. In this talk, we first illustrate basic MCMC algorithms using simple Java applets. We then discuss adaptive MCMC, and present results and examples concerning its ergodicity and efficiency. We close with some recent ideas which make adaptive MCMC more widely applicable for all users in broader contexts.
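A minimal example of the kind of on-the-fly tuning at issue (a textbook Haario-style adaptive Metropolis sketch of ours, not the speaker's code):

    # Adaptive Metropolis with diminishing adaptation.
    import numpy as np

    def adaptive_metropolis(log_post, x0, n_iter, rng):
        d = len(x0)
        x = np.asarray(x0, dtype=float)
        mean, cov = x.copy(), np.eye(d)
        chain = np.empty((n_iter, d))
        for t in range(n_iter):
            prop_cov = (2.38**2 / d) * cov + 1e-6 * np.eye(d)
            y = rng.multivariate_normal(x, prop_cov)
            if np.log(rng.uniform()) < log_post(y) - log_post(x):
                x = y
            chain[t] = x
            # O(1/t) step sizes make the adaptation diminish, one of the
            # standard conditions under which ergodicity is preserved.
            g = 1.0 / (t + 2)
            mean = mean + g * (x - mean)
            cov = cov + g * (np.outer(x - mean, x - mean) - cov)
        return chain

    chain = adaptive_metropolis(lambda z: -0.5 * z @ z, np.zeros(2),
                                5000, np.random.default_rng(1))

The talk's point is precisely that naive versions of such schemes can fail: the adaptation must satisfy conditions such as diminishing adaptation and containment for the chain to remain valid.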

Adaptation within Exact Approximations of MCMC
Matti Vihola

(University of Jyväskylä, Finland)
Exact approximations of MCMC algorithms are an emerging class of MCMC methods which include an additional layer of approximation within a “standard” MCMC method, in such a way that the algorithm remains valid (i.e. retains correct limiting properties). Particle MCMC and pseudo-marginal MCMC methods are instances of such algorithms. The talk is about developing adaptive methods within the EAMCMC context.
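For reference, the pseudo-marginal construction keeps the exact posterior as its target by recycling the likelihood estimate at the current point (a generic sketch, with a toy unbiased estimator of our own choosing):

    # Pseudo-marginal Metropolis-Hastings for a scalar parameter.
    import numpy as np

    def loglik_hat(theta, rng, n=32):
        # Toy unbiased likelihood estimate: y = 1.3 observed, latent
        # u ~ N(0,1) integrated out by plain Monte Carlo averaging.
        u = rng.standard_normal(n)
        return np.log(np.mean(np.exp(-0.5 * (1.3 - theta - u) ** 2)))

    def pm_mh(log_prior, theta0, n_iter, rng, scale=0.3):
        theta = float(theta0)
        ll = loglik_hat(theta, rng)   # recycled, never refreshed in place
        draws = []
        for _ in range(n_iter):
            prop = theta + scale * rng.standard_normal()
            ll_prop = loglik_hat(prop, rng)
            log_a = log_prior(prop) + ll_prop - log_prior(theta) - ll
            if np.log(rng.uniform()) < log_a:
                theta, ll = prop, ll_prop
            draws.append(theta)
        return np.array(draws)

    draws = pm_mh(lambda t: -0.5 * t**2, 0.0, 5000, np.random.default_rng(2))

Adaptation enters naturally here, e.g. tuning the proposal scale or the number of auxiliary samples, and doing so without breaking the exact-approximation property is the subject of the talk.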


BREAKING NEWS!

Asynchronous Distributed Gibbs Sampling
Alexander Terenin1, Daniel Simpson2, and David Draper1
(1 University of California, Santa Cruz; 2 University of Bath, UK)

Gibbs sampling is a widely used MCMC method for numerically approximating Bayesian integrals; however, naive Gibbs implementations do not parallelize readily to permit Bayesian analysis at large scale. In this paper, we present a novel scheme – Asynchronous Distributed Gibbs (ADG) sampling – that allows us to perform MCMC in a parallel fashion with no synchronization or locking, avoiding typical performance bottlenecks of parallel algorithms. Our method is especially attractive in settings in which each observation has its own random effect, so that the problem dimension grows with the sample size. Examples demonstrating the effectiveness of ADG sampling include a Gaussian process regression with n = 71,500 observations, in which ADG sampling produced 10,000 draws from each of 143 single-CPU workers in only about 20 minutes of clock time, thereby correctly reconstructing the true regression function with minimal Monte Carlo error; and a hierarchical mixed-effects regression model with n = 1,000,000 observations, each of which had its own latent random effect that had to be sampled; when compared with a standard Gibbs sampler that produced 1,000 iterations with 8 threads in 12 hours, ADG sampling with 12 workers (8 threads each) yielded comparable Monte Carlo accuracy in only 2 hours.

MCMC for a Class of Infinite Dimensional Models
Andriy Norets

(Brown University, USA)
This paper develops an MCMC algorithm for the following model: for an integer $m$, $\theta_j \in \mathbb{R}^d$, $\theta_{1:\infty} = (\theta_1, \theta_2, \ldots)$, and $\theta_{1:m} = (\theta_1, \theta_2, \ldots, \theta_m)$, the observables density $p(y_i \mid m, \theta_{1:\infty}) = p(y_i \mid m, \theta_{1:m})$ and a prior $\Pi(\theta_{1:m} \mid m)\,\Pi(m)$ are specified. The objective is to explore the posterior $\Pi(m, \theta_{1:m} \mid Y)$, where $Y = (y_1, \ldots, y_n)$. Reversible jump MCMC can in principle be used for this task; however, for complicated models, developing suitable proposal distributions is very nontrivial. An example of such a model, and our main motivation, is a nonparametric conditional density model in which normal regressions are mixed, mixing weights are proportional to an exponent of a quadratic function of covariates, and a prior on the number of mixture components is specified.
The innovation of the proposed algorithm is to design $\tilde{\Pi}(\theta_{m+1:\infty} \mid m, \theta_{1:m}, Y)$ so as to maximize the expected acceptance probability for Metropolis-Hastings moves between $m$ and $m \pm 1$ ($\tilde{\Pi}$ affects only the MCMC and not $\Pi(m, \theta_{1:m} \mid Y)$). The algorithm implemented for the conditional density model performs well on simulated data, with acceptance rates for $m$ in the 0.5-10% range. The algorithm can also be useful for estimation of simpler models such as countable mixtures of normals.
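For concreteness, with the dimension-change proposal drawn from $\tilde{\Pi}$, the acceptance probability of a move from $m$ to $m+1$ takes the standard birth-move form (our reconstruction of the generic formula, not a display from the paper):
$$
\alpha = \min\left\{1,\; \frac{p(Y \mid m+1, \theta_{1:m+1})\,\Pi(\theta_{1:m+1} \mid m+1)\,\Pi(m+1)\,q(m \mid m+1)}{p(Y \mid m, \theta_{1:m})\,\Pi(\theta_{1:m} \mid m)\,\Pi(m)\,q(m+1 \mid m)\,\tilde{\Pi}(\theta_{m+1} \mid m, \theta_{1:m}, Y)}\right\},
$$
where $q$ denotes the probability of proposing the dimension move and $\theta_{m+1} \sim \tilde{\Pi}(\cdot \mid m, \theta_{1:m}, Y)$; choosing $\tilde{\Pi}$ to maximize the expected value of $\alpha$ is the design problem described above.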


Quantum Computing and MCMC
Yazhen Wang

(University of Wisconsin-Madison)
Quantum computation performs calculations by using quantum devices instead of the electronic devices following classical physics that are used by classical computers. Although general-purpose quantum computers of practical scale may be many years away, special-purpose quantum computers are being built with capabilities exceeding those of classical computers. One prominent case is the commercially available D-Wave quantum computer, which is built to implement quantum annealing for solving optimization problems.
A classical annealing approach is simulated annealing, which takes into account the relative configuration energies and a fictitious time-dependent temperature when probabilistically exploring the immense search space by Markov chain Monte Carlo methods. For a given optimization problem, the objective function to be minimized is identified with the energy of a physical system, and we then give the physical system a temperature as an artificially introduced control parameter. By reducing the temperature gradually from a high value to zero during the time evolution of simulated annealing, we wish to drive the system to the state with the lowest value of the energy (objective function) and thus reach the solution of the optimization problem. Quantum annealing is based on the physical process of a quantum system whose lowest energy, which is called a ground state of the system, represents the solution to the optimization problem posed. It starts with building a simple quantum system initialized in its ground state, and then moves the simple system gradually towards the target complex system. According to quantum theory, as the system gradually evolves, it tends to remain in a ground state, and hence measuring the state of the final system will yield an answer to the original optimization problem with some probability. The key idea behind quantum annealing is to replace the thermal fluctuations of simulated annealing by quantum fluctuations via quantum tunneling, so that the system is kept close to the instantaneous ground state of the quantum system during the quantum annealing evolution, analogous to the quasi-equilibrium state maintained during the Markov chain Monte Carlo dynamic evolution of simulated annealing. This talk will present quantum annealing and its implementation by the D-Wave quantum computer. We will discuss sampling properties of the D-Wave quantum computer and some related approximations of quantum annealing by various Markov chain Monte Carlo based annealing approaches.
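The simulated annealing baseline described above fits in a few lines (an illustrative sketch; the toy objective and the linear cooling schedule are our own choices):

    # Simulated annealing: random-walk Metropolis with decreasing temperature.
    import numpy as np

    def simulated_annealing(energy, x0, n_iter, rng, t0=5.0, t_min=1e-3):
        x = np.asarray(x0, dtype=float)
        e = energy(x)
        best, best_e = x.copy(), e
        for t in range(n_iter):
            temp = max(t0 * (1.0 - t / n_iter), t_min)   # cooling schedule
            y = x + 0.5 * rng.standard_normal(x.shape)
            de = energy(y) - e
            # Metropolis rule at the current temperature.
            if de < 0 or rng.uniform() < np.exp(-de / temp):
                x, e = y, e + de
            if e < best_e:
                best, best_e = x.copy(), e
        return best, best_e

    best, val = simulated_annealing(
        lambda z: (z**2).sum() + np.sin(5 * z).sum(),
        np.ones(3), 20000, np.random.default_rng(0))

Quantum annealing replaces the temperature schedule with a schedule on quantum fluctuations; the D-Wave machine implements that evolution in hardware rather than by simulation.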

Informed MCMC Proposals in Discrete Spaces
Giacomo Zanella

(University of Warwick, UK)
There is a great need for methodological results to guide practitioners in designing efficient Markov chain Monte Carlo (MCMC) algorithms in discrete spaces. For example, it is still unclear how to extend gradient-based MCMC (e.g. Langevin and Hamiltonian schemes) to spaces of networks or partitions.
Motivated by this observation, we consider the problem of designing appropriate informed proposals in discrete spaces. In particular: assuming perfect knowledge of the target measure, what is the optimal Metropolis-Hastings proposal given a fixed set of allowed moves? Under regularity assumptions on the target, we derive the class of asymptotically optimal proposal distributions, which we call Balanced Proposals (BPs). Such proposals are maximal elements, in terms of Peskun ordering, among proposals obtained as transformations of the target density (results in Zanella 2016). This class of proposals includes the Langevin MCMC scheme and can be seen as a generalization of gradient-based methods to discrete frameworks. Preliminary versions of such algorithms were used in Zanella 2015 to sample from matching spaces. Moreover, the flexibility of the BP framework makes it possible to design novel informed MCMC schemes even in continuous spaces. In particular, we discuss how to use BPs to design parallel MCMC proposals in high dimensions.
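A minimal instance of a balanced proposal on the hypercube {0,1}^d, with single-bit flips weighted by g(t) = sqrt(t) and the usual Metropolis-Hastings correction (our illustrative sketch, following the framework only in spirit):

    # One locally balanced flip move on a binary state space.
    import numpy as np

    def flip_probs(x, log_pi, log_px):
        d = len(x)
        log_w = np.empty(d)
        for i in range(d):            # weight flips by sqrt(pi(y)/pi(x))
            y = x.copy(); y[i] ^= 1
            log_w[i] = 0.5 * (log_pi(y) - log_px)
        w = np.exp(log_w - log_w.max())
        return w / w.sum()

    def balanced_flip_step(x, log_pi, rng):
        log_px = log_pi(x)
        probs = flip_probs(x, log_pi, log_px)
        i = rng.choice(len(x), p=probs)
        y = x.copy(); y[i] ^= 1
        log_py = log_pi(y)
        probs_rev = flip_probs(y, log_pi, log_py)
        # MH ratio accounts for the state-dependent normalisation.
        log_a = log_py - log_px + np.log(probs_rev[i]) - np.log(probs[i])
        return y if np.log(rng.uniform()) < log_a else x

    rng = np.random.default_rng(4)
    x = np.zeros(10, dtype=int)
    log_pi = lambda s: -0.5 * (s.sum() - 3.0) ** 2   # favours states with 3 ones
    for _ in range(200):
        x = balanced_flip_step(x, log_pi, rng)

With g(t) = 1 this reduces to a uniform random flip; the square-root weighting is one choice that makes the proposal balanced in the sense sketched above.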


Probabilistic Numerical Methods for the Solution of Nonlinear Partial Differential Equations

Jon Cockayne1, Chris Oates2, Tim Sullivan1 and Mark Girolami4
(1 University of Warwick, UK; 2 University of Technology Sydney, AUS; 4 Freie Universität Berlin, DE)

Recent work by Conrad et al. (2015) establishes probabilistic foundations for models of the numerical error arising in the numerical solution by finite element approximation of ordinary and partial differential equations (PDEs). Such methods are of particular interest for PDEs since explicit solutions are rarely available, and obtaining numerical estimates at arbitrary precision is computationally infeasible. Thus, a rigorous quantification of uncertainty in the approximate solution is important.
We seek to extend this work to nonlinear PDEs. Nonlinearity precludes closed-form solutions for the posterior probability measure over solutions, and so MCMC must be employed to sample from the solution space. We develop methods for this sampling, and apply these to the solution of parameter inference problems for nonlinear PDEs.

ABC Parameter Estimation: the One Problem One Forest Approach
Jean-Michel Marin1, Pierre Pudlo2, Louis Raynal1, Mathieu Ribatet1 & Christian P. Robert3

(1 U. of Montpellier, IMAG, France; 2 Aix-Marseille University, IMM, France; 3 U. Paris Dauphine, CEREMADE, France)
As statistical models tend to become increasingly elaborate, it is not unusual that likelihood-based approaches for model fitting are no longer possible. Typically, such situations arise when no explicit form for the likelihood is available. To bypass this hurdle, the last decade has seen several strategies, among which composite likelihoods in a frequentist framework and Approximate Bayesian Computation (ABC) in a Bayesian context are popular options. We focus on the latter.
Since its introduction in population genetics, the method has found an ever-increasing range of applications covering diverse types of complex models in various scientific fields. However, it suffers from two major difficulties. First, to ensure reliability of the method, the number of simulations must be large; hence, it proves difficult to apply ABC to large datasets. Second, calibration has always been a critical step in ABC implementation.
Instead of using classical ABC methods to approximate the posterior distribution and then summarizing it, in this work we consider another strategy, which consists in using regression or quantile Random Forests (RF) especially designed for the quantity of interest, such as posterior expectations, variances or quantiles.
We choose RF for parameter estimation because RF regression and quantile methods were shown to be mostly insensitive both to strong correlations between predictors (here the summary statistics) and to the presence of noisy variables, even in relatively large numbers, a characteristic that k-Nearest Neighbor classifiers lack.
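In code, the regression-forest variant of the idea is compact (a toy normal-mean example of ours; the summaries, sample sizes and forest settings are illustrative):

    # ABC via random forests: regress parameters on summary statistics.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(7)
    n_sim = 20000
    theta = rng.normal(0.0, 2.0, n_sim)                  # prior draws
    data = rng.normal(theta[:, None], 1.0, (n_sim, 10))  # simulator runs
    summaries = np.column_stack([data.mean(axis=1), data.std(axis=1)])

    forest = RandomForestRegressor(n_estimators=200, min_samples_leaf=5)
    forest.fit(summaries, theta)

    y_obs = rng.normal(1.5, 1.0, 10)                     # observed data
    s_obs = np.array([[y_obs.mean(), y_obs.std()]])
    posterior_mean_estimate = forest.predict(s_obs)[0]   # close to 1.5 here

One forest is trained per quantity of interest (hence "one problem, one forest"), with quantile forests playing the same role for posterior quantiles.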


Beyond Worst-case Mixing Times for Markov Chains
Maxim Rabinovich, Aaditya Ramdas, Michael I. Jordan & Martin J. Wainwright

(University of California, Berkeley, USA)
Methods based on Markov chains play a critical role in machine learning and statistics, where they form the basis of Markov chain Monte Carlo (MCMC) algorithms for estimating posterior probabilities and expectations. These algorithms are often limited by their dependence on approximate convergence of the chain to its stationary distribution. That convergence can take an extremely long time, leading to high computational cost, or be difficult to assess, leading to poor control of estimation error. In this paper, we begin an investigation of localized mixing times that depend on the functions whose expectations we would like to estimate. We first define function-dependent analogues of the classical global notions of mixing time and spectral gap, and show that these are related in the way one would expect from the global case. We then use our framework to prove concentration bounds for out-of-equilibrium empirical averages that depend only on the mixing properties of the target function, not of the entire chain. We conclude with a discussion of the potential impact of this line of work.
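One natural way to formalize the localized notion (our notation; the paper's precise definitions may differ) is to measure convergence only through the target function $f$:
$$
\tau_f(\epsilon) \;=\; \min\Big\{\, n \;:\; \sup_{x}\,\big|\,\mathbb{E}_x[f(X_n)] - \pi(f)\,\big| \le \epsilon \,\Big\},
$$
to be compared with the classical mixing time, which takes a supremum over all bounded test functions via the total variation distance; for a fixed $f$, $\tau_f$ can be far smaller than the worst-case mixing time.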

Fast Likelihood-free Inference via Bayesian Optimization
Michael U. Gutmann and Jukka Corander

(University of Helsinki, Finland)
Statistical models may be specified in terms of a stochastic computer program – a simulator – which can generate samples from the model for any configuration of the parameters. While such models support complex data-generating mechanisms, the likelihood function is generally incomputable, which renders statistical inference difficult. Several likelihood-free inference methods have been proposed which share the basic idea of identifying the model parameters by finding values for which the discrepancy between simulated and observed data is small. Examples are indirect inference and approximate Bayesian computation. A major obstacle to using these methods is their computational cost. The cost is largely due to the need to repeatedly simulate data sets and the lack of knowledge about how the parameters affect the discrepancy. We propose a strategy which combines probabilistic modeling of the discrepancy with optimization to facilitate likelihood-free inference. The strategy is implemented using Bayesian optimization and is shown to accelerate the inference through a reduction in the number of required simulations by several orders of magnitude.

Reference: M.U. Gutmann and J. Corander, Bayesian Optimization for Likelihood-Free Inference of Simulator-Based Statistical Models, Journal of Machine Learning Research, 2015. arXiv:1501.03291
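A stripped-down version of the loop, with a GP surrogate for the discrepancy and a lower-confidence-bound acquisition rule (our illustrative choices, not necessarily those of the paper):

    # Likelihood-free inference with a GP model of the discrepancy.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(3)
    y_obs = rng.normal(2.0, 1.0, 50)               # "observed" data

    def discrepancy(theta):                        # one simulator call
        sim = rng.normal(theta, 1.0, 50)
        return abs(sim.mean() - y_obs.mean())

    thetas = list(rng.uniform(-5, 5, 5))           # initial design
    discs = [discrepancy(t) for t in thetas]
    grid = np.linspace(-5, 5, 400)[:, None]

    for _ in range(25):
        gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(0.1))
        gp.fit(np.array(thetas)[:, None], np.array(discs))
        mu, sd = gp.predict(grid, return_std=True)
        t_next = grid[np.argmin(mu - sd), 0]       # lower confidence bound
        thetas.append(t_next)
        discs.append(discrepancy(t_next))
    # After ~30 simulator calls the surrogate minimum sits near theta = 2.

The point of the paper is that such surrogate-guided designs cut the number of simulator runs by orders of magnitude relative to rejection-style ABC.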

An Unbiased and Scalable Monte Carlo Method for Bayesian Inference for Big Data
Murray Pollock1, Paul Fearnhead2, Adam M. Johansen1 & Gareth O. Roberts1

(1 University of Warwick; 2 Lancaster University)
This poster will introduce novel methodology for exploring posterior distributions by modifying methodology for exactly (without error) simulating diffusion sample paths – the Scalable Langevin Exact Algorithm (ScaLE). This new method has remarkably good scalability properties (among other interesting properties) as the size of the data set increases (it has sub-linear cost, and potentially no cost), and therefore is a natural candidate for “Big Data” inference.


Coupling Monte Carlo Approximation and Optimization Methods for Challenging Inference Problems

Gersende Fort
(LTCI, CNRS, Telecom ParisTech, Université Paris-Saclay, France)

In penalized likelihood estimation, we are faced with solving an optimization problem of the form
$$\operatorname{argmin}_{\theta \in \Theta} \{ -\ell(\theta) + g(\theta) \} \qquad (1)$$
where $\ell$ is an intractable log-likelihood function and $g$ is a penalty function. Intractability arises because the likelihood is known only up to a normalization factor, depending on the parameter to estimate, which cannot be computed explicitly.
This is the case when considering, for example, parameter inference in random fields or undirected graphical models. In such cases, $\ell$ and $\nabla\ell$ are integrals with respect to some Gibbs probability measure $\pi_\theta$ on some measurable space $\mathsf{X}$, and are known only up to the partition function $Z_\theta$ (normalization constant).
Intractable likelihood functions and intractable gradients also arise when dealing with hierarchical latent variable models, including missing data or mixed effects models. Here again, $\ell(\theta)$ and $\nabla\ell(\theta)$ are integrals w.r.t. a distribution $\pi_\theta$ which represents the conditional distribution of the latent/missing variables given the parameter and the data. In both cases, $\pi_\theta$ not only depends on $\theta$, but is often difficult to simulate and typically requires the use of Markov chain Monte Carlo (MCMC) methods.
Another source of intractability arises when learning on a huge data set. In this case, $\ell$ is written as $\ell = \sum_{i=1}^{N} \ell_i$, where $N$ is the sample size. Even if $\nabla\ell$ can be computed explicitly, it is more practical to reduce the computational cost by using a Monte Carlo estimate of this quantity.
A classical optimization tool for solving problems of the form (1) when $-\ell$ and $g$ are convex is the Proximal Gradient algorithm. It is an iterative algorithm; each iteration combines a gradient step in the direction of $\nabla\ell$ and a proximal step to take into account the penalty term $g$. When $\nabla\ell$ is intractable, an algorithmic solution is to approximate the exact gradient by a Monte Carlo sum. What can be said about this Stochastic Proximal Gradient algorithm: does it converge to the same limiting points as the exact one? If it converges, what is its rate of convergence? How many Monte Carlo samples are needed at each iteration?
This talk/poster will address a perturbed version of the proximal gradient algorithm for which the gradient $\nabla\ell$ is not known in closed form and should be approximated. We will discuss the convergence and derive a non-asymptotic bound on the convergence rate for the perturbed proximal gradient. When the approximation is achieved by using Monte Carlo methods, we derive conditions involving the Monte Carlo batch size under which convergence is guaranteed. In particular, we show that the Monte Carlo approximations of some proximal gradient algorithms achieve the same convergence rates as their deterministic counterparts. To illustrate, we apply the algorithms to high-dimensional generalized linear mixed models using $\ell_1$-penalization.
The work is based on the paper “On Stochastic Proximal Gradient Algorithms” (arXiv:1402.2365 math.ST) by Yves Atchadé, Gersende Fort and Eric Moulines.
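A minimal instance of the perturbed iteration for an $\ell_1$ penalty, where the proximal map is exact (soft-thresholding) and the gradient is a Monte Carlo estimate, here a subsample average (the batch size, step size and toy logistic model are our illustrative choices):

    # Stochastic proximal gradient for l1-penalised logistic regression.
    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def stochastic_prox_grad(grad_i, N, theta0, lam, n_iter, rng,
                             step=0.1, batch=64):
        theta = np.asarray(theta0, dtype=float)
        for _ in range(n_iter):
            idx = rng.choice(N, size=batch, replace=False)
            g_hat = np.mean([grad_i(i, theta) for i in idx], axis=0)
            theta = soft_threshold(theta - step * g_hat, step * lam)
        return theta

    rng = np.random.default_rng(0)
    X = rng.standard_normal((1000, 20))
    beta = np.zeros(20); beta[:3] = 2.0
    y = rng.uniform(size=1000) < 1.0 / (1.0 + np.exp(-X @ beta))

    def grad_i(i, th):   # gradient of the i-th term of -l, rescaled by N
        p = 1.0 / (1.0 + np.exp(-X[i] @ th))
        return (p - y[i]) * X[i] * len(y)

    theta_hat = stochastic_prox_grad(grad_i, len(y), np.zeros(20),
                                     lam=5.0, n_iter=500, rng=rng)

In the settings of the talk the gradient estimate instead comes from MCMC draws targeting $\pi_\theta$, and the batch-size conditions referred to above dictate how many such draws each iteration needs.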


Moment Conditions and Bayesian Nonparametrics
Luke Bornn1, Neil Shephard2 & Reza Solgi2

(1 Simon Fraser University, Canada; 2 Harvard University, USA)
Models phrased through moment conditions are central to much of modern statistics and econometrics. Here these moment conditions are embedded within a nonparametric Bayesian setup. Handling such a model is not probabilistically straightforward, as the posterior has support on a manifold. We solve the relevant issues, building new probability and computational tools using Hausdorff measures to analyze them on real and simulated data. These new methods can be applied widely, including providing Bayesian analysis of quasi-likelihoods, linear and nonlinear regression and quantile regression, missing data, set-identified models, and hierarchical models.
For a pre-print, see here: http://arxiv.org/abs/1507.08645

The Small Clustering Problem: When Cluster Sizes Don’t Grow with the Number of Data Points

Rebecca C. Steorts
(Duke University, USA)

Most generative models for clustering implicitly assume that the number of data points in each cluster grows linearly with the total number of data points in the data set. Finite mixture models, Dirichlet process mixture models, and Pitman-Yor mixture models make this assumption, as do all other infinitely exchangeable clustering models. However, for some applications, this assumption is undesirable. For example, when performing entity resolution (i.e., identifying duplicate records in large, noisy databases), the process responsible for each cluster is often unrelated to the size of the data set, and every cluster contains a negligible fraction of points even as the data set grows. Such applications require models that yield small clusters. We address this issue by mathematically defining the small clusters property (a property of a sequence of random partitions) and introduce a new clustering model that possesses this property. Furthermore, on large data sets with small clusters, standard inference algorithms for partition-based models run extremely slowly. We propose a novel algorithm that overcomes exactly this problem. We compare the suitability of this model and several commonly used clustering models for applications involving small clusters by checking model fit using real and simulated data sets.
This is joint work with Jeffrey Miller, Abbas Zaidi, Brenda Betancourt (Duke University), and Hanna Wallach (UMass Amherst and Microsoft Research).
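One way to make the property precise (our notation, in the spirit of the abstract): writing $M_N$ for the size of the largest cluster in a random partition of $N$ points, a sequence of partitions has small clusters if
$$
M_N / N \;\xrightarrow{\;p\;}\; 0 \qquad \text{as } N \to \infty,
$$
so that every cluster occupies a vanishing fraction of the data, in contrast with infinitely exchangeable models, where the largest cluster typically grows linearly in $N$.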


POSTERS

Posters will be presented in alphabetical order by presenting author:
• Tuesday evening (21:00–23:30), Jan. 5: From Aquila to Norets, plus the Breaking News posters by Bornn, Cockayne, Fort, Gutmann, Marin
• Wednesday evening (21:00–23:30), Jan. 6: From Pollock to Zanella, plus the Breaking News posters by Pollock, Rabinovich, Wang, Steorts, Terenin, Zanella

How to Shape Risk Appetite in Presence of Franchise Value?
Cecilia Aquila and Giovanni Barone-Adesi

Università della Svizzera Italiana and Swiss Finance Institute

A Comparison of MCMC for Big Data
Jack Baker1, Paul Fearnhead1, Emily Fox2 and Christopher Nemeth1

1 Lancaster University, UK
2 University of Washington, USA

Accelerating Metropolis–Hastings Algorithms by Delayed Acceptance
Marco Banterle1, Clara Grazian2, Anthony Lee3 and Christian P. Robert4

1 Université Paris Dauphine and CREST, France
2 Sapienza Università di Roma, Italy, Université Paris Dauphine, and CREST, France
3 University of Warwick, UK
4 Université Paris Dauphine, and CREST, France and University of Warwick, UK

Bayesian Spatiotemporal Boundary Detection for Diagnosing Progression of Glaucoma Using Visual Field Data
Samuel I. Berchuck1, Joshua L. Warren2 and Amy H. Herring1

1 University of North Carolina-Chapel Hill, USA
2 Yale University, USA

Probabilistic Integration with Theoretical Guarantees
François-Xavier Briol

University of Warwick, UK


Bayesian Approach to CO2 Retrievals for the OCO-2 Instrument Using a Surrogate Forward-Model
Jenny Brynjarsdottir1, Amy Braverman2 and Jonathan Hobbs2

1 Case Western Reserve University, USA
2 Jet Propulsion Laboratory, USA

Weighted Particle Tempering
Marcos Carzolio and Scotland Leman

Virginia Tech, USA

Adaptive Gibbs Sampler
Cyril Chimisov, Krys Latuszynski and Gareth O. Roberts

University of Warwick, UK

A Conservative Variance Estimation Method for Multivariate MCMC
Ning Dai

University of Minnesota, USA

Approximate Bayesian Computation for Semi-Parametric Problems
Clara Grazian1,2 and Brunero Liseo1

1 MEMOTEF, Sapienza Università di Roma, Italy
2 CEREMADE, Université Paris Dauphine, France

On the Identifiability of Transmission Dynamic Models for Infectious Diseases
Jarno Lintusaari1, Michael U. Gutmann1,2, Samuel Kaski1 and Jukka Corander2

1 HIIT, Aalto University, Finland
2 HIIT, University of Helsinki, Finland

Block Hyper-G Priors in Bayesian Regression
Christopher M. Hans1, Agniva Som2 and Steven N. MacEachern1

1 The Ohio State University, USA
2 Amazon India


Bridging Between Variational Bayes and True Posterior Via MCMC
Daniel Hernandez-Stumpfhauser1, David B. Dunson2 and Amy H. Herring1

1 University of North Carolina at Chapel Hill, USA
2 Duke University, USA

Next-Generation Gibbs-Type Samplers: Combining Strategies to Boost Efficiency
Xiyun Jiao and David A. van Dyk

Imperial College, London, UK

Non-informative Reparameterisations for Location-Scale Mixtures
Kaniav Kamary1, Kate Lee2 and Christian P. Robert1

1 CEREMADE, Université Paris Dauphine, France
2 Auckland University of Technology, New Zealand

Investigating Lateral Transfer on Phylogenetic Trees – Exact Inference Using Massive Systems of Differential Equations
Luke Kelly and Geoff Nicholls

University of Oxford, UK

Pseudo-Marginal Metropolis Light Transport
Joel Kronander1, Thomas B. Schön2 and Jonas Unger1

1 Linköping University, Sweden
2 Uppsala University, Sweden

A Bayesian Estimate of the Pricing Kernel
Giovanni Barone-Adesi1, Chiara Legnazzi1 and Antonietta Mira2

1 Università della Svizzera Italiana and Swiss Finance Institute
2 InterDisciplinary Institute of Data Science, U. della Svizzera Italiana, Lugano, Switzerland and U. dell’Insubria, Italy

On the Asymptotic Behaviour of ABC
Wentao Li and Paul Fearnhead

Lancaster University, UK


Baby Reversible Jump for Model Choice
John C. Liechty1, Merrill W. Liechty2, Murali Haran1 and Ephraim Hanks1

1 Penn State University, USA
2 Drexel University, USA

Bayesian Predictive Modeling for Personalized Treatment Selection in Oncology
Junsheng Ma, Francesco Stingo, and Brian P. Hobbs

Department of Biostatistics, M.D. Anderson Cancer Center, Houston, TX, USA

How to Sample from a Distribution When Only the Moments Are Known with an Application to Affine Models
Filippo Macaluso1, Antonietta Mira2 and Paul Schneider1

1 Università della Svizzera Italiana, Lugano, Switzerland
2 InterDisciplinary Institute of Data Science, U. della Svizzera Italiana, Lugano, Switzerland and U. dell’Insubria, Italy

Regularized Supervised Topic Models for High-Dimensional Multi-Class Regression
Måns Magnusson1, Leif Jonsson2 and Mattias Villani1

1 Linköping University, Sweden
2 Linköping University and Ericsson, Sweden

Adaptive Incremental Mixture Markov Chain Monte Carlo
Florian Maire1, Nial Friel1, Antonietta Mira2 and Adrian Raftery3

1 University College Dublin, Ireland
2 InterDisciplinary Institute of Data Science, U. della Svizzera Italiana, Lugano, Switzerland and U. dell’Insubria, Italy

3 University of Washington, USA

On Approximately Simulating Conditioned Diffusions
Sean Malory and Chris Sherlock

Lancaster University, UK


Cheeger Inequalities for the Mixing Times of Hamiltonian MCMC
Oren Mangoubi1 and Natesh Pillai2

1 Massachusetts Institute of Technology, USA
2 Harvard University, USA

Accelerating Bayes Inference for Evolutionary Biology Models
Xavier Meyer1,2, Bastien Chopard1 and Nicolas Salamin2

1 University of Geneva, Switzerland
2 University of Lausanne, Switzerland

Optimal Scaling of Particle and Pseudo-Marginal Metropolis-Adjusted Langevin Algorithms
Chris Nemeth

Lancaster University, UK

Variational Consensus Monte Carlo
Maxim Rabinovich, Elaine Angelino and Michael I. Jordan

University of California, Berkeley

A Gaussian Process Latent Variable Model for Single Cell Pseudotime Estimation
John Reid and Lorenz Wernisch

MRC Biostatistics Unit, Cambridge, UK

Pseudo-Marginal MCMC for Parameter Estimation in α-Stable Distributions
Marina Riabiz, Fredrik Lindsten and Simon Godsill

Signal Processing and Communications Laboratory, Engineering Department, University of Cambridge, UK

Ensemble Kalman Particle Filter For Convective Scale Data Assimilation
Sylvain Robert and Hans R. Künsch

ETH Zürich, Switzerland


From Data to Models in Conveying HIV/STD Prevalence Information to the Public: How Multi-Level Models Can Be Used To Improve Inference And Better Ensure the Anonymity of Released Information
Cody T. Ross1 and Karl J. Frost2

1 Santa Fe Institute, USA
2 University of California, Davis, USA

Gradient Importance Sampling
Ingmar Schuster

Université Paris Dauphine, France

Some Contributions to Sequential Monte Carlo Methods for Option Pricing
Deborshee Sen

National University of Singapore, Singapore

A Bayesian Nonparametric Approach to the Analysis of High Dimensional Longitudinal Data Sets
Kan Shang and Cavan Reilly

University of Minnesota, USA

Bayes Estimates of the Diversification of International Markets with Hierarchical Copulas and Vines
Alexander Knyazev1, Oleg Lepekhin1 and Arkady Shemyakin2

1 Astrakhan State University, Russia
2 University of St. Thomas, USA

Increased Levels of Co-Infection Revealed with an Approximate Bayesian Computation Approach
Jukka Sirén1, Benoit Barrès1,2 and Anna-Liisa Laine1

1 University of Helsinki, Finland
2 ANSES Lyon, France


Inferring Smart Home User Status with Particle MCMC
Jonathan Steinhart1,2

1 Austrian Institute of Technology, Austria
2 Johannes Kepler University, Austria

Using Bayesian Computing to Solve a Complex Problem in Astrophysics
David C. Stenning1, Rachel Wagner-Kaiser2, David A. van Dyk3, Ted von Hippel4, Nathan Stein5, Elliot Robinson6 and Ata Sarajedini3

1 Sorbonne Université, UPMC-CNRS, UMR 7095, Institut d’Astrophysique de Paris, France
2 Bryant Space Center, University of Florida, USA
3 Imperial College London, UK
4 Embry-Riddle Aeronautical University, USA
5 The Wharton School, University of Pennsylvania, USA
6 Argiope Technical Solutions, USA

Gradient-Free Hamiltonian Monte Carlo with Efficient Kernel Exponential Families
Heiko Strathmann1, Dino Sejdinovic2, Samuel Livingstone3, Zoltan Szabo1 and Arthur Gretton1

1 Gatsby Unit, University College London
2 Department of Statistics, University of Oxford
3 School of Mathematics, University of Bristol

Improving the Efficiency of the Parallel Tempering Algorithm
Nicholas Tawn and Gareth Roberts

University of Warwick, UK

Accuracy and Validity of Posterior Quantiles in Bayesian Inference Using Empirical Likelihoods
Laura Turbatu and Elvezio Ronchetti

University of Geneva, Switzerland

Spatio-temporal Species Distribution Model to Detect Outbreaks of Coral-Consuming Crown-of-Thorns Starfish in the Great Barrier Reef
Jarno Vanhatalo1, Geoff Hosack2 and Hugh Sweatman3

1 University of Helsinki, Finland
2 CSIRO Marine Laboratories, Australia

3 Australian Institute of Marine Science, Australia


Multivariate Output Analysis for Markov Chain Monte Carlo
Dootika Vats1, James M. Flegal2, and Galin L. Jones1

1 University of Minnesota, USA
2 University of California, Riverside, USA

A Comparison of Different Strategies for a Particle Markov Chain Monte Carlo Algorithm: Application to Plant Growth Models
Gautier Viaud1 and Paul-Henry Cournède1

1 Laboratory MICS, CentraleSupélec, France

Likelihood-Free Methods for Stochastic Models of Collective Cell Spreading
Brenda N. Vo1,2

1 Mathematical Sciences, Queensland University of Technology (QUT), Brisbane, Australia
2 ARC Centre of Excellence for Mathematical and Statistical Frontiers (ACEMS), QUT, Brisbane, Australia

On the Poisson Equation for Metropolis-Hastings Chains
Aleksandar Mijatović and Jure Vogrinc

Imperial College London, UK

Self-Tuning Metropolis-Hastings Moves
Christopher Sherlock and Lianting Xue

Lancaster University, UK

Assessing Monte Carlo Standard Error in Diffusion Magnetic Resonance Imaging
Yang Yang

University of Minnesota, USA


Author Index

Aguayo, Felipe, 31; Akhmatskaya, Elena, 13; Amara, Adam, 25; Arbel, Julyan, 33; Bacallado, Sergio, 16; Banerjee, Sudipto, 26; Bardenet, Rémi, 31; Barta, Winfried, 28; Berrocal, Veronica, 25; Betancourt, Michael, 12; Bierkens, Joris, 20; Bornn, Luke, 15, 44; Briol, François-Xavier, 24; Broderick, Tamara, 17; Campbell, Trevor, 36; Caron, François, 16; Chang, Won, 27; Choi, Hee Min, 29; Cockayne, Jon, 41; Corander, Jukka, 17, 42

Draper, David, 39; Dunson, David, 10; Everitt, Richard, 30; Fearnhead, Paul, 29, 42; Finke, Axel, 30; Flegal, James, 28; Fort, Gersende, 43; Garnett, Roman, 23; Gerber, Mathieu, 15; Giordano, Ryan, 32; Girolami, Mark, 41; Griffin, Jim, 37; Gruebler, Martin, 14; Guindani, Michele, 34; Gutmann, Michael, 42; Haario, Heikki, 21; Hammerling, Dorit, 26; Haran, Murali, 27; Heine, Kari, 30

Hensman, James, 36; Herbei, Radu, 28; Hobert, James, 29; Hoffman, Matt, 35; Jara, Alejandro, 34; Jasche, Jens, 14; Jenkins, Paul, 22; Ji, Yuan, 33; Johansen, Adam, 29, 42; Jordan, Michael, 10; Kamatani, Kengo, 20; Katzfuss, Matthias, 24; Kelly, Luke, 21; Korner-Nievergelt, Franzi, 14; Koskela, Jere, 31; Kucukelbir, Alp, 36; Latuszynski, Krys, 11; Lee, Anthony, 22; Lee, Juhee, 33; Lelièvre, Tony, 11; Livingstone, Sam, 13; Müller, Peter, 33; Mackey, Lester, 15; Marin, Jean-Michel, 41; Martin, Gael, 19; Naef-Daenzer, Beat, 14; Nguyen, XuanLong, 16; Nicholls, Geoff, 21; Nipoti, Bernardo, 34; Norets, Andriy, 39; Oates, Chris, 19, 41; Olsen, Andrew, 28; Owen, Art, 12, 15; Pereyra, Marcelo, 20; Pollock, Murray, 42; Prünster, Igor, 33; Pudlo, Pierre, 41; Rabinovich, Maxim, 42; Radivojevic, Tijana, 13; Raynal, Louis, 41; Rhee, Chang-han, 22; Ribatet, Mathieu, 41; Robert, Christian, 41; Roberts, Gareth, 42; Rosenthal, Jeff, 38; Rossell, David, 34; Sarkka, Simo, 23; Scott, Steven, 11

Sherlock, Chris, 37; Simpson, Daniel, 39; Spantini, Alessio, 21; Steorts, Rebecca, 44; Stingo, Francesco, 18; Suchard, Marc, 35; Sullivan, Tim, 41; Tansey, Wesley, 33; Telesca, Donatello, 35; Terenin, Alexander, 39; Vihola, Matti, 38; Vo, Brenda, 19; Vollmer, Sebastian, 23; Wang, Yazhen, 40; Williamson, Sinead, 32; Xu, Yanxun, 32, 33; Zanella, Giacomo, 40; Zucknick, Manuela, 18