Edit ABC part of text based on Ollie's suggestions. Cite Nott et al., 2018 for Chapter 8 on High Dimensional ABC of Sisson book.
lm2612 committed Apr 4, 2024
1 parent ae4a899 commit ff7990c
Showing 2 changed files with 12 additions and 1 deletion.
11 changes: 11 additions & 0 deletions paper.bib
@@ -20,6 +20,17 @@ @book{Sisson:2018
year = {2018}
}

+@incollection{Nott:2018,
+author = {Nott, David J. and Ong, Victor M.-H. and Fan, Y. and Sisson, S. A.},
+title = {High-{Dimensional} {ABC}},
+booktitle = {Handbook of {Approximate} {Bayesian} {Computation}},
+isbn = {978-1-315-11719-5},
+chapter = {8},
+pages = {211--241},
+year = {2018},
+publisher = {CRC Press},
+}

@article{Cleary:2021,
title = {Calibrate, emulate, sample},
journal = {Journal of Computational Physics},
2 changes: 1 addition & 1 deletion paper.md
@@ -67,7 +67,7 @@ Computationally expensive computer codes for predictive modelling are ubiquitous

In Julia, there are a few tools for performing non-accelerated uncertainty quantification, ranging from classical sensitivity analysis approaches, e.g., [UncertaintyQuantification.jl](https://zenodo.org/records/10149017) and GlobalSensitivity.jl [@Dixit:2022], to MCMC, e.g., [Mamba.jl](https://github.com/brian-j-smith/Mamba.jl) or [Turing.jl](https://turinglang.org/). For computational efficiency, ensemble methods also provide approximate sampling (e.g., the Ensemble Kalman Sampler [@Garbuno-Inigo:2020b;@Dunbar:2022a]), though these only provide Gaussian approximations of the posterior.

-Accelerated uncertainty quantification tools also exist for the related approach of Approximate Bayesian Computation (ABC), e.g., GpABC [@Tankhilevich:2020] or [ApproxBayes.jl](https://github.com/marcjwilliams1/ApproxBayes.jl?tab=readme-ov-file); these tools both approximately sample from the posterior distribution. In ABC, this approximation comes from bypassing the likelihood that is usually required in sampling methods, such as MCMC. Instead, the goal of ABC is to replace the likelihood with a scalar-valued sampling objective that compares model and data. In CES, the approximation comes from learning the parameter-to-data map, then following this it calculates an explicit likelihood and uses exact sampling via MCMC. Some ABC algorithms also make use of statistical emulators to further accelerate sampling (GpABC). ABC encounters challenges due to the subjective selection of summary statistics and distance metrics, as well as the risk of approximation errors, particularly in high-dimensional settings [@Sisson:2018]. CES addresses these issues by employing direct sampling using an emulator, although is restricted to an explicit Gaussian likelihood, unlike in ABC.
+Accelerated uncertainty quantification tools also exist for the related approach of Approximate Bayesian Computation (ABC), e.g., GpABC [@Tankhilevich:2020] or [ApproxBayes.jl](https://github.com/marcjwilliams1/ApproxBayes.jl?tab=readme-ov-file); these tools both sample approximately from the posterior distribution. In ABC, the approximation comes from bypassing the likelihood that is usually required in sampling methods such as MCMC; instead, ABC replaces the likelihood with a scalar-valued sampling objective that compares model and data. In CES, the approximation comes from learning the parameter-to-data map; an explicit likelihood is then constructed from the emulated map and sampled exactly via MCMC. Some ABC algorithms also make use of statistical emulators to further accelerate sampling (e.g., GpABC). Although flexible, ABC encounters challenges due to the subjectivity of the chosen summary statistics and distance metrics, which may lead to approximation errors, particularly in high-dimensional settings [@Nott:2018]. CES is more restrictive, due to its use of an explicit Gaussian likelihood, but leverages this structure to handle high-dimensional data.

Several other tools are available in other languages for the purpose of accelerating learning of the posterior distribution or posterior sampling. Two such examples, written in Python, approximate the log-posterior distribution directly with a Gaussian process: [PyVBMC](https://github.com/acerbilab/pyvbmc) [@Huggins:2023], which additionally uses variational approximations to calculate the normalization constant, and [GPry](https://github.com/jonaselgammal/GPry) [@Gammal:2022], which iteratively trains the GP with an active training-point selection algorithm. Such algorithms are distinct from CES, which approximates the parameter-to-data map with the Gaussian process and advocates ensemble Kalman methods to select training points.

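To make the ABC mechanism described in the changed paragraph above concrete, here is a minimal rejection-ABC sketch in Julia. It is illustrative only: the forward model `G`, the Gaussian prior, the Euclidean distance, and the tolerance `ε` are all hypothetical stand-ins, and this is not the GpABC or ApproxBayes.jl API.

```julia
using Random
Random.seed!(42)  # reproducibility for this toy example

# Hypothetical forward model: maps a parameter vector θ to noisy summary statistics.
G(θ) = [θ[1] + 0.1 * randn(), θ[1]^2 + 0.1 * randn()]

# Scalar-valued objective that replaces the likelihood: a distance between
# simulated and observed summary statistics.
distance(s, y) = sqrt(sum((s .- y) .^ 2))

# Rejection ABC: draw θ from the prior, simulate data, and keep θ only if
# the simulation lands within tolerance ε of the observations y.
function rejection_abc(y; ε = 0.5, n_draws = 10_000)
    accepted = Vector{Vector{Float64}}()
    for _ in 1:n_draws
        θ = [randn()]                      # sample from a standard normal prior
        s = G(θ)                           # simulate summary statistics
        distance(s, y) < ε && push!(accepted, θ)
    end
    return accepted                        # approximate posterior sample
end

y_obs = [1.0, 1.0]
posterior_draws = rejection_abc(y_obs)
println("accepted ", length(posterior_draws), " draws")
```

The accepted parameters form an approximate posterior sample without a single likelihood evaluation; tightening `ε` improves the approximation at the cost of more rejections.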
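Similarly, the contrast between emulating the log-posterior directly (the PyVBMC/GPry style) and emulating the parameter-to-data map before evaluating an explicit Gaussian likelihood (the CES style) can be sketched on a toy one-dimensional problem with a hand-rolled GP mean predictor. The forward map, training grid, and all names below are hypothetical, not the libraries' actual APIs.

```julia
using LinearAlgebra

# Squared-exponential kernel and plain GP regression mean prediction.
k(a, b; ℓ = 0.5) = exp(-(a - b)^2 / (2ℓ^2))
function gp_mean(xtrain, ytrain, xtest; σ2 = 1e-6)
    K = [k(a, b) for a in xtrain, b in xtrain] + σ2 * I   # training covariance
    kstar = [k(x, a) for x in xtest, a in xtrain]          # cross-covariance
    return kstar * (K \ ytrain)                            # posterior mean
end

θ_train = collect(-2.0:0.5:2.0)   # hypothetical design points
y_obs, Γ = 1.0, 0.1               # observation and noise variance
G(θ) = θ^2                        # hypothetical forward map

# Style 1 (PyVBMC / GPry): emulate the log-posterior surface directly.
logpost(θ) = -(y_obs - G(θ))^2 / (2Γ) - θ^2 / 2
lp_emulated = gp_mean(θ_train, logpost.(θ_train), [0.9])

# Style 2 (CES): emulate the parameter-to-data map G, then evaluate an
# explicit Gaussian likelihood with the emulator inside it.
G_emulated(θ) = gp_mean(θ_train, G.(θ_train), [θ])[1]
lp_via_ces(θ) = -(y_obs - G_emulated(θ))^2 / (2Γ) - θ^2 / 2

println(lp_emulated[1], " ≈ ", lp_via_ces(0.9))
```

In the second style the Gaussian likelihood stays explicit and only `G` is replaced by its emulator, which is the structure CES exploits for high-dimensional data.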
