FSBO

Few-Shot Bayesian Optimization

This repository contains the implementation of FSBO (Few-Shot Bayesian Optimization, ICLR 2021) applied to the HPO-B benchmark. Please download the HPO-B benchmark data in order to reproduce the results.
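For reference, here is a minimal sketch of loading one task from the downloaded data. It assumes the JSON layout of the public HPO-B release (search space id → dataset id → "X"/"y" arrays); the directory, file name, and keys may need adjusting to your local copy:

import json
import numpy as np

# Assumed location and layout of the HPO-B meta-test data; adjust as needed.
with open("hpob-data/meta-test-dataset.json") as f:
    meta_test = json.load(f)

# One (search space, dataset) pair defines an HPO task:
# X holds evaluated hyperparameter configurations, y their observed responses.
X = np.array(meta_test["6767"]["31"]["X"])
y = np.array(meta_test["6767"]["31"]["y"])
print(X.shape, y.shape)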

Folder Structure:

  • ./ : Main code base
  • checkpoints : Meta-trained models per search space with the default configuration from the code base.
  • results : Results after meta-testing the pretrained models
  • benchmark_results : JSON files for plotting
  • plots : Results plots

Requirements

  • GPyTorch 1.4.2
  • PyTorch 1.8.1
  • NumPy 1.20.3
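For example, these pinned versions can be installed from PyPI under their standard package names (adjust the torch install to your CUDA setup if needed):

pip install gpytorch==1.4.2 torch==1.8.1 numpy==1.20.3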

Usage

  • Train on a search space:

python fsbo_metatrain.py --space_id 6767

  • Test on a search space and dataset:

python fsbo_test.py --space_id 6767 --dataset_id 31

  • Aggregate the results into a JSON file and plot them:

python generate_json.py

python fsbo_benchmark_plot.py
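The surrogate that fsbo_metatrain.py meta-trains and fsbo_test.py adapts at test time is a Gaussian process with a learned deep kernel. The snippet below is an illustrative sketch of such a deep-kernel GP in GPyTorch, not the exact classes from this code base; the FeatureExtractor and DeepKernelGP names, layer sizes, and dummy data are placeholders:

import torch
import gpytorch

class FeatureExtractor(torch.nn.Module):
    # Small MLP mapping hyperparameter configurations to a latent space
    # on which the GP kernel operates (the "deep kernel" part of FSBO).
    def __init__(self, input_dim, latent_dim=32):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(input_dim, 128), torch.nn.ReLU(),
            torch.nn.Linear(128, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class DeepKernelGP(gpytorch.models.ExactGP):
    # Exact GP whose kernel is evaluated on the extracted features.
    def __init__(self, train_x, train_y, likelihood, feature_extractor):
        super().__init__(train_x, train_y, likelihood)
        self.feature_extractor = feature_extractor
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        z = self.feature_extractor(x)
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(z), self.covar_module(z)
        )

# Fit on a small support set by maximizing the exact marginal log-likelihood.
train_x = torch.randn(20, 8)   # 20 configurations with 8 hyperparameters (dummy data)
train_y = torch.randn(20)      # their (normalized) responses (dummy data)
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = DeepKernelGP(train_x, train_y, likelihood, FeatureExtractor(input_dim=8))

model.train()
likelihood.train()
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    optimizer.zero_grad()
    loss = -mll(model(train_x), train_y)
    loss.backward()
    optimizer.step()

Roughly, in FSBO the feature extractor is meta-trained across the tasks of a search space, while the GP is re-fitted on the small observation set of each new task at test time.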

Results in Discrete Space

FSBO is the model version used in the HPO-B paper; FSBO2 is the model version used in this refactored code base.

(plot: results in the discrete search spaces)

Results in Continuous Space

(plot: results in the continuous search spaces)

Cite us

This code is a refactoring of the original code used in the HPO-B paper. If you use this code, please cite us:


@article{pineda2021hpob,
  author    = {Sebastian Pineda{-}Arango and
               Hadi S. Jomaa and
               Martin Wistuba and
               Josif Grabocka},
  title     = {{HPO-B:} {A} Large-Scale Reproducible Benchmark for Black-Box {HPO}
               based on OpenML},
  journal   = {Neural Information Processing Systems (NeurIPS) Track on Datasets and Benchmarks},
  year      = {2021}
}
