
API updates, more robust model fitting

@Balandat released this on 28 Jun 00:35

Breaking changes

  • Rename botorch.qmc to botorch.sampling, and move MC samplers from
    acquisition.sampler to botorch.sampling.samplers (#172); see the import
    sketch below
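
For migrating imports, a minimal sketch; the sampler class names and the
num_samples argument here are assumptions based on the module contents of
this era, not part of the release notes themselves:

```python
# MC samplers now live in botorch.sampling.samplers (previously under
# botorch.qmc / acquisition.sampler). Class names are assumptions.
from botorch.sampling.samplers import IIDNormalSampler, SobolQMCNormalSampler

# Construct a quasi-MC sampler; the num_samples kwarg is assumed from
# the API of this era.
sampler = SobolQMCNormalSampler(num_samples=512)
```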

New Features

  • Add condition_on_observations and fantasize to the Model-level API (#173);
    see the sketch after this list
  • Support pending observations generically for all MCAcquisitionFunctions (#176)
  • Add fidelity kernel for training iterations/training data points (#178)
  • Support optimization constraints across q-batches (e.g., to enforce sample
    budget constraints) (2a95a6c)
  • Add ModelList <-> Batched Model converter (#187); also shown in the sketch
    after this list
  • New test functions
    • basic: neg_ackley, cosine8, neg_levy, neg_rosenbrock, neg_shekel
      (e26dc75)
    • for multi-fidelity BO: neg_aug_branin, neg_aug_hartmann6,
      neg_aug_rosenbrock (ec4aca7)
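
A minimal sketch of the new Model-level API and the converter. The
SingleTaskGP surrogate, the converter function names (which follow the current
botorch.models.converter module), and all tensor shapes are assumptions for
illustration:

```python
import torch
from botorch.models import SingleTaskGP  # assumed model class
from botorch.models.converter import (  # converter names are assumptions
    batched_to_model_list,
    model_list_to_batched,
)
from botorch.sampling.samplers import SobolQMCNormalSampler

# Hypothetical training data: 10 points in 2 dimensions.
train_X = torch.rand(10, 2)
train_Y = torch.rand(10, 1)
model = SingleTaskGP(train_X, train_Y)

# Condition the model on new observations (hyperparameters stay fixed).
conditioned = model.condition_on_observations(torch.rand(3, 2), torch.rand(3, 1))

# Fantasize: draw posterior samples at candidate points and condition the
# model on them; the sampler argument is assumed from the current signature.
fantasy_model = model.fantasize(
    X=torch.rand(4, 2),
    sampler=SobolQMCNormalSampler(num_samples=16),
)

# ModelList <-> batched conversion: round-trip a batched two-output model
# through a ModelListGP representation.
batched = SingleTaskGP(torch.rand(10, 2), torch.rand(10, 2))
as_list = batched_to_model_list(batched)
roundtrip = model_list_to_batched(as_list)
```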

Improved functionality

  • More robust model fitting
    • Catch gpytorch numerical issues and return NaN to the optimizer (#184)
    • Restart optimization upon failure by sampling hyperparameters from their prior (#188)
    • Sequentially fit batched and ModelListGP models by default (#189)
    • Change minimum inferred noise level (e2c64fe)
  • Introduce an optional batch limit in joint_optimize to increase the
    scalability of parallel optimization (baab578); see the sketch after this list
  • Change constructor of ModelListGP to comply with GPyTorch’s IndependentModelList
    constructor (a6cf739)
  • Use torch.random (rather than random) to set the default seed for samplers,
    making sampling reproducible when setting torch.manual_seed
    (ae507ad); see the sketch after this list
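
A sketch of how these changes surface in user code. The fit_gpytorch_model
helper, qExpectedImprovement, and the options={"batch_limit": ...} key are
assumptions based on the current API; only joint_optimize and the behaviors
described above come from these notes:

```python
import torch
from botorch.acquisition import qExpectedImprovement  # assumed import path
from botorch.fit import fit_gpytorch_model  # assumed fitting helper
from botorch.models import SingleTaskGP  # assumed model class
from botorch.optim import joint_optimize
from gpytorch.mlls import ExactMarginalLogLikelihood

torch.manual_seed(0)  # per the notes, now also fixes default sampler seeds

train_X = torch.rand(20, 3)
train_Y = torch.rand(20, 1)
model = SingleTaskGP(train_X, train_Y)
mll = ExactMarginalLogLikelihood(model.likelihood, model)

# Per the notes, numerical failures during fitting are caught and
# optimization restarts from hyperparameters sampled from their priors;
# this happens internally, with no user action required.
fit_gpytorch_model(mll)

qEI = qExpectedImprovement(model, best_f=train_Y.max())
bounds = torch.stack([torch.zeros(3), torch.ones(3)])

# The optional batch limit caps how many restart candidates are evaluated
# jointly at a time; passing it via options is an assumption.
candidates = joint_optimize(
    acq_function=qEI,
    bounds=bounds,
    q=2,
    num_restarts=10,
    raw_samples=128,
    options={"batch_limit": 5},
)
```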

Performance Improvements

  • Use einsum in LinearMCObjective (22ca295); a usage sketch follows this list
  • Change the default Sobol sample size for MCAcquisitionFunctions to a power
    of 2 (base-2) for better MC integration performance (5d8e818)
  • Add ability to fit models in SumMarginalLogLikelihood sequentially (and make
    that the default setting) (#183)
  • Do not construct the full covariance matrix when computing posterior of
    single-output BatchedMultiOutputGPyTorchModel (#185)
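
The einsum change is internal to LinearMCObjective; for context, a minimal
usage sketch with hypothetical weights and sample shapes (the import path is
assumed from the current module layout):

```python
import torch
from botorch.acquisition.objective import LinearMCObjective  # assumed path

# Scalarize two outcomes with hypothetical weights; this weighted sum over
# the outcome dimension is the computation that now uses einsum internally.
objective = LinearMCObjective(weights=torch.tensor([0.75, 0.25]))

samples = torch.rand(16, 4, 2)   # (mc_samples x q x num_outcomes), illustrative
scalarized = objective(samples)  # -> shape (16, 4)
```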

Bug fixes

  • Properly handle observation_noise kwarg for BatchedMultiOutputGPyTorchModels (#182)
  • Fix an issue where f_best was always the maximum for NoisyExpectedImprovement
    (410de58)
  • Fix a bug and numerical issues in initialize_q_batch
    (844dcd1)
  • Fix numerical issues with inv_transform for qMC sampling (#162)

Other

  • Bump GPyTorch minimum requirement to 0.3.3