
Releases: pytorch/botorch

Max-value entropy search, multi-fidelity (cost-aware) optimization

20 Dec 21:57

This release adds the popular Max-value Entropy Search (MES) acquisition function, as well as support for multi-fidelity Bayesian optimization via both the Knowledge Gradient (KG) and MES.

Compatibility

  • Require PyTorch >=1.3.1 (#313).
  • Require GPyTorch >=1.0 (#342).

New Features

  • Add cost-aware KnowledgeGradient (qMultiFidelityKnowledgeGradient) for multi-fidelity optimization (#292).
  • Add qMaxValueEntropy and qMultiFidelityMaxValueEntropy max-value entropy search acquisition functions (#298); a usage sketch for both appears after this list.
  • Add subset_output functionality to (most) models (#324).
  • Add outcome transforms and input transforms (#321).
  • Add outcome_transform kwarg to model constructors for automatic outcome transformation and un-transformation (#327).
  • Add cost-aware utilities for cost-sensitive acquisition functions (#289).
  • Add DeterministicModel and DeterministicPosterior abstractions (#288).
  • Add AffineFidelityCostModel (f838eac).
  • Add project_to_target_fidelity and expand_trace_observations utilities for use in multi-fidelity optimization (1ca12ac).
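
The sketch below shows how these pieces might fit together. The toy data, the choice of column 2 as the fidelity parameter, and the cost parameters are illustrative assumptions, not part of the release:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.models.cost import AffineFidelityCostModel
from botorch.acquisition.cost_aware import InverseCostWeightedUtility
from botorch.acquisition.knowledge_gradient import qMultiFidelityKnowledgeGradient
from botorch.acquisition.max_value_entropy_search import qMaxValueEntropy
from botorch.acquisition.utils import project_to_target_fidelity

# Toy data: two design dimensions plus a fidelity parameter in column 2.
train_X = torch.rand(20, 3)
train_Y = torch.sin(6 * train_X[:, :2]).sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)

# MES scores candidates by the information they carry about the unknown
# maximum; the max-value distribution is approximated over a discrete set.
candidate_set = torch.rand(1000, 3)
mes = qMaxValueEntropy(model, candidate_set=candidate_set)

# Cost-aware multi-fidelity KG: weight the value of information by the
# inverse evaluation cost, and project candidates to the target fidelity
# when assessing their value.
cost_model = AffineFidelityCostModel(fidelity_weights={2: 1.0}, fixed_cost=5.0)
mfkg = qMultiFidelityKnowledgeGradient(
    model=model,
    num_fantasies=64,
    cost_aware_utility=InverseCostWeightedUtility(cost_model=cost_model),
    project=lambda X: project_to_target_fidelity(X, target_fidelities={2: 1.0}),
)
```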

Performance Improvements

  • New prune_baseline option for pruning X_baseline in qNoisyExpectedImprovement (#287); see the example after this list.
  • Do not use approximate MLL computation for deterministic fitting (#314).
  • Avoid re-evaluating the acquisition function in gen_candidates_torch (#319).
  • Use CPU where possible in gen_batch_initial_conditions to avoid memory issues on the GPU (#323).
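
The new pruning behavior is a single kwarg; a minimal sketch follows (the toy model exists only to make the snippet self-contained):

```python
import torch
from botorch.models import SingleTaskGP
from botorch.acquisition.monte_carlo import qNoisyExpectedImprovement

train_X = torch.rand(50, 2)
train_Y = -(train_X ** 2).sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)

# prune_baseline drops baseline points with negligible probability of being
# the incumbent best, shrinking the joint posterior qNEI has to sample.
qnei = qNoisyExpectedImprovement(model, X_baseline=train_X, prune_baseline=True)
```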

Bug fixes

  • Properly register NoiseModelAddedLossTerm in HeteroskedasticSingleTaskGP (671c93a).
  • Fix batch mode for MultiTaskGPyTorchModel (#316).
  • Honor propagate_grads argument in fantasize of FixedNoiseGP (#303).
  • Properly handle diag arg in LinearTruncatedFidelityKernel (#320).

Other changes

  • Consolidate and simplify multi-fidelity models (#308).
  • New license header style (#309).
  • Validate shape of best_f in qExpectedImprovement (#299).
  • Support specifying observation noise explicitly for all models (#256).
  • Add num_outputs property to the Model API (#330).
  • Validate output shape of models upon instantiating acquisition functions (#331).

Tests

  • Silence warnings outside of explicit tests (#290).
  • Enforce full sphinx docs coverage in CI (#294).

Knowledge Gradient (one-shot), various maintenance

02 Oct 00:59

Breaking Changes

  • Require explicit output dimensions in BoTorch models (#238)
  • Make joint_optimize / sequential_optimize return acquisition function
    values (#149) [note deprecation notice below]
  • standardize now works on the second-to-last dimension (#263)
  • Refactor synthetic test functions (#273)

New Features

  • Add qKnowledgeGradient acquisition function (#272, #276); see the sketch after this list
  • Add input scaling check to standard models (#267)
  • Add cyclic_optimize, convergence criterion class (#269)
  • Add settings.debug context manager (#242)
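
A minimal construction sketch for the one-shot Knowledge Gradient (the toy data is an assumption for illustration):

```python
import torch
from botorch.models import SingleTaskGP
from botorch.acquisition.knowledge_gradient import qKnowledgeGradient

train_X = torch.rand(20, 2)
train_Y = torch.sin(6 * train_X).sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)

# One-shot KG folds the inner "fantasy" optimization into a single augmented
# optimization problem, so no nested optimization loop is required.
qkg = qKnowledgeGradient(model, num_fantasies=64)
```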

Deprecations

  • Consolidate sequential_optimize and joint_optimize into optimize_acqf
    (#150); see the migration sketch below
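
A hedged migration sketch, reusing qkg and model from the snippet above:

```python
import torch
from botorch.optim import optimize_acqf

bounds = torch.stack([torch.zeros(2), torch.ones(2)])
# Joint optimization of all candidates (the old joint_optimize behavior);
# passing sequential=True with q > 1 recovers the old greedy
# sequential_optimize behavior instead.
candidates, acq_value = optimize_acqf(
    acq_function=qkg,
    bounds=bounds,
    q=1,
    num_restarts=10,
    raw_samples=512,
)
```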

Bug fixes

  • Properly pass noise levels to GPs using a FixedNoiseGaussianLikelihood (#241)
    [requires gpytorch > 0.3.5]
  • Fix q-batch dimension issue in ConstrainedExpectedImprovement
    (6c06718)
  • Fix parameter constraint issues on GPU (#260)

Minor changes

  • Add decorator for concatenating pending points (#240)
  • Draw independent sample from prior for each hyperparameter (#244)
  • Allow dim > 1111 for gen_batch_initial_conditions (#249)
  • Allow optimize_acqf to use q>1 for AnalyticAcquisitionFunction (#257)
  • Allow excluding parameters in fit functions (#259)
  • Track the final iteration objective value in fit_gpytorch_scipy (#258)
  • Error out on unexpected dims in parameter constraint generation (#270)
  • Compute acquisition values in gen_ functions w/o grad (#274)

Tests

  • Introduce BotorchTestCase to simplify test code (#243)
  • Refactor tests to have monolithic cuda tests (#261)

Compatibility and Maintenance release

10 Aug 23:27

Compatibility

  • Updates to support breaking changes in PyTorch to boolean masks and tensor
    comparisons (#224).
  • Require PyTorch >=1.2 (#225).
  • Require GPyTorch >=0.3.5 (itself a compatibility release).

New Features

  • Add FixedFeatureAcquisitionFunction wrapper that simplifies optimizing
    acquisition functions over a subset of input features (#219); see the
    sketch after this list.
  • Add ScalarizedObjective for scalarizing posteriors (#210).
  • Change default optimization behavior to use L-BFGS-B for box-constrained
    problems (#207).
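
For example, fixing one of three input features before optimization (the toy model and the value 0.5 are illustrative assumptions):

```python
import torch
from botorch.models import SingleTaskGP
from botorch.acquisition import ExpectedImprovement
from botorch.acquisition.fixed_feature import FixedFeatureAcquisitionFunction

train_X = torch.rand(20, 3)
train_Y = train_X.sum(dim=-1, keepdim=True)
model = SingleTaskGP(train_X, train_Y)
ei = ExpectedImprovement(model, best_f=train_Y.max())

# Fix feature 2 at 0.5; the wrapper re-inserts the fixed column before
# evaluating EI, so ei_fixed accepts 2-dimensional inputs.
ei_fixed = FixedFeatureAcquisitionFunction(
    acq_function=ei, d=3, columns=[2], values=[0.5]
)
```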

Bug fixes

  • Add validation to candidate generation (#213), making sure constraints are
    strictly satisfied (rather than just up to the numerical accuracy of the
    optimizer).

Minor changes

  • Introduce AcquisitionObjective base class (#220).
  • Add propagate_grads context manager, replacing the propagate_grads kwarg in
    model posterior() calls (#221); see the example after this list.
  • Add batch_initial_conditions argument to joint_optimize() for
    warm-starting the optimization (ec3365a).
  • Add return_best_only argument to joint_optimize() (#216). Useful for
    implementing advanced warm-starting procedures.
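
A short sketch of the new context manager (the toy model only makes the snippet self-contained):

```python
import torch
from botorch import settings
from botorch.models import SingleTaskGP

model = SingleTaskGP(torch.rand(10, 2), torch.rand(10, 1))

# Gradients propagate through posterior computations inside the block,
# replacing the former propagate_grads kwarg on posterior().
with settings.propagate_grads(True):
    posterior = model.posterior(torch.rand(5, 2))
```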

Maintenance release

10 Jul 03:17

Bug fixes

  • Avoid PyTorch bug resulting in bad gradients on GPU by requiring GPyTorch >= 0.3.4
  • Fixes to resampling behavior in MCSamplers (#204)

Experimental Features

  • Linear truncated kernel for multi-fidelity Bayesian optimization (#192)
  • SingleTaskMultiFidelityGP for GP models that have fidelity parameters (#181);
    see the sketch after this list
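
A construction sketch; note that this constructor's signature has changed across BoTorch versions, and the data_fidelities kwarg below reflects a more recent API rather than this release:

```python
import torch
from botorch.models.gp_regression_fidelity import SingleTaskMultiFidelityGP

# Toy data: column 2 is a fidelity parameter, handled internally by the
# linear truncated fidelity kernel.
train_X = torch.rand(20, 3)
train_Y = torch.rand(20, 1)
model = SingleTaskMultiFidelityGP(train_X, train_Y, data_fidelities=[2])
```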

API updates, more robust model fitting

28 Jun 00:35

Breaking changes

  • Rename botorch.qmc to botorch.sampling, and move the MC samplers from
    acquisition.sampler to botorch.sampling.samplers (#172)

New Features

  • Add condition_on_observations and fantasize to the Model level API (#173);
    see the example after this list
  • Support pending observations generically for all MCAcquisitionFunctions (#176)
  • Add fidelity kernel for training iterations/training data points (#178)
  • Support for optimization constraints across q-batches (to support things like
    sample budget constraints) (2a95a6c)
  • Add ModelList <-> Batched Model converter (#187)
  • New test functions
    • basic: neg_ackley, cosine8, neg_levy, neg_rosenbrock, neg_shekel
      (e26dc75)
    • for multi-fidelity BO: neg_aug_branin, neg_aug_hartmann6,
      neg_aug_rosenbrock (ec4aca7)
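
For instance, conditioning an existing model on new observations (the toy tensors are assumptions for illustration):

```python
import torch
from botorch.models import SingleTaskGP

model = SingleTaskGP(torch.rand(15, 2), torch.rand(15, 1))
new_X, new_Y = torch.rand(5, 2), torch.rand(5, 1)

# Returns a new model whose posterior conditions on the extra data without
# re-fitting hyperparameters; useful for look-ahead ("fantasy") reasoning.
conditioned_model = model.condition_on_observations(new_X, new_Y)
```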

Improved functionality

  • More robust model fitting
    • Catch gpytorch numerical issues and return NaN to the optimizer (#184)
    • Restart optimization upon failure by sampling hyperparameters from their prior (#188); see the fitting sketch after this list
    • Sequentially fit batched and ModelListGP models by default (#189)
    • Change minimum inferred noise level (e2c64fe)
  • Introduce an optional batch limit in joint_optimize to increase the
    scalability of parallel optimization (baab578)
  • Change constructor of ModelListGP to comply with GPyTorch’s IndependentModelList
    constructor (a6cf739)
  • Use torch.random to set the default seed for samplers (rather than random)
    to make sampling reproducible when setting torch.manual_seed
    (ae507ad)
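
A sketch of the retry behavior; it assumes fit_gpytorch_model accepts a max_retries kwarg capping the number of restarts (toy model shown for self-containedness):

```python
import torch
from botorch.fit import fit_gpytorch_model
from botorch.models import SingleTaskGP
from gpytorch.mlls import ExactMarginalLogLikelihood

model = SingleTaskGP(torch.rand(20, 2), torch.rand(20, 1))
mll = ExactMarginalLogLikelihood(model.likelihood, model)

# On an optimizer failure, fitting restarts from hyperparameters re-sampled
# from their priors (max_retries is an assumed kwarg capping the attempts).
fit_gpytorch_model(mll, max_retries=5)
```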

Performance Improvements

  • Use einsum in LinearMCObjective (22ca295)
  • Change the default Sobol sample size for MCAcquisitionFunctions to a power
    of 2 for better MC integration performance (5d8e818)
  • Add ability to fit models in SumMarginalLogLikelihood sequentially (and make
    that the default setting) (#183)
  • Do not construct the full covariance matrix when computing posterior of
    single-output BatchedMultiOutputGPyTorchModel (#185)

Bug fixes

  • Properly handle observation_noise kwarg for BatchedMultiOutputGPyTorchModels (#182)
  • Fix an issue where f_best was always max for NoisyExpectedImprovement
    (410de58)
  • Fix bug and numerical issues in initialize_q_batch
    (844dcd1)
  • Fix numerical issues with inv_transform for qMC sampling (#162)

Other

  • Bump GPyTorch minimum requirement to 0.3.3

Initial beta release

01 May 01:31

First public release.