Releases · pytorch/botorch
Max-value entropy search, multi-fidelity (cost-aware) optimization
This release adds the popular Max-value Entropy Search (MES) acquisition function, as well as support for multi-fidelity Bayesian optimization via both the Knowledge Gradient (KG) and MES.
Compatibility
New Features
- Add cost-aware KnowledgeGradient (`qMultiFidelityKnowledgeGradient`) for multi-fidelity optimization (#292).
- Add `qMaxValueEntropy` and `qMultiFidelityMaxValueEntropy` max-value entropy search acquisition functions (#298); a usage sketch follows this list.
- Add `subset_output` functionality to (most) models (#324).
- Add outcome transforms and input transforms (#321).
- Add `outcome_transform` kwarg to model constructors for automatic outcome transformation and un-transformation (#327).
- Add cost-aware utilities for cost-sensitive acquisition functions (#289).
- Add `DeterministicModel` and `DeterministicPosterior` abstractions (#288).
- Add `AffineFidelityCostModel` (f838eac).
- Add `project_to_target_fidelity` and `expand_trace_observations` utilities for use in multi-fidelity optimization (1ca12ac).
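To make the moving parts concrete, here is a rough sketch (not from the release notes) combining the new `outcome_transform` kwarg with `qMaxValueEntropy`; the data, hyperparameters, and import paths are assumptions based on the public BoTorch API:

```python
import torch
from gpytorch.mlls import ExactMarginalLogLikelihood
from botorch.acquisition.max_value_entropy_search import qMaxValueEntropy
from botorch.fit import fit_gpytorch_model
from botorch.models import SingleTaskGP
from botorch.models.transforms.outcome import Standardize
from botorch.optim import optimize_acqf

# Toy training data on the unit square (illustrative only).
train_X = torch.rand(20, 2, dtype=torch.double)
train_Y = torch.sin(6 * train_X).sum(dim=-1, keepdim=True)

# The outcome transform standardizes Y for fitting and un-standardizes
# posterior predictions automatically (#321, #327).
model = SingleTaskGP(train_X, train_Y, outcome_transform=Standardize(m=1))
fit_gpytorch_model(ExactMarginalLogLikelihood(model.likelihood, model))

# MES estimates the distribution of the max over a discrete candidate set (#298).
candidate_set = torch.rand(1000, 2, dtype=torch.double)
mes = qMaxValueEntropy(model, candidate_set)

bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)
candidate, acq_value = optimize_acqf(
    mes, bounds=bounds, q=1, num_restarts=10, raw_samples=256
)
```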
Performance Improvements
- New `prune_baseline` option for pruning `X_baseline` in `qNoisyExpectedImprovement` (#287); a short example follows this list.
- Do not use approximate MLL computation for deterministic fitting (#314).
- Avoid re-evaluating the acquisition function in `gen_candidates_torch` (#319).
- Use CPU where possible in `gen_batch_initial_conditions` to avoid memory issues on the GPU (#323).
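As a small example of the first item (the model and data are placeholders, not from the notes):

```python
import torch
from botorch.models import SingleTaskGP
from botorch.acquisition import qNoisyExpectedImprovement

train_X = torch.rand(50, 3, dtype=torch.double)
train_Y = torch.randn(50, 1, dtype=torch.double)
model = SingleTaskGP(train_X, train_Y)

# prune_baseline=True drops baseline points that are unlikely to yield the
# incumbent best, shrinking the joint posterior qNEI must sample (#287).
qnei = qNoisyExpectedImprovement(model, X_baseline=train_X, prune_baseline=True)
```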
Bug fixes
- Properly register `NoiseModelAddedLossTerm` in `HeteroskedasticSingleTaskGP` (671c93a).
- Fix batch mode for `MultiTaskGPyTorchModel` (#316).
- Honor `propagate_grads` argument in `fantasize` of `FixedNoiseGP` (#303).
- Properly handle `diag` arg in `LinearTruncatedFidelityKernel` (#320).
Other changes
- Consolidate and simplify multi-fidelity models (#308).
- New license header style (#309).
- Validate shape of `best_f` in `qExpectedImprovement` (#299).
- Support specifying observation noise explicitly for all models (#256).
- Add `num_outputs` property to the `Model` API (#330); a snippet combining it with `subset_output` follows this list.
- Validate output shape of models upon instantiating acquisition functions (#331).
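A quick snippet showing the new introspection hooks together (the two-outcome model is illustrative):

```python
import torch
from botorch.models import SingleTaskGP

train_X = torch.rand(20, 2, dtype=torch.double)
train_Y = torch.randn(20, 2, dtype=torch.double)  # two outcomes

model = SingleTaskGP(train_X, train_Y)
assert model.num_outputs == 2  # new Model API property (#330)

# subset_output returns a model over a subset of the outcomes (#324).
submodel = model.subset_output([0])
assert submodel.num_outputs == 1
```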
Tests
Knowledge Gradient (one-shot), various maintenance
Breaking Changes
- Require explicit output dimensions in BoTorch models (#238)
- Make `joint_optimize` / `sequential_optimize` return acquisition function values (#149) [note deprecation notice below]
- `standardize` now works on the second-to-last dimension (#263); a brief snippet follows this list
- Refactor synthetic test functions (#273)
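For the `standardize` change, a brief illustration (the shapes are my own example):

```python
import torch
from botorch.utils.transforms import standardize

Y = torch.randn(5, 20, 1)  # batch_shape x n x m
# As of #263, standardization is over n (the second-to-last dimension),
# computed per batch and per outcome.
Y_std = standardize(Y)
```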
New Features
- Add `qKnowledgeGradient` acquisition function (#272, #276); a usage sketch follows this list
- Add input scaling check to standard models (#267)
- Add `cyclic_optimize`, convergence criterion class (#269)
- Add `settings.debug` context manager (#242)
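A rough sketch of the one-shot `qKnowledgeGradient` (data and settings are illustrative; this assumes `optimize_acqf` handles the one-shot formulation, per the consolidation noted under Deprecations below):

```python
import torch
from gpytorch.mlls import ExactMarginalLogLikelihood
from botorch.acquisition import qKnowledgeGradient
from botorch.fit import fit_gpytorch_model
from botorch.models import SingleTaskGP
from botorch.optim import optimize_acqf

train_X = torch.rand(15, 2, dtype=torch.double)
train_Y = torch.randn(15, 1, dtype=torch.double)
model = SingleTaskGP(train_X, train_Y)
fit_gpytorch_model(ExactMarginalLogLikelihood(model.likelihood, model))

# One-shot KG: fantasy solutions are optimized jointly with the candidate.
qkg = qKnowledgeGradient(model, num_fantasies=32)
bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)
candidate, acq_value = optimize_acqf(
    qkg, bounds=bounds, q=1, num_restarts=5, raw_samples=128
)
```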
Deprecations
- Consolidate `sequential_optimize` and `joint_optimize` into `optimize_acqf` (#150); a migration sketch follows
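Migration is roughly as follows (the model and acquisition function are stand-ins; the `sequential` flag mirrors the old joint/sequential split):

```python
import torch
from botorch.models import SingleTaskGP
from botorch.acquisition import qExpectedImprovement
from botorch.optim import optimize_acqf

train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = torch.randn(10, 1, dtype=torch.double)
model = SingleTaskGP(train_X, train_Y)
acqf = qExpectedImprovement(model, best_f=train_Y.max())

bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)
# Before: joint_optimize(...) or sequential_optimize(...).
# After: one entry point; sequential=True recovers the sequential behavior.
candidates, acq_values = optimize_acqf(
    acqf, bounds=bounds, q=3, num_restarts=10, raw_samples=256, sequential=False
)
```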
Bug fixes
- Properly pass noise levels to GPs using a `FixedNoiseGaussianLikelihood` (#241) [requires gpytorch > 0.3.5]
- Fix q-batch dimension issue in `ConstrainedExpectedImprovement` (6c06718)
- Fix parameter constraint issues on GPU (#260)
Minor changes
- Add decorator for concatenating pending points (#240)
- Draw independent sample from prior for each hyperparameter (#244)
- Allow `dim > 1111` for `gen_batch_initial_conditions` (#249)
- Allow `optimize_acqf` to use `q>1` for `AnalyticAcquisitionFunction` (#257)
- Allow excluding parameters in fit functions (#259)
- Track the final iteration objective value in `fit_gpytorch_scipy` (#258)
- Error out on unexpected dims in parameter constraint generation (#270)
- Compute acquisition values in `gen_` functions w/o grad (#274)
Tests
Compatibility and Maintenance release
Compatibility
- Updates to support breaking changes in PyTorch to boolean masks and tensor comparisons (#224).
- Require PyTorch >=1.2 (#225).
- Require GPyTorch >=0.3.5 (itself a compatibility release).
New Features
- Add `FixedFeatureAcquisitionFunction` wrapper that simplifies optimizing acquisition functions over a subset of input features (#219); a sketch follows this list.
- Add `ScalarizedObjective` for scalarizing posteriors (#210).
- Change default optimization behavior to use L-BFGS-B for box constraints (#207).
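A sketch of the wrapper (shapes follow the later explicit-output-dimension convention; the pinned value is arbitrary):

```python
import torch
from botorch.models import SingleTaskGP
from botorch.acquisition import ExpectedImprovement
from botorch.acquisition.fixed_feature import FixedFeatureAcquisitionFunction

train_X = torch.rand(10, 3, dtype=torch.double)
train_Y = torch.randn(10, 1, dtype=torch.double)
model = SingleTaskGP(train_X, train_Y)
ei = ExpectedImprovement(model, best_f=train_Y.max())

# Optimize EI over features 0 and 1 only, pinning feature 2 at 0.5.
ei_fixed = FixedFeatureAcquisitionFunction(
    acq_function=ei, d=3, columns=[2], values=[0.5]
)
X_sub = torch.rand(5, 1, 2, dtype=torch.double)  # b x q x (d - #fixed)
vals = ei_fixed(X_sub)  # evaluates EI on the full 3-dim inputs
```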
Bug fixes
- Add validation to candidate generation (#213), making sure constraints are strictly satisfied (rather than just up to numerical accuracy of the optimizer).
Minor changes
- Introduce `AcquisitionObjective` base class (#220).
- Add `propagate_grads` context manager, replacing the `propagate_grads` kwarg in model `posterior()` calls (#221).
- Add `batch_initial_conditions` argument to `joint_optimize()` for warm-starting the optimization (ec3365a); a warm-start sketch follows this list.
- Add `return_best_only` argument to `joint_optimize()` (#216). Useful for implementing advanced warm-starting procedures.
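A warm-starting sketch against this release's `joint_optimize` API (later consolidated into `optimize_acqf`); the initial conditions are arbitrary and shapes follow the later explicit-output-dimension convention:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.acquisition import qExpectedImprovement
from botorch.optim import joint_optimize

train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = torch.randn(10, 1, dtype=torch.double)
model = SingleTaskGP(train_X, train_Y)
acqf = qExpectedImprovement(model, best_f=train_Y.max())

bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)
# Hand-picked restart points (num_restarts x q x d) replace the random
# initialization; return_best_only=False keeps all restart results.
ics = torch.rand(5, 2, 2, dtype=torch.double)
candidates = joint_optimize(
    acqf, bounds=bounds, q=2, num_restarts=5, raw_samples=32,
    batch_initial_conditions=ics, return_best_only=False,
)
```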
Maintenance release
Bug fixes
- Avoid PyTorch bug resulting in bad gradients on GPU by requiring GPyTorch >= 0.3.4
- Fixes to resampling behavior in MCSamplers (#204)
Experimental Features
API updates, more robust model fitting
Breaking changes
- Rename `botorch.qmc` to `botorch.sampling`, move MC samplers from `acquisition.sampler` to `botorch.sampling.samplers` (#172)
New Features
- Add `condition_on_observations` and `fantasize` to the Model level API (#173)
- Support pending observations generically for all `MCAcquisitionFunctions` (#176)
- Add fidelity kernel for training iterations/training data points (#178)
- Support for optimization constraints across `q`-batches (to support things like sample budget constraints) (2a95a6c)
- Add ModelList <-> Batched Model converter (#187); a sketch follows this list
- New test functions
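A sketch of the converter round trip, assuming the utilities live in `botorch.models.converter` as in later releases:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.models.converter import batched_to_model_list, model_list_to_batched

train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = torch.randn(10, 2, dtype=torch.double)  # two outcomes

batched = SingleTaskGP(train_X, train_Y)
model_list = batched_to_model_list(batched)  # ModelListGP of two single-output GPs
round_trip = model_list_to_batched(model_list)  # back to a single batched model
```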
Improved functionality:
- More robust model fitting
- Introduce optional batch limit in `joint_optimize` to increase scalability of parallel optimization (baab578)
- Change constructor of `ModelListGP` to comply with GPyTorch’s `IndependentModelList` constructor (a6cf739)
- Use `torch.random` to set the default seed for samplers (rather than `random`) to make sampling reproducible when setting `torch.manual_seed` (ae507ad)
Performance Improvements
- Use `einsum` in `LinearMCObjective` (22ca295)
- Change default Sobol sample size for `MCAcquisitionFunctions` to be base-2 for better MC integration performance (5d8e818)
- Add ability to fit models in `SumMarginalLogLikelihood` sequentially (and make that the default setting) (#183)
- Do not construct the full covariance matrix when computing the posterior of a single-output `BatchedMultiOutputGPyTorchModel` (#185)
Bug fixes
- Properly handle `observation_noise` kwarg for `BatchedMultiOutputGPyTorchModels` (#182)
- Fix an issue where `f_best` was always max for `NoisyExpectedImprovement` (410de58)
- Fix bug and numerical issues in `initialize_q_batch` (844dcd1)
- Fix numerical issues with `inv_transform` for qMC sampling (#162)
Other
- Bump GPyTorch minimum requirement to 0.3.3
Initial beta release
First public release.