Hypervolume Knowledge Gradient (HVKG)

Released by @Balandat on 09 Dec

New features

Hypervolume Knowledge Gradient (HVKG):

  • Add qHypervolumeKnowledgeGradient, which seeks to maximize the one-step difference in the hypervolume of a fixed-size, hypervolume-maximizing set after conditioning on the unknown observation(s) that would be received if X were evaluated (#1950, #1982, #2101). A minimal usage sketch follows this list.
  • Add tutorial on decoupled Multi-Objective Bayesian Optimization (MOBO) with HVKG (#2094).
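As a rough usage sketch (the toy data, reference point, and optimizer settings below are illustrative assumptions, not from the release), qHypervolumeKnowledgeGradient can be constructed from a multi-output model and optimized like other one-shot acquisition functions:

```python
import torch
from botorch.acquisition.multi_objective.hypervolume_knowledge_gradient import (
    qHypervolumeKnowledgeGradient,
)
from botorch.fit import fit_gpytorch_mll
from botorch.models import ModelListGP, SingleTaskGP
from botorch.optim import optimize_acqf
from gpytorch.mlls import SumMarginalLogLikelihood

# Toy two-objective training data on [0, 1]^2 (illustrative only).
train_X = torch.rand(10, 2, dtype=torch.double)
train_Y = torch.stack([train_X.sum(dim=-1), (1 - train_X).prod(dim=-1)], dim=-1)

# One independent GP per objective; a ModelListGP lets HVKG fantasize
# (and, in the decoupled setting, evaluate) each output separately.
model = ModelListGP(
    *(SingleTaskGP(train_X, train_Y[:, i : i + 1]) for i in range(2))
)
fit_gpytorch_mll(SumMarginalLogLikelihood(model.likelihood, model))

# HVKG scores a candidate by the expected one-step gain in hypervolume of
# a fixed-size hypervolume-maximizing set after conditioning on the
# fantasized outcome(s) at the candidate.
acqf = qHypervolumeKnowledgeGradient(
    model=model,
    ref_point=torch.zeros(2, dtype=torch.double),  # assumed reference point
    num_fantasies=4,  # kept small so the sketch runs quickly
    num_pareto=10,
)

# qHVKG is a one-shot acquisition function: optimize_acqf optimizes the
# fantasy points jointly and returns only the actual candidate(s).
bounds = torch.tensor([[0.0, 0.0], [1.0, 1.0]], dtype=torch.double)
candidate, acq_value = optimize_acqf(
    acq_function=acqf,
    bounds=bounds,
    q=1,
    num_restarts=2,
    raw_samples=32,
)
```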

Other new features:

  • Add MultiOutputFixedCostModel, which is useful for decoupled scenarios where the objectives have different evaluation costs (#2093); see the cost-model sketch after this list.
  • Enable q > 1 in acquisition function optimization when nonlinear constraints are present (#1793).
  • Support different noise levels for different outputs in test functions (#2136); see the test-function sketch after this list.
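A sketch of how such a cost model might compose with cost-aware HVKG. The import path and the `fixed_cost` argument are assumptions based on the release note (see #2093 for the actual API); InverseCostWeightedUtility is BoTorch's existing cost-aware utility:

```python
import torch
from botorch.acquisition.cost_aware import InverseCostWeightedUtility
# Import path and `fixed_cost` argument are assumptions; see #2093.
from botorch.models.cost import MultiOutputFixedCostModel

# Per-output evaluation costs: e.g., the second objective is 3x as
# expensive to evaluate as the first (illustrative numbers).
cost_model = MultiOutputFixedCostModel(
    fixed_cost=torch.tensor([1.0, 3.0], dtype=torch.double)
)

# Divide the expected hypervolume gain by the cost of the selected
# evaluation(s), so cheap-but-informative queries are preferred.
cost_aware_utility = InverseCostWeightedUtility(cost_model=cost_model)

# In the decoupled setting, this utility is passed to
# qHypervolumeKnowledgeGradient via its `cost_aware_utility` argument,
# alongside an evaluation mask selecting which output(s) to observe.
```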
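A small sketch of per-output observation noise in a test function. BraninCurrin is used here as an assumed example; the list-valued noise_std is assumed to map one noise standard deviation to each output:

```python
import torch
from botorch.test_functions.multi_objective import BraninCurrin

# Assumed example: per-objective noise levels (Branin values span a much
# larger range than Currin values, so different stds are natural).
problem = BraninCurrin(noise_std=[15.0, 0.05])
X = torch.rand(4, 2)
Y = problem(X)  # noisy evaluations, one noise level per output
```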

Bug fixes

  • Fix fantasization with a FixedNoiseGaussianLikelihood when noise is known and X is empty (#2090).
  • Make LearnedObjective compatible with constraints in acquisition functions regardless of sample_shape (#2111).
  • Make input constructors for qExpectedImprovement, qLogExpectedImprovement, and qProbabilityOfImprovement compatible with LearnedObjective regardless of sample_shape (#2115).
  • Fix handling of constraints in qSimpleRegret (#2141).

Other changes

  • Increase default sample size for LearnedObjective (#2095).
  • Allow passing in X with or without fidelity dimensions in project_to_target_fidelity (#2102).
  • Use full-rank task covariance matrix by default in SAAS MTGP (#2104).
  • Rename FullyBayesianPosterior to GaussianMixturePosterior; add _is_ensemble and _is_fully_bayesian attributes to Model (#2108).
  • Various improvements to tutorials, including speedups, clearer explanations, and compatibility with newer versions of dependencies.