Update most models to use dimension-scaled log-normal hyperparameter priors by
default, which makes performance much more robust to dimensionality. See
discussion #2451 for details. The only models left unchanged are the fully
Bayesian models and PairwiseGP; for models that use a composite kernel, such as
the multi-fidelity/task/context models, this change only affects the base
kernel (#2449, #2450, #2507).
Use Standardize by default in all models that use the upgraded priors. Besides
reducing the boilerplate needed to initialize a model, this change was
motivated by the new default priors, which work less well when the data is not
standardized. Users who do not want to use transforms should explicitly pass in
None (#2458, #2532); see the sketch after this list.
Add input constructor for qMultiFidelityHypervolumeKnowledgeGradient (#2524).
Add posterior_transform to ApproximateGPyTorchModel.posterior (#2531).
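Taken together, the first two changes mean that a plain SingleTaskGP now comes with dimension-scaled priors and a Standardize outcome transform out of the box. A minimal sketch, assuming a BoTorch version that includes these changes and using only the public SingleTaskGP constructor; the opt-out via outcome_transform=None is as described above:

```python
import torch
from botorch.models import SingleTaskGP

train_X = torch.rand(20, 3, dtype=torch.double)
train_Y = torch.randn(20, 1, dtype=torch.double)

# Default: dimension-scaled log-normal hyperparameter priors and a
# Standardize outcome transform are applied automatically.
model = SingleTaskGP(train_X, train_Y)

# To opt out of the default outcome transform, pass None explicitly.
model_no_transform = SingleTaskGP(train_X, train_Y, outcome_transform=None)
```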
Bug fixes
Fix batch_shape default in OrthogonalAdditiveKernel (#2473).
Ensure all tensors are on CPU in HitAndRunPolytopeSampler (#2502).
Fix duplicate logging in generation/gen.py (#2504).
Raise exception if X_pending is set on the underlying AcquisitionFunction
in prior-guided AcquisitionFunction (#2505).
Make affine input transforms raise an error on data of incorrect dimension,
even in eval mode (#2510).
Use fidelity-aware current_value in input constructor for qMultiFidelityKnowledgeGradient (#2519).
Apply input transforms when computing MLL in model closures (#2527).
Detach fval in torch_minimize to remove an opportunity for memory leaks
(#2529).
Documentation
Clarify incompatibility of inter-point constraints with get_polytope_samples
(#2469).
Update tutorials to use the log variants of EI-family acquisition functions,
stop passing Standardize unnecessarily, and apply other simplifications and
cleanup (#2462, #2463, #2490, #2495, #2496, #2498, #2499).