
Commit 43e0ce8

taking upstream, squashing merge
2 parents: 92bc3de + e1afc9a

108 files changed: +11141 additions, -8920 deletions


.github/workflows/docs.yml

Lines changed: 0 additions & 2 deletions
@@ -27,8 +27,6 @@ jobs:
       with:
         python-version: "3.13"
     - name: Install dependencies
-      env:
-        ALLOW_LATEST_GPYTORCH_LINOP: true
       run: |
         uv pip install git+https://github.com/cornellius-gp/linear_operator.git
         uv pip install git+https://github.com/cornellius-gp/gpytorch.git

.github/workflows/nightly.yml

Lines changed: 0 additions & 6 deletions
@@ -34,8 +34,6 @@ jobs:
       with:
         python-version: "3.13"
     - name: Install dependencies
-      env:
-        ALLOW_LATEST_GPYTORCH_LINOP: true
       run: |
         uv pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu
         uv pip install git+https://github.com/cornellius-gp/linear_operator.git
@@ -49,13 +47,9 @@ jobs:
         no_local_version=$(python -m setuptools_scm | cut -d "+" -f 1)
         echo "SETUPTOOLS_SCM_PRETEND_VERSION=${no_local_version}" >> $GITHUB_ENV
     - name: Build packages (wheel and source distribution)
-      env:
-        ALLOW_LATEST_GPYTORCH_LINOP: true
       run: |
         python -m build --sdist --wheel
     - name: Verify packages
-      env:
-        ALLOW_LATEST_GPYTORCH_LINOP: true
       run: |
         ./scripts/verify_py_packages.sh
     - name: Deploy to Test PyPI

.github/workflows/publish_website.yml

Lines changed: 2 additions & 4 deletions
@@ -37,7 +37,7 @@ jobs:
     steps:
     - uses: actions/checkout@v4
       with:
-        ref: 'docusaurus-versions' # release branch
+        ref: 'gh-pages' # release branch
        fetch-depth: 0
     - name: Install uv
       uses: astral-sh/setup-uv@v5
@@ -57,8 +57,6 @@ jobs:
         uv pip install git+https://github.com/cornellius-gp/linear_operator.git
         uv pip install git+https://github.com/cornellius-gp/gpytorch.git
     - name: Install dependencies
-      env:
-        ALLOW_LATEST_GPYTORCH_LINOP: true
       run: |
         uv pip install ."[dev, tutorials]"
     - if: ${{ inputs.new_version }}
@@ -92,7 +90,7 @@ jobs:

         git add versioned_docs/ versioned_sidebars/ versions.json
         git commit -m "Create version ${{ inputs.new_version }} of site in Docusaurus"
-        git push --force origin HEAD:docusaurus-versions
+        git push --force origin HEAD:gh-pages
     - name: Build website
       run: |
         bash scripts/build_docs.sh -b

.github/workflows/reusable_test.yml

Lines changed: 0 additions & 2 deletions
@@ -38,8 +38,6 @@ jobs:
         uv pip install .[test]
     - name: Install dependencies with latest PyTorch & GPyTorch
       if: ${{ inputs.use_latest_pytorch_gpytorch }}
-      env:
-        ALLOW_LATEST_GPYTORCH_LINOP: true
       run: |
         uv pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu
         uv pip install git+https://github.com/cornellius-gp/linear_operator.git

.github/workflows/reusable_tutorials.yml

Lines changed: 4 additions & 7 deletions
@@ -47,16 +47,13 @@ jobs:
     - if: ${{ inputs.use_stable_pytorch_gpytorch }}
       name: Install min required PyTorch, GPyTorch, and linear_operator
       run: |
-        python setup.py egg_info
-        req_txt="botorch.egg-info/requires.txt"
-        min_torch_version=$(grep '\btorch[>=]=' ${req_txt} | sed 's/[^0-9.]//g')
-        min_gpytorch_version=$(grep '\bgpytorch[>=]=' ${req_txt} | sed 's/[^0-9.]//g')
-        min_linear_operator_version=$(grep '\blinear_operator[>=]=' ${req_txt} | sed 's/[^0-9.]//g')
+        # Extract minimum versions from pyproject.toml
+        min_torch_version=$(grep '"torch>=' pyproject.toml | sed 's/.*"torch>=\([^"]*\)".*/\1/')
+        min_gpytorch_version=$(grep '"gpytorch>=' pyproject.toml | sed 's/.*"gpytorch>=\([^"]*\)".*/\1/')
+        min_linear_operator_version=$(grep '"linear_operator>=' pyproject.toml | sed 's/.*"linear_operator>=\([^"]*\)".*/\1/')
         uv pip install "numpy<2" # Numpy >2.0 is not compatible with PyTorch <2.2.
         uv pip install "torch==${min_torch_version}" "gpytorch==${min_gpytorch_version}" "linear_operator==${min_linear_operator_version}" torchvision
     - name: Install BoTorch with tutorials dependencies
-      env:
-        ALLOW_LATEST_GPYTORCH_LINOP: true
       run: |
         uv pip install .[tutorials]
     - if: ${{ inputs.smoke_test }}

.github/workflows/test_stable.yml

Lines changed: 4 additions & 6 deletions
@@ -34,14 +34,12 @@ jobs:
         python-version: ${{ matrix.python-version }}
     - name: Install dependencies
       run: |
-        uv pip install setuptools # Needed for next line on Python 3.13.
-        python setup.py egg_info
-        req_txt="botorch.egg-info/requires.txt"
-        min_torch_version=$(grep '\btorch[>=]=' ${req_txt} | sed 's/[^0-9.]//g')
+        # Extract minimum versions from pyproject.toml
+        min_torch_version=$(grep '"torch>=' pyproject.toml | sed 's/.*"torch>=\([^"]*\)".*/\1/')
         # The earliest PyTorch version on Python 3.13 available for all OS is 2.6.0.
         min_torch_version="$(if ${{ matrix.python-version == '3.13' }}; then echo "2.6.0"; else echo "${min_torch_version}"; fi)"
-        min_gpytorch_version=$(grep '\bgpytorch[>=]=' ${req_txt} | sed 's/[^0-9.]//g')
-        min_linear_operator_version=$(grep '\blinear_operator[>=]=' ${req_txt} | sed 's/[^0-9.]//g')
+        min_gpytorch_version=$(grep '"gpytorch>=' pyproject.toml | sed 's/.*"gpytorch>=\([^"]*\)".*/\1/')
+        min_linear_operator_version=$(grep '"linear_operator>=' pyproject.toml | sed 's/.*"linear_operator>=\([^"]*\)".*/\1/')
         uv pip install "numpy<2" # Numpy >2.0 is not compatible with PyTorch <2.2.
         uv pip install "torch==${min_torch_version}" "gpytorch==${min_gpytorch_version}" "linear_operator==${min_linear_operator_version}"
         uv pip install .[test]
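
Note: the grep/sed pipeline used in this workflow and in reusable_tutorials.yml above is dense. As a rough, stand-alone illustration of what it extracts (not part of the commit), the Python sketch below performs the equivalent lookup against a hypothetical pyproject.toml excerpt; the sample dependency strings are assumptions, not the repository's actual file contents.

```python
import re

# Hypothetical excerpt of a pyproject.toml dependencies list (illustrative only).
pyproject_excerpt = '''
dependencies = [
    "torch>=2.0.1",
    "gpytorch>=1.14",
    "linear_operator>=0.6",
]
'''

def min_version(package: str, text: str) -> str:
    # Mirrors grep '"<pkg>>=' ... | sed 's/.*"<pkg>>=\([^"]*\)".*/\1/':
    # find the quoted requirement and keep everything between ">=" and the closing quote.
    match = re.search(rf'"{package}>=([^"]*)"', text)
    if match is None:
        raise ValueError(f"No pinned minimum found for {package}")
    return match.group(1)

for pkg in ("torch", "gpytorch", "linear_operator"):
    print(pkg, "->", min_version(pkg, pyproject_excerpt))
# For this sample: torch -> 2.0.1, gpytorch -> 1.14, linear_operator -> 0.6
```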

CHANGELOG.md

Lines changed: 56 additions & 0 deletions
@@ -2,6 +2,62 @@

 The release log for BoTorch.

+## [0.15.1] -- Aug 12, 2025
+This is a compatibility release, coming only one week after 0.15.0.
+
+#### New features
+* Enable optimizing a sequence of acquisition functions in `optimize_acqf` (#2931).
+
+## [0.15.0] -- Aug 5, 2025
+
+#### New Features
+* NP Regression Model w/ LIG Acquisition (#2683).
+* Fully Bayesian Matern GP with dimension scaling prior (#2855).
+* Option for input warping in non-linear fully Bayesian GPs (#2858).
+* Support for `condition_on_observations` in `FullyBayesianMultiTaskGP` (#2871).
+* Improvements to `optimize_acqf_mixed_alternating`:
+  * Support categoricals in alternating optimization (#2866).
+  * Batch mixed optimization (#2895).
+  * Non-equidistant discrete dimensions for `optimize_acqf_mixed_alternating` (#2923).
+  * Update syntax for categoricals in `optimize_acqf_mixed_alternating` (#2942).
+  * Equality constraints for `optimize_acqf_mixed_alternating` (#2944).
+* Multi-output acquisition functions and related utilities:
+  * Multi-Output Acquisition Functions (#2935).
+  * Utility for greedily selecting an approximate hypervolume maximizing subset (#2936).
+  * Update optimize with NSGA-II (#2937).
+  * Add utility for running pymoo NSGA-II (#2868).
+* Batched L-BFGS-B for more efficient acquisition function optimization (#2870, #2892).
+* Pathwise Thompson sampling for ensemble models (#2877).
+* ROBOT tutorial notebook (#2883).
+* Add community notebooks to the botorch.org website (#2913).
+
+#### Bug Fixes
+* Fix model paths in prior fitted networks (#2843).
+* Fix a bug where input transforms were not applied in fully Bayesian models in train mode (#2859).
+* Fix local `Y` vs global `Y_Train` in `generate_batch` function in TURBO tutorial (#2862).
+* Fix CUDA support for `FullyBayesianMTGP` (#2875).
+* Fix edge case with NaNs in `is_non_dominated` (#2925).
+* Normalize for correct fidelity in `qLowerBoundMaxValueEntropy` (#2930).
+* Bug: Botorch_community `VBLLModel` posterior doesn't work with single value tensor (#2929).
+* Fix variance shape bug in Riemann posterior (#2939).
+* Fix input constructor for `LogProbabilityOfFeasibility` (#2945).
+* Fix `AugmentedRosenbrock` problem and expand testing for optimizers (#2950).
+
+#### Other Changes
+* Improved documentation for `optimize_acqf` (#2865).
+* Fully Bayesian Multi-Task GP cleanup (#2869).
+* `average_over_ensemble_models` decorator for acquisition functions (#2873).
+* Changes to I-BNN tutorial (#2889).
+* Allow batched fixed features in gen_candidates_scipy and gen_candidates_torch (#2893)
+* Refactor of `MultiTask` / `FullyBayesianMultiTaskGP` to use `ProductKernel` and `IndexKernel` (#2908).
+* Various changes to PFNs to improve Ax compatibility (#2915, #2940).
+* Eliminate expensive indexing in `separate_mtmvn` (#2920).
+* Added reset method to `StoppingCriterion` (#2927).
+* Simplify closure dispatch (#2947).
+* Add BaseTestProblem.is_minimization_problem property (#2949).
+* Simplify NdarrayOptimizationClosure (#2951).
+
+
 ## [0.14.0] -- May 6, 2025

 #### Highlights

README.md

Lines changed: 5 additions & 9 deletions
@@ -56,8 +56,8 @@ Optimization simply use Ax.
 **Installation Requirements**
 - Python >= 3.10
 - PyTorch >= 2.0.1
-- gpytorch == 1.14
-- linear_operator == 0.6
+- gpytorch >= 1.14
+- linear_operator >= 0.6
 - pyro-ppl >= 1.8.4
 - scipy
 - multiple-dispatch
@@ -86,13 +86,11 @@ conda install botorch -c gpytorch -c conda-forge

 If you would like to try our bleeding edge features (and don't mind potentially
 running into the occasional bug here or there), you can install the latest
-development version directly from GitHub. If you want to also install the
-current `gpytorch` and `linear_operator` development versions, you will need
-to ensure that the `ALLOW_LATEST_GPYTORCH_LINOP` environment variable is set:
+development version directly from GitHub. You may also want to install the
+current `gpytorch` and `linear_operator` development versions:
 ```bash
 pip install --upgrade git+https://github.com/cornellius-gp/linear_operator.git
 pip install --upgrade git+https://github.com/cornellius-gp/gpytorch.git
-export ALLOW_LATEST_GPYTORCH_LINOP=true
 pip install --upgrade git+https://github.com/pytorch/botorch.git
 ```

@@ -117,7 +115,6 @@ pip install -e .
 ```bash
 git clone https://github.com/pytorch/botorch.git
 cd botorch
-export ALLOW_BOTORCH_LATEST=true
 pip install -e ".[dev, tutorials]"
 ```

@@ -136,7 +133,7 @@ For more details see our [Documentation](https://botorch.org/docs/introduction)
 ```python
 import torch
 from botorch.models import SingleTaskGP
-from botorch.models.transforms import Normalize, Standardize
+from botorch.models.transforms import Normalize
 from botorch.fit import fit_gpytorch_mll
 from gpytorch.mlls import ExactMarginalLogLikelihood

@@ -150,7 +147,6 @@ For more details see our [Documentation](https://botorch.org/docs/introduction)
     train_X=train_X,
     train_Y=Y,
     input_transform=Normalize(d=2),
-    outcome_transform=Standardize(m=1),
 )
 mll = ExactMarginalLogLikelihood(gp.likelihood, gp)
 fit_gpytorch_mll(mll)
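
For readers skimming the README hunks above, here is a rough, self-contained version of what the edited quickstart looks like once the `Standardize` outcome transform is dropped. The synthetic training data is an assumption added for illustration; it is not taken from the README, and the comment about default outcome standardization is an inference from the diff rather than something it states.

```python
import torch
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from botorch.models.transforms import Normalize
from gpytorch.mlls import ExactMarginalLogLikelihood

# Illustrative training data: 10 points in [0, 2]^2 with a noisy synthetic objective.
train_X = torch.rand(10, 2, dtype=torch.double) * 2
Y = 1 - torch.linalg.norm(train_X - 0.5, dim=-1, keepdim=True)
Y = Y + 0.1 * torch.randn_like(Y)

# No explicit outcome transform here, matching the edited README example
# (recent SingleTaskGP versions standardize outcomes by default).
gp = SingleTaskGP(
    train_X=train_X,
    train_Y=Y,
    input_transform=Normalize(d=2),
)
mll = ExactMarginalLogLikelihood(gp.likelihood, gp)
fit_gpytorch_mll(mll)
```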

botorch/acquisition/analytic.py

Lines changed: 2 additions & 8 deletions
@@ -37,7 +37,6 @@
 from botorch.utils.safe_math import log1mexp, logmeanexp
 from botorch.utils.transforms import (
     average_over_ensemble_models,
-    convert_to_target_pre_hook,
     t_batch_mode_transform,
 )
 from gpytorch.likelihoods.gaussian_likelihood import FixedNoiseGaussianLikelihood
@@ -100,7 +99,7 @@ def _mean_and_sigma(
             posterior. Removes the last two dimensions if they have size one. Only
             returns a single tensor of means if compute_sigma is True.
         """
-        self.to(device=X.device)  # ensures buffers / parameters are on the same device
+        self.to(X)  # ensures buffers / parameters are on the same device and dtype
         posterior = self.model.posterior(
             X=X, posterior_transform=self.posterior_transform
         )
@@ -584,7 +583,6 @@ def __init__(
         self.objective_index = objective_index
         self.register_buffer("best_f", torch.as_tensor(best_f))
         ConstrainedAnalyticAcquisitionFunctionMixin.__init__(self, constraints)
-        self.register_forward_pre_hook(convert_to_target_pre_hook)

     @t_batch_mode_transform(expected_q=1)
     @average_over_ensemble_models
@@ -638,9 +636,7 @@ class LogProbabilityOfFeasibility(
     _log: bool = True

     def __init__(
-        self,
-        model: Model,
-        constraints: dict[int, tuple[float | None, float | None]],
+        self, model: Model, constraints: dict[int, tuple[float | None, float | None]]
     ) -> None:
         r"""Analytic Log Probability of Feasibility.

@@ -654,7 +650,6 @@ def __init__(
         AcquisitionFunction.__init__(self, model=model)
         self.posterior_transform = None
         ConstrainedAnalyticAcquisitionFunctionMixin.__init__(self, constraints)
-        self.register_forward_pre_hook(convert_to_target_pre_hook)

     @t_batch_mode_transform(expected_q=1)
     @average_over_ensemble_models
@@ -730,7 +725,6 @@ def __init__(
         self.objective_index = objective_index
         self.register_buffer("best_f", torch.as_tensor(best_f))
         ConstrainedAnalyticAcquisitionFunctionMixin.__init__(self, constraints)
-        self.register_forward_pre_hook(convert_to_target_pre_hook)

     @t_batch_mode_transform(expected_q=1)
     @average_over_ensemble_models
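
The `_mean_and_sigma` change swaps `self.to(device=X.device)` for `self.to(X)`. The difference is standard PyTorch behavior rather than anything BoTorch-specific: passing a tensor to `nn.Module.to` matches both the tensor's device and its dtype. A minimal stand-alone sketch with a hypothetical module (not BoTorch code):

```python
import torch
from torch import nn

class Buffered(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        # A float32 buffer, analogous to a registered `best_f` value.
        self.register_buffer("best_f", torch.tensor(0.5))

m = Buffered()
X = torch.rand(3, 2, dtype=torch.float64)  # double-precision inputs

m.to(device=X.device)  # moves the buffer to X's device, but it stays float32
print(m.best_f.dtype)  # torch.float32

m.to(X)                # matches both device and dtype of X
print(m.best_f.dtype)  # torch.float64
```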

botorch/acquisition/cost_aware.py

Lines changed: 18 additions & 16 deletions
@@ -11,7 +11,6 @@

 from __future__ import annotations

-import warnings
 from abc import ABC, abstractmethod
 from collections.abc import Callable

@@ -21,7 +20,6 @@
     IdentityMCObjective,
     MCAcquisitionObjective,
 )
-from botorch.exceptions.warnings import CostAwareWarning
 from botorch.models.deterministic import DeterministicModel
 from botorch.models.gpytorch import GPyTorchModel
 from botorch.sampling.base import MCSampler
@@ -112,7 +110,7 @@ def __init__(
         cost_model: DeterministicModel | GPyTorchModel,
         use_mean: bool = True,
         cost_objective: MCAcquisitionObjective | None = None,
-        min_cost: float = 1e-2,
+        log: bool = False,
     ) -> None:
         r"""Cost-aware utility that weights increase in utility by inverse cost.
         For negative increases in utility, the utility is instead scaled by the
@@ -130,7 +128,8 @@ def __init__(
                 un-transform predictions/samples of a cost model fit on the
                 log-transformed cost (often done to ensure non-negativity). If the
                 cost model is multi-output, then by default this will sum the cost
-                across outputs.
+                across outputs. NOTE: ``cost_objective`` must output
+                strictly positive values; forward will raise a ``ValueError`` otherwise.
             min_cost: A value used to clamp the cost samples so that they are not
                 too close to zero, which may cause numerical issues.
         Returns:
@@ -147,7 +146,7 @@ def __init__(
         self.cost_model = cost_model
         self.cost_objective: MCAcquisitionObjective = cost_objective
         self._use_mean = use_mean
-        self._min_cost = min_cost
+        self._log = log

     def forward(
         self,
@@ -202,18 +201,21 @@ def forward(
            cost = none_throws(sampler)(cost_posterior)
        cost = self.cost_objective(cost)

-        # Ensure non-negativity of the cost
-        if torch.any(cost < -1e-7):
-            warnings.warn(
-                "Encountered negative cost values in InverseCostWeightedUtility",
-                CostAwareWarning,
-                stacklevel=2,
+        # Ensure that costs are positive
+        if not torch.all(cost > 0.0):
+            raise ValueError(
+                "Costs must be strictly positive. Consider clamping cost_objective."
             )
-        # clamp (away from zero) and sum cost across elements of the q-batch -
-        # this will be of shape `num_fantasies x batch_shape` or `batch_shape`
-        cost = cost.clamp_min(self._min_cost).sum(dim=-1)
+
+        # sum costs along q-batch
+        cost = cost.sum(dim=-1)

         # compute and return the ratio on the sample level - If `use_mean=True`
         # this operation involves broadcasting the cost across fantasies.
-        # We multiply by the cost if the deltas are <= 0, see discussion #2914
-        return torch.where(deltas > 0, deltas / cost, deltas * cost)
+        if self._log:
+            # if _log is True then input deltas are in log space
+            # so original deltas cannot be <= 0
+            return deltas - torch.log(cost)
+        else:
+            # We multiply by the cost if the deltas are <= 0, see discussion #2914
+            return torch.where(deltas > 0, deltas / cost, deltas * cost)
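
A rough, stand-alone illustration of the two return branches above (not the BoTorch implementation): when `log=True`, the deltas are assumed to already be log-improvements, so dividing the original-space improvement by the cost corresponds to subtracting `log(cost)` in log space. The numbers below are made up for the example.

```python
import torch

deltas = torch.tensor([0.8, 0.1, 2.0])  # hypothetical positive improvements
cost = torch.tensor([2.0, 0.5, 4.0])    # strictly positive per-sample costs

# Original-space weighting (log=False branch): divide positive deltas by cost.
weighted = torch.where(deltas > 0, deltas / cost, deltas * cost)

# Log-space weighting (log=True branch): with log-deltas, subtracting log(cost)
# is the same as dividing by cost before taking the log.
log_weighted = torch.log(deltas) - torch.log(cost)

# The two agree up to exponentiation for positive deltas.
assert torch.allclose(torch.exp(log_weighted), weighted)
```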
