Enable single precision via env vars #226

Merged
merged 4 commits into main from feature/precision_envvar on May 16, 2024

Conversation

@Scienfitz (Collaborator) commented on May 2, 2024

Added

  • an env var controlling single precision for torch
  • an env var controlling single precision for numpy
  • both are booleans that enforce single precision when True and default to False (see the sketch below)

Addressing suggestion from #223
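
For orientation, a minimal sketch of how such boolean env vars might be read and mapped to dtypes; the variable names (`BAYBE_NUMPY_USE_SINGLE_PRECISION`, `BAYBE_TORCH_USE_SINGLE_PRECISION`) and the string-to-bool handling below are assumptions, not necessarily what was merged:

```python
# Sketch only: env-var-driven dtype selection. The env var names and the
# accepted truthy strings are assumptions, not confirmed by this PR.
import os

import numpy as np
import torch


def _env_flag(name: str) -> bool:
    """Interpret an environment variable as a boolean flag (False if unset)."""
    return os.environ.get(name, "").lower() in ("true", "1", "yes")


# Single precision is opt-in; double precision remains the default.
DTypeFloatNumpy = np.float32 if _env_flag("BAYBE_NUMPY_USE_SINGLE_PRECISION") else np.float64
DTypeFloatTorch = torch.float32 if _env_flag("BAYBE_TORCH_USE_SINGLE_PRECISION") else torch.float64
```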

Notes:

  • Did some very light testing via fulltest, setting the tox env vars to use single precision there
  • Some tests expectedly fail due to numerical instability -> no action required
  • Some tests fail because hypothesis generates incompatible floats -> likely fixable by using st.floats(..., width=32) if single precision is desired (see the sketch below) -> not done, under the assumption that we don't want the test suite to run in single precision, which would require flexible checking everywhere
  • Some tests fail because constraints complain about mixing single and double precision -> likely because rhs and coefficients are defined as Python floats, hence 64-bit. This means enabling single precision via the env vars also requires providing explicit 32-bit floats in such fields (like coefficients or rhs); otherwise it won't work. This is not something I'd fix, but it should perhaps be mentioned in the user guide.
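
Purely illustrative sketch of the st.floats(..., width=32) adjustment mentioned in the notes; the bounds and flags are hypothetical and not part of this PR:

```python
# Illustration only: restrict hypothesis to values that survive a float32
# round-trip, avoiding spurious failures when running in single precision.
from hypothesis import strategies as st

finite_float32 = st.floats(
    min_value=-1e6,       # hypothetical bounds for the example
    max_value=1e6,
    allow_nan=False,
    allow_infinity=False,
    width=32,             # draw only 32-bit representable floats
)
```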

@Scienfitz self-assigned this on May 2, 2024
@Scienfitz added the "new feature" (New functionality) label on May 2, 2024
@AdrianSosic (Collaborator) left a comment:

Hi @Scienfitz, thanks for implementing this. I think the changes are completely fine 👍🏼 However, we should perhaps discuss your statements from the PR description:

  • I agree with the statement about failing tests due to numerical instability.
  • I don't fully agree with the statement about user input. The choice of float type should be entirely a backend concern and should not require users to change their inputs (which is annoying, requires us to write explanations in the user guide, and requires people to find those explanations). I also don't think that a proper implementation is complicated or requires much effort. The continuous constraint is a perfect example: instead of converting the user input (which would cause potential problems with roundtrip serialization), the conversion simply needs to happen in the corresponding compute calls, i.e. in this case in to_botorch, where the coefficients are already correctly converted. That means adding a simple torch.tensor(self.rhs, dtype=DTypeFloatTorch) should already suffice!? (see the sketch after this list) The only place where a "delayed" conversion is not possible is in the attributes of the discrete search space, but there we could in principle put appropriate eq-checks in place, if needed in the future.
  • Because of the above, I think we can omit float32 tests for now; if the first problems surface, we can still add them later.
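
A rough sketch of the "convert only at compute time" idea for a continuous linear constraint; this is not the actual BayBE implementation, and everything except `rhs`, `coefficients`, and `DTypeFloatTorch` (mentioned above) is a simplified stand-in:

```python
# Hedged sketch of the delayed-conversion idea. Class and field layout are
# simplified stand-ins; only `rhs`, `coefficients`, and `DTypeFloatTorch`
# come from the discussion above.
from dataclasses import dataclass

import torch

DTypeFloatTorch = torch.float64  # would be torch.float32 with single precision enabled


@dataclass
class ContinuousLinearConstraintSketch:
    parameters: list[str]
    coefficients: list[float]  # stored as plain Python floats, i.e. 64-bit
    rhs: float

    def to_botorch(self, param_names: list[str]) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        """Return botorch-style (indices, coefficients, rhs), casting only here."""
        indices = torch.tensor(
            [param_names.index(p) for p in self.parameters], dtype=torch.long
        )
        coefficients = torch.tensor(self.coefficients, dtype=DTypeFloatTorch)
        # Casting at compute time keeps user inputs untouched and avoids
        # roundtrip-serialization issues.
        rhs = torch.tensor(self.rhs, dtype=DTypeFloatTorch)
        return indices, coefficients, rhs
```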

@Scienfitz (Collaborator, Author) replied:

@AdrianSosic
please open a thread rather than a PR comment for discussion matters

CHANGELOG.md — review thread (outdated, resolved)
@Scienfitz force-pushed the feature/precision_envvar branch 2 times, most recently from b1de4ad to 221b9f1, on May 13, 2024 18:40
@Scienfitz force-pushed the feature/precision_envvar branch from 221b9f1 to cc45fa4 on May 16, 2024 09:42
@AdrianSosic (Collaborator) left a comment:

Great, thanks for taking the time to dive into the botorch issue. Send me the link to the issue once it's online so I can subscribe 👍🏼

@Scienfitz merged commit cbc1b0c into main on May 16, 2024
10 checks passed
@Scienfitz deleted the feature/precision_envvar branch on May 16, 2024 11:34
AdrianSosic added a commit that referenced this pull request May 21, 2024
A small user guide mentioning user-facing environment variables,
based on #226

rendered version on fork:
https://scienfitz.github.io/baybe-dev/userguide/envvars.html
Scienfitz added a commit that referenced this pull request Jun 4, 2024
Since floating-point precision can be controlled via env vars (#226),
various problems have surfaced that cause tests to fail in single
precision. This PR fixes those. They were mostly related to the way
`values` and `comp_df` were created for parameters, the way `selection`
was treated in `SubSelectionCondition`, and a `lookup` in a different
float precision being used in a simulation.

The only remaining issues with tests in single precision are numerical
instabilities (out of scope).
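
As a hypothetical illustration of the kind of dtype alignment this commit describes (the helper name and approach are assumptions, not the actual fix):

```python
# Hypothetical illustration: cast float columns of a simulation lookup to the
# configured numpy dtype so lookups don't mix 32-bit and 64-bit values.
import numpy as np
import pandas as pd

DTypeFloatNumpy = np.float32  # whatever dtype the env var selected


def align_lookup_dtype(lookup: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of the lookup with all float columns cast to the active dtype."""
    float_cols = lookup.select_dtypes(include="float").columns
    return lookup.astype({col: DTypeFloatNumpy for col in float_cols})
```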
Labels: new feature (New functionality)
3 participants