Enable single point precision via env vars #226
Conversation
Hi @Scienfitz, thanks for implementing this. I think the changes are completely fine 👍🏼 However, we should perhaps discuss your statements from the PR description:
- Agree with the statement about failing tests due to numerical instability.
- Don't fully agree with the statement about user input. I think the choice of float type should be entirely a backend thing and should not require changing inputs by the user (which is annoying, requires us to write explanations in the user guide, and requires people to find those explanations). But I also don't think that a proper implementation is complicated or requires much effort. The continuous constraint is a perfect example: instead of converting the user input (which would then cause potential problems with roundtrip serialization), I think the conversion simply needs to happen in the corresponding compute calls, i.e. in this case in `to_botorch`, where the coefficients are already correctly converted. That means adding a simple `torch.tensor(self.rhs, dtype=DTypeFloatTorch)` should already suffice!? The only place I see where a "delayed" conversion is not possible is in the attributes of the discrete search space, but there we could in principle put appropriate eq-checks into place, if needed in the future. (See the sketch after this comment.)
- Because of the above, I think we can omit float32 tests for now, but if we notice the first problems, we can still add them later.
Great, thanks for taking the time to dive into the botorch issue. Send me the link to the issue once it's online so I can subscribe 👍🏼
A small user guide section describing the user-facing environment variables introduced in #226. Rendered version on the fork: https://scienfitz.github.io/baybe-dev/userguide/envvars.html
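Roughly, such an env-var switch selects central dtype constants that the rest of the code base imports. A minimal sketch, assuming a single hypothetical variable name (the actual names and defaults are documented in the rendered user guide page above):

```python
import os

import numpy as np
import torch

# Hypothetical variable name, used here only for illustration; the real,
# user-facing names are documented in the user guide page linked above.
_SINGLE = os.environ.get("BAYBE_USE_SINGLE_PRECISION", "").lower() in ("1", "true")

# Central dtype constants imported by the rest of the code base.
DTypeFloatNumpy = np.float32 if _SINGLE else np.float64
DTypeFloatTorch = torch.float32 if _SINGLE else torch.float64

print(DTypeFloatNumpy, DTypeFloatTorch)
```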
Since floating point precision can be controlled via env vars (#226), various problems have surfaced that make tests fail in single precision. This PR fixes those. They were mostly related to the way `values` and `comp_df` were created for parameters, the way `selection` was treated in `SubSelectionCondition`, and a `lookup` with a different float precision being used in a simulation (see the sketch below). The only remaining issues with tests in single precision are numerical instabilities (out of scope).
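For illustration only, the lookup-related failure mode looks roughly like this (the column names and the casting step are made up for the example):

```python
import numpy as np
import pandas as pd

DTypeFloatNumpy = np.float32  # what the env vars would select for single precision

# A lookup built from plain Python floats defaults to float64 ...
lookup = pd.DataFrame({"x": [0.1, 0.2], "target": [1.0, 2.0]})
assert lookup["x"].dtype == np.float64

# ... so exact matching against values computed in float32 fails:
x32 = np.asarray([0.1], dtype=DTypeFloatNumpy)
print(np.isin(x32, lookup["x"].values))  # [False] due to the precision mismatch

# Keeping both sides in the configured precision makes the match work again:
lookup32 = lookup.astype({"x": DTypeFloatNumpy})
print(np.isin(x32, lookup32["x"].values))  # [ True]
```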
Added
Addressing suggestion from #223
Notes:
- Tests using hypothesis would need `st.floats(..., width=32)` if single precision is desired --> not done under the assumption that we don't want tests enabled for single precision; it would require flexible checking everywhere (see the sketch after these notes).
- `rhs` and `coefficients` are defined as Python floats, hence 64 bit. This means that using single precision via env vars also implies providing explicit 32-bit floats in such fields (like `coefficients` or `rhs`), otherwise it won't work. This is not something I'd fix, but perhaps worth mentioning in the user guide.
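For reference, restricting the hypothesis strategy to 32-bit-representable floats would look like the sketch below (not done in this PR, per the note above; the bounds and test body are illustrative):

```python
import numpy as np
from hypothesis import given, strategies as st


# width=32 restricts generation to values exactly representable as float32,
# so a float32 round trip does not change them.
@given(st.floats(min_value=-1e6, max_value=1e6, allow_nan=False, width=32))
def test_float32_roundtrip_is_exact(value):
    assert float(np.float32(value)) == value
```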