Add "fit only pytorch models" flag #291
Comments
Just adding here that when enabling this flag, we should deactivate data pre-processing, as this is done through scikit-learn.
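To make the point above concrete, here is a minimal sketch of what skipping the scikit-learn pre-processing step could look like when a pytorch-only flag is set. The flag name, step names, and `build_pipeline` helper are all hypothetical, not autoemulate's actual API:

```python
# Hypothetical sketch: if a pytorch-only flag is set, drop the
# scikit-learn pre-processing step when assembling the fit pipeline.
# Step and flag names are illustrative only.

def build_pipeline(steps, pytorch_only=False):
    """Return pipeline steps, dropping sklearn pre-processing if pytorch_only."""
    if pytorch_only:
        return [(name, step) for name, step in steps if name != "preprocess"]
    return steps


steps = [("preprocess", "StandardScaler()"), ("model", "GaussianProcess()")]
print(build_pipeline(steps, pytorch_only=True))
# only the model step remains; the sklearn scaler is skipped
```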
A bit of a brain dump here so that we can discuss this: I've just had a look at this issue and #295 (i.e. running and extracting PyTorch models). It's not as straightforward as I thought, because both Neural Processes and GPs need objects/data outside of the PyTorch object itself, and both are quite specific.
So the question is how to go ahead. A few thoughts:
Would be great to get your input here @marjanfamili @radka-j
@mastoffel can you point me to where in the code autoemulate does these additional steps? (I just don't know the codebase very well yet.)
So to do a proper …
Thanks, this is really helpful! I think we have to return it all as a single torch object that has the data it needs and a …
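The "single torch object that has the data it needs" idea above could be sketched roughly as below. This is a plain-Python stand-in (in practice one would subclass `torch.nn.Module` and attach tensors, e.g. via `register_buffer`); the class and attribute names are hypothetical, not autoemulate's real design:

```python
# Hypothetical sketch: bundle a fitted predictor with the extra context
# (e.g. the training data a GP conditions on) so the returned object is
# self-contained. Names are illustrative only.

class SelfContainedEmulator:
    """Bundle a predict function with the context data it needs."""

    def __init__(self, predict_fn, context):
        self._predict_fn = predict_fn
        self._context = context  # e.g. training inputs/outputs a GP needs

    def predict(self, x):
        # The context travels with the object, so downstream callers
        # only ever pass new inputs.
        return self._predict_fn(x, **self._context)


# toy usage: a "model" whose prediction depends on stored context
emu = SelfContainedEmulator(
    predict_fn=lambda x, offset: [xi + offset for xi in x],
    context={"offset": 2},
)
print(emu.predict([1, 2, 3]))  # → [3, 4, 5]
```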
Allow users to easily select to only fit emulators that have a PyTorch backend (currently this is GPs and CNPs). This is useful in cases where a downstream task relies on this.
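For discussion, a minimal sketch of what this could look like from the user's side. The `only_pytorch` keyword and the model names held in the sets are hypothetical placeholders, not autoemulate's actual interface:

```python
# Hypothetical sketch of the requested flag: restrict the candidate
# emulators to those with a PyTorch backend. All names are illustrative.

class AutoEmulateDemo:
    PYTORCH_BACKENDS = {"GaussianProcess", "ConditionalNeuralProcess"}
    ALL_MODELS = {"GaussianProcess", "ConditionalNeuralProcess", "RandomForest"}

    def __init__(self, only_pytorch=False):
        # With the flag set, only PyTorch-backed emulators are fitted.
        self.models = self.PYTORCH_BACKENDS if only_pytorch else self.ALL_MODELS


print(AutoEmulateDemo(only_pytorch=True).models)
# only the PyTorch-backed emulators remain
```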