
Dedicated inference GPU #16

Open
braceal opened this issue Feb 29, 2024 · 1 comment
Labels
enhancement New feature or request

Comments

braceal (Contributor) commented Feb 29, 2024

When simulations are very fast (or run for short time scales), they can generate data faster than training and inference can keep up. In that case, it may be useful to add an option that dedicates a GPU to inference so that inference always runs on the latest data.
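One way this could look, sketched as a minimal device-assignment helper. This is purely illustrative: the function name `assign_devices` and the convention of reserving the last GPU index are assumptions, not part of the project.

```python
# Hypothetical sketch: reserve the last GPU for inference and give the
# remaining devices to simulation/training workers. Falls back to sharing
# all devices when only one GPU is available or the option is disabled.

def assign_devices(num_gpus: int, dedicated_inference: bool):
    """Return (inference_gpus, worker_gpus) as lists of device indices."""
    if dedicated_inference and num_gpus > 1:
        # GPU num_gpus-1 is held exclusively for inference.
        return [num_gpus - 1], list(range(num_gpus - 1))
    # Otherwise inference shares every device with the other tasks.
    return list(range(num_gpus)), list(range(num_gpus))

print(assign_devices(4, dedicated_inference=True))   # ([3], [0, 1, 2])
print(assign_devices(1, dedicated_inference=True))   # ([0], [0])
```

The lists could then be exported per worker (e.g. via `CUDA_VISIBLE_DEVICES`) so the inference process never competes with training for its device.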

@braceal braceal added the enhancement New feature or request label Feb 29, 2024
braceal (Contributor, Author) commented Mar 2, 2024

If we add configurable parameters that specify the number of training workers and the number of inference workers, we can scale up more efficiently: multiple training jobs run concurrently, and inference always uses the latest set of weights.
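A minimal sketch of what those knobs and the "always use the latest weights" rule might look like. The names (`WorkerConfig`, `latest_weights`) and checkpoint representation are assumptions for illustration only.

```python
from dataclasses import dataclass


@dataclass
class WorkerConfig:
    # Hypothetical configurable parameters suggested in the comment.
    num_training_workers: int = 2   # concurrent training jobs
    num_inference_workers: int = 1  # workers dedicated to inference


def latest_weights(checkpoints: list[tuple[int, str]]) -> str:
    """Pick the newest checkpoint (by training step) for inference.

    checkpoints: (step, path) pairs produced by the training workers.
    """
    _step, path = max(checkpoints, key=lambda c: c[0])
    return path


cfg = WorkerConfig(num_training_workers=3)
print(cfg.num_training_workers, cfg.num_inference_workers)  # 3 1
print(latest_weights([(100, "w100.pt"), (250, "w250.pt"), (175, "w175.pt")]))  # w250.pt
```

With several training workers checkpointing independently, the inference workers would simply poll for the highest-step checkpoint before each batch of predictions.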
