This repository has been archived by the owner on Nov 3, 2023. It is now read-only.
The nightly GPU tests still (mostly) use only one GPU. We have full support for multiprocessing eval, and we should be taking advantage of it.
My suggestion is to add a `use_multiprocessing` argument to `testing_utils.eval_model` and have it toggle between the two scripts appropriately.
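As a rough sketch of what that toggle could look like: the two runner functions below are stand-ins for ParlAI's single-process and multiprocessing eval scripts (the real wiring inside `testing_utils.eval_model` would dispatch to the actual scripts), so treat the names and structure here as illustrative assumptions, not the final implementation.

```python
# Hypothetical sketch of a use_multiprocessing toggle for
# testing_utils.eval_model. The two runners below are stand-ins
# for the single-process and multiprocessing eval scripts.

def _single_process_eval(opt):
    # Stand-in for the existing single-GPU eval script.
    return {'script': 'eval_model', 'opt': opt}


def _multiprocessing_eval(opt):
    # Stand-in for the multiprocessing eval script.
    return {'script': 'multiprocessing_eval', 'opt': opt}


def eval_model(opt, use_multiprocessing=False):
    """Run eval, dispatching to the multiprocessing script when requested."""
    runner = _multiprocessing_eval if use_multiprocessing else _single_process_eval
    return runner(opt)
```

Existing test callers would be unaffected (the flag defaults to `False`), and the long GPU tests could opt in with `use_multiprocessing=True`.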
We will probably see at least a 25% speedup in the long GPU tests, and possibly around 2x locally. Measure the improvement locally, report it here, and push to confirm the improvement in real CI.
Note this is pretty much only useful for generative models, but there are a large number of those: at least the unlikelihood and dodeca models, perhaps Eric's style tests, and possibly the controllable models too.
This issue has not had activity in 30 days. Please feel free to reopen if you have more issues. You may apply the "never-stale" tag to prevent this from happening.