Mixed precision training? #487

PyTorch 1.6 has native support for automatic mixed precision (AMP) training: https://pytorch.org/blog/pytorch-1.6-released/
Should we take advantage of this? In particular, I think the larger batches would be nice for encoder and synthesizer training.
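The usage pattern PyTorch 1.6 introduces wraps the forward pass in torch.cuda.amp.autocast and scales the loss with torch.cuda.amp.GradScaler. Below is a minimal sketch of that generic loop; the model, optimizer, loss function, and data loader are placeholders, not this repo's actual encoder or synthesizer code.

```python
import torch
from torch.cuda.amp import autocast, GradScaler  # available since PyTorch 1.6

def train_amp(model, optimizer, loss_fn, data_loader, device="cuda"):
    """Generic mixed precision training loop using torch.cuda.amp."""
    scaler = GradScaler()
    model.to(device).train()
    for inputs, targets in data_loader:
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        # The forward pass runs in FP16 where it is safe, FP32 elsewhere.
        with autocast():
            loss = loss_fn(model(inputs), targets)
        # Scale the loss so small FP16 gradients do not underflow; the scaler
        # unscales them again before the optimizer update.
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()
```

Since activations are held in FP16 under autocast, the memory savings are what would make the larger encoder and synthesizer batches possible.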
Mozilla TTS tried it and encountered a bug with RNNs that is to be fixed in a future PyTorch release: mozilla/TTS#486 (comment)

PyTorch 1.7 has just been released, so it is time to try again. If we add AMP, the implementation must be clean while preserving support for lower versions of PyTorch.
See fatchord/WaveRNN#229 for an example of how to do this.
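A rough sketch of how that could be kept clean (this is not the actual WaveRNN#229 patch, and the helper names here are invented for illustration): hand the training loop either the real AMP objects or no-op stand-ins, so the same step code runs on PyTorch builds that predate torch.cuda.amp.

```python
import contextlib
import torch

def amp_components(use_amp: bool):
    """Return (autocast_factory, scaler); fall back to plain FP32 when AMP is unavailable."""
    if use_amp and torch.cuda.is_available() and hasattr(torch.cuda, "amp"):
        return torch.cuda.amp.autocast, torch.cuda.amp.GradScaler()
    # PyTorch < 1.6 or AMP disabled: a no-op context manager and no scaler.
    return contextlib.nullcontext, None

def training_step(model, optimizer, loss_fn, inputs, targets, autocast_ctx, scaler):
    optimizer.zero_grad()
    with autocast_ctx():
        loss = loss_fn(model(inputs), targets)
    if scaler is not None:
        scaler.scale(loss).backward()
        scaler.step(optimizer)
        scaler.update()
    else:
        loss.backward()
        optimizer.step()
    return loss.item()
```

With this shape, a single `--amp` style flag could select the mode without touching the rest of the training code.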
Closing due to lack of developer interest at this time.
I made a branch that supports mixed precision training. It is not recommended for use at this time. For me, mixed precision training is much slower than training without it, and the loss is occasionally …

PyTorch AMP enabled (Python 3.9.7 with Anaconda, pytorch==1.10.0):

Same setup without AMP:
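For anyone who wants to repeat this kind of comparison, a minimal timing harness could look like the sketch below; the model, optimizer, loss function, and `batches` list of pre-built (inputs, targets) pairs are placeholders rather than the repo's actual training code.

```python
import time
import torch
from torch.cuda.amp import autocast, GradScaler

def seconds_per_step(model, optimizer, loss_fn, batches, use_amp, device="cuda"):
    """Average wall-clock seconds per optimizer step, with AMP on or off."""
    scaler = GradScaler(enabled=use_amp)
    model.to(device).train()
    torch.cuda.synchronize()
    start = time.perf_counter()
    for inputs, targets in batches:
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        with autocast(enabled=use_amp):
            loss = loss_fn(model(inputs), targets)
        scaler.scale(loss).backward()  # with enabled=False this is a plain backward
        scaler.step(optimizer)
        scaler.update()
    torch.cuda.synchronize()  # flush queued CUDA work before stopping the clock
    return (time.perf_counter() - start) / len(batches)
```

Running it once with `use_amp=True` and once with `use_amp=False` on identical batches gives the per-step comparison described above.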
Dropping this due to poor performance and lack of interest.