
Training takes too long #198

Open · BiEchi opened this issue Feb 1, 2024 · 1 comment

BiEchi commented Feb 1, 2024
The training seems to take about 40 hours, and it uses only ~600 MB of CUDA memory. Is this normal?

```
Activations trained on:   0%| | 4997120/1000000000 [12:35<40:55:48, 6752.73it/s, stage=train]
```
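For what it's worth, the ETA in the bar is consistent with the reported rate. A quick back-of-the-envelope check, assuming the tqdm counter and rate are both measured in activations:

```python
# Sanity check on the progress bar's ETA (assuming the counter is in activations).
total = 1_000_000_000          # target activation count from the bar
done = 4_997_120               # activations processed so far
rate = 6752.73                 # activations per second reported by tqdm
hours = (total - done) / rate / 3600
print(f"{hours:.2f} h remaining")  # ~40.93 h, i.e. the 40:55:48 shown
```

So the ~41 h figure follows directly from the throughput; the question is really whether that throughput (and the 600 MB footprint) indicates the GPU is underutilized.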

BiEchi (Author) commented Feb 1, 2024

Let me rephrase my question: which file handles multi-GPU training for the autoencoder?
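For context, here is a minimal sketch of what DDP-based multi-GPU training for an autoencoder could look like in PyTorch. The `Autoencoder` class, its dimensions, and the placeholder batches below are hypothetical stand-ins, not this repository's actual code:

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
import torch.nn.functional as F
from torch.nn.parallel import DistributedDataParallel as DDP

class Autoencoder(nn.Module):
    """Hypothetical stand-in; substitute the repo's actual autoencoder."""
    def __init__(self, d_in: int = 768, d_hidden: int = 4096):
        super().__init__()
        self.encoder = nn.Linear(d_in, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_in)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(F.relu(self.encoder(x)))

def main() -> None:
    # torchrun sets the LOCAL_RANK / RANK / WORLD_SIZE environment variables.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = DDP(Autoencoder().cuda(local_rank), device_ids=[local_rank])
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)

    for _ in range(1000):
        # Placeholder: each rank would load its own shard of activations here.
        batch = torch.randn(4096, 768, device=local_rank)
        loss = F.mse_loss(model(batch), batch)
        opt.zero_grad()
        loss.backward()  # DDP all-reduces gradients across GPUs here
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with e.g. `torchrun --nproc_per_node=8 train.py`, each rank trains on its own shard of activations and DDP averages gradients, which should cut wall-clock time roughly in proportion to the GPU count, assuming data loading keeps up.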
