GPU VRAM load slowly increases #19

Open
GiannisPikoulis opened this issue Oct 2, 2022 · 1 comment

GiannisPikoulis commented Oct 2, 2022

Hello.

Thank you for your code. I am running clip_finetune() on CelebA-HQ-256x256 and monitoring my GPU VRAM usage. I notice a gradual increase in VRAM while the latents are being precomputed from the given dataset. Is this normal, and which part of the code is responsible for this behavior? I thought VRAM usage would reach its peak at the start and remain steady throughout training. Given this behavior, a large enough value of n_precomp_img will eventually lead to a memory overflow, which is definitely not desired.

Thanks in advance.
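A likely cause of this kind of gradual VRAM growth is that the precomputed latents are kept on the GPU (or keep their autograd graphs attached) instead of being detached and moved to host memory. Below is a minimal sketch of a precomputation loop that avoids the growth; encode_fn and dataloader are hypothetical stand-ins for the repo's actual DDIM-inversion step and image loader, not the actual DiffusionCLIP code:

```python
import torch

def precompute_latents(encode_fn, dataloader, device="cuda"):
    """Precompute latents without letting GPU memory grow with dataset size.

    encode_fn and dataloader are hypothetical placeholders for the repo's
    actual inversion step and image loader.
    """
    latents = []
    with torch.no_grad():                     # no autograd graphs during precomputation
        for images in dataloader:
            images = images.to(device)
            z = encode_fn(images)             # e.g. a deterministic DDIM forward pass
            # Detach and move each latent to CPU so VRAM stays flat regardless of
            # how large n_precomp_img is; move batches back to the GPU later as needed.
            latents.append(z.detach().cpu())
    return torch.cat(latents, dim=0)
```

If the existing code appends GPU tensors (or tensors still attached to the graph) to a Python list, memory will grow linearly with the number of precomputed images, which would match the behavior described above.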

@hniksoleimani

It happened to me as well and the fine-tuning process stopped. Using --clip_finetune_eff eventually leads to another error:

"ValueError: Expected tensor to be a tensor image of size (C, H, W). Got tensor.size() = torch.Size([1, 3, 256, 256])."
