Hello.
Thank you for your code. I am running clip_finetune() on CelebA-HQ (256x256) and monitoring GPU VRAM usage. I notice a gradual increase in VRAM while the latents are being precomputed from the dataset. Is this normal, and which part of the code is responsible for this behavior? I expected VRAM usage to reach its peak at the start of training and then remain steady. Given this behavior, a large enough n_precomp_img will eventually cause a memory overflow, which is definitely not desired.
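For context, my guess (not verified against your code) is that each precomputed latent stays resident on the GPU, e.g. appended to a list without being detached and moved to host memory. A minimal sketch of that pattern and the fix, using a toy encoder in place of the real model:

```python
import torch

@torch.no_grad()  # avoid building an autograd graph during precomputation
def precompute_latents(encoder, images, device="cpu"):
    latents = []
    for img in images:
        z = encoder(img.to(device))
        # Moving each latent back to CPU keeps VRAM flat; appending the
        # on-device tensor instead would grow GPU memory linearly with
        # the number of precomputed images (n_precomp_img).
        latents.append(z.cpu())
    return torch.stack(latents)

# Toy stand-in for the real encoder (hypothetical, for illustration only)
encoder = torch.nn.Linear(8, 4)
images = [torch.randn(8) for _ in range(5)]
lat = precompute_latents(encoder, images)
print(lat.shape)  # torch.Size([5, 4])
```

If the accumulation happens inside a `torch.no_grad()` block and the tensors are `.cpu()`-ed as above, VRAM should stay constant regardless of n_precomp_img.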
Thanks in advance.