
Fix bug with cache_vae_outputs and SDXL #25

Merged 2 commits into main from ryan/bugfix-vae-cache-sdxl on Aug 30, 2023
Conversation

RyanJDick
Collaborator

@RyanJDick RyanJDick commented Aug 25, 2023

Prior to this fix, enabling cache_vae_outputs when training with SDXL would raise an exception, because original_size_hw and crop_top_left_yx were not included in the cache. This change caches those fields along with the vae_outputs.

  • Tested SD v1, cache_vae_outputs=True, cache_text_encoder_outputs=True
  • Tested SD v1, cache_vae_outputs=False, cache_text_encoder_outputs=False
  • Tested SDXL, cache_vae_outputs=True, cache_text_encoder_outputs=True
  • Tested SDXL, cache_vae_outputs=False, cache_text_encoder_outputs=False
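The fix described above can be sketched roughly as follows. This is a hypothetical illustration, not the PR's actual code: the cache store is a plain dict stand-in, and the field names follow the PR description (original_size_hw, crop_top_left_yx, vae_output).

```python
# Hypothetical sketch of the fix: cache original_size_hw and crop_top_left_yx
# alongside the VAE output so the SDXL conditioning fields survive the cache
# round-trip. The dict-based cache is a stand-in for the project's real cache.

def cache_vae_example(cache: dict, idx: int, vae_output, example: dict) -> None:
    cache[idx] = {
        "vae_output": vae_output,
        # Previously only vae_output was cached; SDXL also needs these fields:
        "original_size_hw": example["original_size_hw"],
        "crop_top_left_yx": example["crop_top_left_yx"],
    }

def load_cached_example(cache: dict, idx: int) -> dict:
    # Return a shallow copy so callers can't mutate the cached entry in place.
    return dict(cache[idx])
```

With this shape, an SDXL training step can read the size/crop conditioning from the cache entry instead of the (unavailable) original example.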

```python
# The text encoder output may have been cached and included in the data_batch.
# If not, we calculate it here.
if "prompt_embeds" in data_batch:
    prompt_embeds = data_batch["prompt_embeds"]
    pooled_prompt_embeds = data_batch["pooled_prompt_embeds"]
```
Collaborator


I assume this will always exist when prompt_embeds does

Collaborator Author


Correct. If "pooled_prompt_embeds" wasn't set we'd want to raise an Exception.
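The invariant discussed in this exchange could be made explicit with a guard like the following. This is an assumed sketch, not the PR's literal code: the function name get_text_encoder_outputs is hypothetical, and it fails loudly when the cache entry is inconsistent, as the author suggests.

```python
# Sketch of the guard discussed above (hypothetical, not the PR's code):
# if "prompt_embeds" was cached, "pooled_prompt_embeds" must be present too,
# so a missing key should raise rather than be silently recomputed.

def get_text_encoder_outputs(data_batch: dict):
    if "prompt_embeds" in data_batch:
        if "pooled_prompt_embeds" not in data_batch:
            raise KeyError(
                "data_batch contains 'prompt_embeds' but not "
                "'pooled_prompt_embeds'; the text encoder cache is inconsistent."
            )
        return data_batch["prompt_embeds"], data_batch["pooled_prompt_embeds"]
    # Neither field cached: signal the caller to run the text encoder.
    return None, None
```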

@RyanJDick RyanJDick merged commit 90bf97f into main Aug 30, 2023
1 check passed
@RyanJDick RyanJDick deleted the ryan/bugfix-vae-cache-sdxl branch August 30, 2023 18:49