
Fix DEFAULT_PRECISION handling #6492

Merged: 1 commit into main on Jun 14, 2024

Conversation

@RyanJDick (Collaborator) commented on Jun 6, 2024

Summary

DEFAULT_PRECISION is a torch.dtype. Previously, DEFAULT_PRECISION was compared to a str ("float32") in several places to determine the default precision for an invocation. Due to the type mismatch, these comparisons would always resolve to False. This PR fixes all of these comparison type mismatches.

This is a bugfix that changes the default behaviour. In practice, it should not affect most users: it only changes behaviour for users who have configured float32 as their default precision (or who have an old GPU without float16 support).
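To illustrate the mismatch and the fix, here is a minimal sketch (the names and values are illustrative, not the repository's exact code):

```python
import torch

# Illustrative stand-in: in the repo, DEFAULT_PRECISION is a torch.dtype
# derived from the configured precision.
DEFAULT_PRECISION: torch.dtype = torch.float16

# Before: a torch.dtype never equals a str, so this was always False,
# even when the user had configured float32.
fp32_default = DEFAULT_PRECISION == "float32"

# After: compare against the matching torch.dtype instead.
fp32_default = DEFAULT_PRECISION == torch.float32
```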

QA Instructions

  • Change configuration to precision: float32 and confirm that the affected invocations default to float32.

Merge Plan

Checklist

  • The PR has a short but descriptive title, suitable for a changelog
  • Tests added / updated (if applicable): N/A
  • Documentation added / updated (if applicable)

@github-actions bot added the python (PRs that change python files) and invocations (PRs that change invocations) labels on Jun 6, 2024
@lstein (Collaborator) left a comment

Looks good to me so far.

Base automatically changed from ryan/split-up-latent-py to main June 7, 2024 16:01
@RyanJDick marked this pull request as ready for review on June 7, 2024 16:54
@RyanJDick (Collaborator, Author) commented:

@lstein This is ready for a full review now.

@lstein (Collaborator) left a comment

Very straightforward. Looks good.

Commit message (truncated): “…RECISION is a torch.dtype. Previously, it was compared to a str in a number of places where it would always resolve to False. This is a bugfix that results in a change to the default behavior. In practice, this will not change the behavior for many users, because it only causes a change in behavior if a user has configured float32 as their default precision.”
@hipsterusername merged commit 785bb1d into main on Jun 14, 2024
14 checks passed
@hipsterusername deleted the ryan/fix-default-precision branch on June 14, 2024 18:26
RyanJDick added a commit that referenced this pull request Jun 18, 2024
## Summary

No functional changes, just cleaning some things up as I touch the code.
This PR cleans up the `SilenceWarnings` context manager:
- Fix type errors
- Enable SilenceWarnings to be used as both a context manager and a decorator (see the sketch after this list)
- Remove duplicate implementation
- Check the initial verbosity on `__enter__()` rather than `__init__()`
- Save an indentation level in DenoiseLatents
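
As a reference point, here is a minimal sketch of the context-manager-plus-decorator pattern using `contextlib.ContextDecorator` (an assumed approach for illustration, not the repository's implementation; it only handles Python warnings and omits any logger-verbosity handling):

```python
import warnings
from contextlib import ContextDecorator


class SilenceWarningsSketch(ContextDecorator):
    """Suppress Python warnings; usable as a context manager or a decorator."""

    def __enter__(self):
        # Capture and override the warning filters on entry (not in __init__),
        # so exit restores whatever state was active when the block was entered.
        self._catcher = warnings.catch_warnings()
        self._catcher.__enter__()
        warnings.simplefilter("ignore")
        return self

    def __exit__(self, *exc_info):
        self._catcher.__exit__(*exc_info)
        return False


# As a context manager:
with SilenceWarningsSketch():
    warnings.warn("not shown")

# As a decorator (ContextDecorator wraps each call in the context):
@SilenceWarningsSketch()
def noisy_step():
    warnings.warn("also not shown")

noisy_step()
```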

## QA Instructions

I generated an image to confirm that warnings are still muted.

## Merge Plan

- [x] ⚠️ Merge #6492 first, then change the target branch to `main`.

## Checklist

- [x] _The PR has a short but descriptive title, suitable for a changelog_
- [x] _Tests added / updated (if applicable)_
- [x] _Documentation added / updated (if applicable)_