Add comment explaining the cache make room call
brandonrising committed Sep 5, 2024
1 parent fed9da9 commit 667188d
Showing 1 changed file with 3 additions and 0 deletions.
3 changes: 3 additions & 0 deletions invokeai/backend/model_manager/load/model_loaders/flux.py
@@ -199,6 +199,9 @@ def _load_from_singlefile(
         if "model.diffusion_model.double_blocks.0.img_attn.norm.key_norm.scale" in sd:
             sd = convert_bundle_to_flux_transformer_checkpoint(sd)
         futures: list[torch.jit.Future[tuple[str, torch.Tensor]]] = []
+        # For the first iteration we are just requesting the current size of the state dict
+        # This is due to an expected doubling of the tensor sizes in memory after converting float8 -> float16
+        # This should be refined in the future if not removed entirely when we support more data types
         sd_size = asizeof.asizeof(sd)
         cache_updated = False
         for k in sd.keys():
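The added comment says the measured state-dict size is only a starting point, because upcasting float8 tensors to float16 doubles the bytes per element while leaving the element count unchanged. A minimal sketch of that sizing argument (the function name and byte constants here are illustrative, not part of the InvokeAI codebase):

```python
# Illustrative sketch: why the pre-conversion state-dict size is expected
# to double when float8 tensors are upcast to float16.

FLOAT8_BYTES = 1   # float8 formats store one byte per element
FLOAT16_BYTES = 2  # float16 stores two bytes per element


def estimated_size_after_upcast(current_size_bytes: int) -> int:
    """Estimate memory use after converting every float8 tensor to float16.

    The number of elements is unchanged; each element grows from
    1 byte to 2 bytes, so the total size doubles.
    """
    return current_size_bytes * (FLOAT16_BYTES // FLOAT8_BYTES)


# An 11 GiB float8 checkpoint would need roughly 22 GiB once upcast.
print(estimated_size_after_upcast(11 * 1024**3))
```

As the comment itself notes, this doubling heuristic only holds while float8 → float16 is the sole conversion path; supporting more source and target dtypes would require a per-tensor estimate instead of a single multiplier.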
