
Commit

Fix flaky FLUX LoRA unit test that fails occasionally due to numerical precision.
RyanJDick committed Sep 20, 2024
1 parent e690364 commit 1bd33f1
Showing 1 changed file with 3 additions and 1 deletion.
tests/backend/lora/test_lora_patcher.py (4 changes: 3 additions & 1 deletion)
@@ -192,4 +192,6 @@ def test_apply_lora_sidecar_patches_matches_apply_lora_patches(num_layers: int):
     with LoRAPatcher.apply_lora_sidecar_patches(model=model, patches=lora_models, prefix="", dtype=dtype):
         output_lora_sidecar_patches = model(input)
 
-    assert torch.allclose(output_lora_patches, output_lora_sidecar_patches)
+    # Note: We set atol=1e-5 because the test failed occasionally with the default atol=1e-8. Slight numerical
+    # differences are tolerable and expected due to the difference between sidecar vs. patching.
+    assert torch.allclose(output_lora_patches, output_lora_sidecar_patches, atol=1e-5)
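Side note (not part of the commit): torch.allclose checks |a - b| <= atol + rtol * |b| elementwise, so for output elements near zero the absolute tolerance dominates and the default atol=1e-8 can reject tiny float drift that the looser atol=1e-5 absorbs. The sketch below uses made-up tensor values to illustrate this; it is not taken from the test.

import torch

# Illustrative values only: emulate tiny numerical drift between two
# computation paths, with one output element near zero.
a = torch.tensor([0.0, 1.0])
b = a + 2e-6

print(torch.allclose(a, b))             # False: with the default atol=1e-8, the near-zero element fails
print(torch.allclose(a, b, atol=1e-5))  # True: the looser atol absorbs the drift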
