Hi, I've been looking at the face segmentation masking feature. IIUC, it should work such that only the face is learnt and nothing else, so the loss on non-face pixels should be masked to 0.
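For reference, here's a minimal sketch of what I'd expect "masked to 0" to mean in the loss computation (not the repo's actual code; the shapes and the hard 0/1 mask below are made up for illustration):

```python
import torch
import torch.nn.functional as F

# Sketch only: a hard 0/1 face mask applied to the per-pixel diffusion loss.
# Shapes and the mask itself are hypothetical, not taken from the repo.
model_pred = torch.randn(1, 4, 64, 64)  # hypothetical predicted noise (latent space)
target = torch.randn(1, 4, 64, 64)      # hypothetical target noise
mask = torch.zeros(1, 1, 64, 64)        # 1 inside the face region, 0 elsewhere
mask[:, :, 20:44, 20:44] = 1.0

per_pixel = F.mse_loss(model_pred, target, reduction="none")
# Non-face pixels contribute exactly 0; average only over the face pixels.
masked_loss = (per_pixel * mask).sum() / mask.expand_as(per_pixel).sum()
```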
However, when running the example use_face_conditioning_example.sh and printing the mask values after lora/lora_diffusion/cli_lora_pti.py lines 342 to 358 (commit bdd51b0) with torch.unique(mask, return_counts=True), I'm seeing that the lowest value is 0.35:

(tensor([0.3522, 0.3603, 0.4087, 0.4490, 1.0000], device='cuda:0'), tensor([6204, 1, 13, 13, 169], device='cuda:0'))
I see that the mask values are being adjusted here: https://github.com/cloneofsimo/lora/blob/master/lora_diffusion/dataset.py#L288-L295
They're first normalized with mean 0.5 and std 0.5, then multiplied by 0.5 with 1 added, resulting in a mean of 1.25. Is this intended?
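To make the arithmetic concrete, here's a small sketch of that adjustment as I read it (assuming the mask goes through a torchvision Normalize([0.5], [0.5]) like the instance images and is then scaled by 0.5 and shifted by 1; the binary mask is made up):

```python
import torch
from torchvision import transforms

# Sketch of the adjustment described above (my reading, not a verbatim copy of
# dataset.py): Normalize([0.5], [0.5]) followed by * 0.5 + 1.
normalize = transforms.Normalize([0.5], [0.5])

mask = torch.zeros(1, 64, 64)   # hypothetical binary face mask in [0, 1]
mask[:, 20:44, 20:44] = 1.0     # 1 = face, 0 = background

adjusted = normalize(mask) * 0.5 + 1.0  # (2x - 1) * 0.5 + 1  ==  x + 0.5

print(torch.unique(adjusted, return_counts=True))
# Background pixels land at 0.5 and face pixels at 1.5, so non-face pixels are
# only down-weighted in the loss rather than masked to 0.
```

If that's what's happening, it would explain why the minimum printed above stays well above 0 (modulo whatever resizing/rescaling happens later in cli_lora_pti.py).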