The output image of FaceAnalysis is a BGR image, but the code in IP-Adapter-FaceID and the Hugging Face docs both use it directly as the input to the CLIP image encoder (which expects an RGB image).
This can lead to problems such as unexpected blue hair.
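For illustration, a minimal sketch of where the channel order flips (the image path and variable names are placeholders, not code from the repo): FaceAnalysis operates on BGR (cv2-style) arrays, so any crop taken from its input needs an explicit conversion before it reaches the CLIP image encoder.

```python
import cv2
from insightface.app import FaceAnalysis
from insightface.utils import face_align

app = FaceAnalysis(name="buffalo_l", providers=["CUDAExecutionProvider", "CPUExecutionProvider"])
app.prepare(ctx_id=0, det_size=(640, 640))

image_bgr = cv2.imread("person.jpg")   # cv2 loads images as BGR
faces = app.get(image_bgr)             # insightface detection/embedding expect BGR input

# The aligned face crop inherits the BGR channel order of the input
face_crop_bgr = face_align.norm_crop(image_bgr, landmark=faces[0].kps, image_size=224)

# Without this conversion, the CLIP image encoder sees swapped channels
# (e.g. the "blue hair" artifact mentioned above)
face_crop_rgb = cv2.cvtColor(face_crop_bgr, cv2.COLOR_BGR2RGB)
```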
I'm not sure this snippet is quite correct. If we take the Hugging Face code, your code will save the crops into the ip_adapter_images list in BGR format and pass them to the prepare_ip_adapter_image_embeds function:
```python
...
# insightface embeddings and crop extraction
...
clip_embeds = pipeline.prepare_ip_adapter_image_embeds(
    [ip_adapter_images], None, torch.device("cuda"), num_images, True)[0]
pipeline.unet.encoder_hid_proj.image_projection_layers[0].clip_embeds = clip_embeds.to(dtype=torch.float16)
pipeline.unet.encoder_hid_proj.image_projection_layers[0].shortcut = False  # True if Plus v2
```
However, if I'm not mistaken, prepare_ip_adapter_image_embeds() accepts an RGB image.
I would rather suggest getting the insightface embeddings from the BGR image but cropping the original RGB one; see the sketch below.
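A rough sketch of that suggestion, assuming pipeline and num_images are already set up as in the snippet above, and using a placeholder image path:

```python
import cv2
import torch
from PIL import Image
from insightface.app import FaceAnalysis
from insightface.utils import face_align

app = FaceAnalysis(name="buffalo_l", providers=["CUDAExecutionProvider", "CPUExecutionProvider"])
app.prepare(ctx_id=0, det_size=(640, 640))

image_bgr = cv2.imread("person.jpg")                    # BGR, as insightface expects
image_rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)  # RGB copy for the CLIP encoder

# Face detection and ID embedding run on the BGR image
faces = app.get(image_bgr)
id_embeds = torch.from_numpy(faces[0].normed_embedding).unsqueeze(0)  # passed to the pipeline as usual

# The aligned crop is taken from the RGB copy, so CLIP gets the correct channel order
face_crop_rgb = face_align.norm_crop(image_rgb, landmark=faces[0].kps, image_size=224)
ip_adapter_images = [Image.fromarray(face_crop_rgb)]

clip_embeds = pipeline.prepare_ip_adapter_image_embeds(
    [ip_adapter_images], None, torch.device("cuda"), num_images, True)[0]
pipeline.unet.encoder_hid_proj.image_projection_layers[0].clip_embeds = clip_embeds.to(dtype=torch.float16)
pipeline.unet.encoder_hid_proj.image_projection_layers[0].shortcut = False  # True if Plus v2
```

Since norm_crop is only a geometric warp, it gives the same crop on the RGB copy as on the BGR original, so nothing else in the flow needs to change.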