Hi, thanks for open-sourcing your code. I noticed that the text and image vectors you use to compute the logits are not unit-normalized: https://github.com/moein-shariatnia/OpenAI-CLIP/blob/e2c5bb3859d7478752af8c69862f63b1afe4a9cb/modules.py#L68

In this case, the two vectors can have arbitrary lengths, so their dot product does not capture cosine similarity the way OpenAI's CLIP implementation does. Do you have any intuition for why you preferred LayerNorm here instead of L2 normalization?

Yes, you're right. Normalizing the features before calculating the loss is a better option than relying on LayerNorm to account for this. I will update the code to add this. Contributions are also welcome! Thanks a lot.
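For illustration, here is a minimal sketch of what L2-normalizing the embeddings before the dot product could look like. This is not code from the repo; the tensor names, shapes, and the `temperature` scalar are assumptions for the example.

```python
import torch
import torch.nn.functional as F

# Assumed shapes: (batch_size, embedding_dim) projections from the image and text encoders.
image_embeddings = torch.randn(8, 256)
text_embeddings = torch.randn(8, 256)
temperature = 1.0  # assumed scalar hyperparameter

# L2-normalize each embedding so that the dot product equals cosine similarity.
image_embeddings = F.normalize(image_embeddings, p=2, dim=-1)
text_embeddings = F.normalize(text_embeddings, p=2, dim=-1)

# Cosine-similarity logits between every text/image pair, scaled by temperature.
logits = (text_embeddings @ image_embeddings.T) / temperature
```

With the normalization in place, each logit is bounded in [-1, 1] before temperature scaling, which matches the cosine-similarity formulation used in OpenAI's CLIP.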