NTXent loss when more than 2 positive samples? #698
-
I need clarification on how the loss is computed when there are more than 2 positive samples. I imagine the reducer somehow averages every possible pairwise loss, but I'm not sure how it works. For example, if I have features F in a tensor of size (latent_size, 6) and the pair labels are labels = (1, 1, 1, 2, 2, 2), then each "1" is a positive for the other "1"s, each "2" is a positive for the other "2"s, and every (1, 2) pair is a negative.
Replies: 2 comments 1 reply
-
I'm busy right now but I'll try to get back to you later today. In the meantime, let me know if the section "How exactly is the NTXentLoss computed?" helps in the docs: https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#ntxentloss
-
It'll be the average of the per-positive-pair losses. With labels (1, 1, 1, 2, 2, 2), every anchor forms a positive pair with each other same-label sample (12 pairs in total), and each pair (a, p) contributes -log(exp(s_ap / t) / (exp(s_ap / t) + sum_n exp(s_an / t))), where t is the temperature and the sum runs over the anchor's negatives, i.e. the samples with a different label.
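To make that averaging concrete, here is a minimal pure-Python sketch of NT-Xent with multiple positives per label. This is only an illustration of the formula described in the docs, not the library's actual (vectorized PyTorch) implementation; the function name `ntxent_loss` and the cosine-similarity helper are my own.

```python
import math


def ntxent_loss(embeddings, labels, temperature=0.5):
    """Sketch of NT-Xent with multiple positives per label: the final
    loss is the plain average over every (anchor, positive) pair, which
    is what the default mean reducer does."""

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    n = len(embeddings)
    pair_losses = []
    for i in range(n):
        # Negatives for this anchor: every sample with a different label.
        negatives = [k for k in range(n) if labels[k] != labels[i]]
        neg_sum = sum(
            math.exp(cosine(embeddings[i], embeddings[k]) / temperature)
            for k in negatives
        )
        for j in range(n):
            if j != i and labels[j] == labels[i]:
                # One loss term per positive pair (i, j).
                pos = math.exp(cosine(embeddings[i], embeddings[j]) / temperature)
                pair_losses.append(-math.log(pos / (pos + neg_sum)))
    # Labels (1,1,1,2,2,2) -> 6 anchors x 2 positives each = 12 terms.
    return sum(pair_losses) / len(pair_losses)


# Example matching the question: 6 samples, labels (1, 1, 1, 2, 2, 2).
embs = [[1.0, 0.0], [0.9, 0.1], [1.0, 0.1],
        [0.0, 1.0], [0.1, 0.9], [0.1, 1.0]]
print(ntxent_loss(embs, [1, 1, 1, 2, 2, 2], temperature=0.1))
```

When the same-label embeddings are tightly clustered and far (in cosine terms) from the other cluster, every one of the 12 pair losses is small, so the average is small too.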