Description
Hi,
is it expected that the MultiSimilarityMiner will produce positive pairs that don't actually have the same label?
For example, one of my batches has items with the following labels (this is with a small batch size of only 8, just to illustrate the problem):
tensor([ 15, 15, 15, 15, 169, 169, 169, 169], device='mps:0')
I use the MultiSimilarityMiner to mine pairs for MultiSimilarityLoss. If I print out the values of mat, pos_mask, and neg_mask in the compute_loss function of MultiSimilarityLoss, they are:
mat:
tensor([[1.0000, 0.9996, 0.9975, 0.9994, 0.9948, 0.9836, 0.9968, 0.9975],
[0.9996, 1.0000, 0.9952, 0.9981, 0.9919, 0.9798, 0.9950, 0.9963],
[0.9975, 0.9952, 1.0000, 0.9991, 0.9977, 0.9879, 0.9993, 0.9980],
[0.9994, 0.9981, 0.9991, 1.0000, 0.9974, 0.9876, 0.9979, 0.9975],
[0.9948, 0.9919, 0.9977, 0.9974, 1.0000, 0.9960, 0.9947, 0.9917],
[0.9836, 0.9798, 0.9879, 0.9876, 0.9960, 1.0000, 0.9823, 0.9767],
[0.9968, 0.9950, 0.9993, 0.9979, 0.9947, 0.9823, 1.0000, 0.9992],
[0.9975, 0.9963, 0.9980, 0.9975, 0.9917, 0.9767, 0.9992, 1.0000]],
device='mps:0', grad_fn=<MmBackward0>)
pos_mask:
tensor([[0., 1., 1., 1., 1., 1., 1., 1.],
[1., 0., 1., 1., 1., 1., 1., 1.],
[1., 1., 0., 1., 1., 1., 1., 1.],
[1., 1., 1., 0., 1., 1., 1., 1.],
[1., 1., 1., 1., 0., 1., 1., 1.],
[1., 1., 1., 1., 1., 0., 1., 1.],
[1., 1., 1., 1., 1., 1., 0., 1.],
[1., 1., 1., 1., 1., 1., 1., 0.]], device='mps:0')
neg_mask:
tensor([[0., 0., 0., 0., 1., 1., 1., 1.],
[0., 0., 0., 0., 1., 1., 1., 1.],
[0., 0., 0., 0., 1., 1., 1., 1.],
[0., 0., 0., 0., 1., 1., 1., 1.],
[1., 1., 1., 1., 0., 0., 0., 0.],
[1., 1., 1., 1., 0., 0., 0., 0.],
[1., 1., 1., 1., 0., 0., 0., 0.],
[1., 1., 1., 1., 0., 0., 0., 0.]], device='mps:0')
This is right at the beginning of training, so the similarity scores in mat are total garbage, but the pos_mask looks wrong to me: it marks every off-diagonal pair as positive, including pairs that don't share the same label. Is that expected for some reason?
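For comparison, here is what I would expect both masks to look like for these labels, computed directly from label equality (a standalone sketch, not using the library's internals; tensor kept on CPU for simplicity):

```python
import torch

# Labels from the batch above
labels = torch.tensor([15, 15, 15, 15, 169, 169, 169, 169])

# Expected positive mask: 1 where two items share a label, 0 elsewhere,
# with self-pairs on the diagonal zeroed out
expected_pos = (labels.unsqueeze(1) == labels.unsqueeze(0)).float()
expected_pos.fill_diagonal_(0)

# Expected negative mask: 1 where the labels differ
expected_neg = (labels.unsqueeze(1) != labels.unsqueeze(0)).float()

print(expected_pos)  # block-diagonal ones (minus the diagonal)
print(expected_neg)  # ones only across the two label groups
```

The neg_mask printed above matches expected_neg, but the pos_mask does not match expected_pos.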