Unfortunately there isn't a way to pass in a custom label comparison function into miners or loss functions. It would be a good idea to add this feature though, so I will keep this issue open.
Edit:
Actually I think you can write a miner to accomplish what you're talking about:
```python
from pytorch_metric_learning.miners import BaseMiner

class CustomMiner(BaseMiner):
    def mine(self, embeddings, labels, ref_emb, ref_labels):
        # compare labels and ref_labels however you want;
        # return a tuple (a1, p, a2, n)
        # where (a1, p) are the positive pair indices
        # and (a2, n) are the negative pair indices
        ...

miner = CustomMiner()
pairs = miner(embeddings, labels)
loss = loss_fn(embeddings, indices_tuple=pairs)
```
It's not ideal but it's the only workaround I can think of.
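To make that concrete for the multi-label case, here is a sketch of the pair-mining logic such a custom miner could use. The rule it assumes (two samples form a positive pair when their multi-hot label vectors share at least one class) and the `min_shared` threshold are illustrative choices, not part of the library:

```python
import torch

def multilabel_pairs(labels, min_shared=1):
    # labels: (N, C) multi-hot tensor; two samples form a positive pair
    # when they share at least `min_shared` classes (illustrative rule)
    labels = labels.float()
    overlap = labels @ labels.t()                  # (N, N) shared-class counts
    eye = torch.eye(len(labels), dtype=torch.bool)
    pos = (overlap >= min_shared) & ~eye           # exclude self-pairs
    neg = (overlap < min_shared) & ~eye
    a1, p = torch.where(pos)
    a2, n = torch.where(neg)
    return a1, p, a2, n                            # the tuple shape a miner returns
```

Inside `mine` you would compare `labels` against `ref_labels` in the same way (dropping the diagonal mask when the two sets differ), and the resulting `(a1, p, a2, n)` tuple can be passed to the loss as `indices_tuple`.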
This is a great package and it has improved my efficiency. When I test on CIFAR-10, I can transform the one-hot labels into class indices, but I can't find a good way to handle a multi-label dataset.
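For reference, one common way to collapse single-label one-hot targets into the integer class indices the losses expect (just one possible approach, not necessarily the snippet referred to above) is `argmax` along the class dimension:

```python
import torch

# one-hot rows -> integer class indices (single-label case only)
one_hot = torch.tensor([[0, 0, 1],
                        [1, 0, 0],
                        [0, 1, 0]])
labels = one_hot.argmax(dim=1)  # tensor([2, 0, 1])
```

This only makes sense when exactly one entry per row is set, which is precisely why it breaks down for multi-label data.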
At first I saw this issue, which shows a way to pass in multi-label targets, but I want to customize it further because I need to construct a similarity matrix.
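For example, one way to express such a similarity matrix over multi-hot labels (using Jaccard similarity between label sets, which is just one possible choice) might be:

```python
import torch

def label_similarity(labels):
    # labels: (N, C) multi-hot tensor; returns an (N, N) matrix of
    # Jaccard similarities between the samples' label sets
    labels = labels.float()
    inter = labels @ labels.t()                    # |A ∩ B| per pair
    counts = labels.sum(dim=1, keepdim=True)
    union = counts + counts.t() - inter            # |A ∪ B| per pair
    return inter / union.clamp(min=1)              # avoid division by zero
```

A miner like the one sketched earlier could then threshold this matrix instead of counting raw class overlap.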
I hope to receive a response from you soon. Thank you.