Releases: KevinMusgrave/pytorch-metric-learning
v1.6.1
Bug Fixes
Fixed a bug in `mean_average_precision` in `AccuracyCalculator`. Previously, the divisor for each sample was the number of correctly retrieved samples. In the new version, the divisor for each sample is `min(k, num_relevant)`.

For example, if class "A" has 11 samples, then `num_relevant` is 11 for every sample with the label "A".
- If `k = 5`, meaning that 5 nearest neighbors are retrieved for each sample, then the divisor will be 5.
- If `k = 100`, meaning that 100 nearest neighbors are retrieved for each sample, then the divisor will be 11.
The bug in previous versions did not affect `mean_average_precision_at_r`.
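The corrected divisor can be sketched in a few lines. This is a minimal pure-Python illustration of average precision at k with the `min(k, num_relevant)` divisor, not the library's actual implementation (the function name and argument layout are made up for illustration):

```python
def average_precision_at_k(is_relevant, num_relevant):
    """is_relevant: relevance (True/False) of the k retrieved neighbors,
    in rank order. num_relevant: total samples sharing the query's label."""
    k = len(is_relevant)
    hits = 0
    precision_sum = 0.0
    for rank, rel in enumerate(is_relevant, start=1):
        if rel:
            hits += 1
            precision_sum += hits / rank  # precision at this rank
    # New behavior: divide by min(k, num_relevant) rather than by the
    # number of correctly retrieved samples.
    return precision_sum / min(k, num_relevant)
```

With `k = 5` and 11 relevant samples the divisor is 5; with `k = 100` it is 11, matching the examples above.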
Other minor changes
Added additional shape checks to `AccuracyCalculator.get_accuracy`.
v1.6.0
Features
`DistributedLossWrapper` and `DistributedMinerWrapper` now support `ref_emb` and `ref_labels`:
```python
from pytorch_metric_learning import losses
from pytorch_metric_learning.utils import distributed as pml_dist

loss_func = losses.ContrastiveLoss()
loss_func = pml_dist.DistributedLossWrapper(loss_func)

loss = loss_func(embeddings, labels, ref_emb=ref_emb, ref_labels=ref_labels)
```
v1.5.2
Bug fixes
In previous versions, when `embeddings_come_from_same_source == True`, the first nearest neighbor of each query embedding was discarded, on the assumption that it must be the query embedding itself. While this is usually true, it is not guaranteed: two different embeddings can be exactly equal to each other, and discarding the first nearest neighbor in that case can be incorrect. This release fixes the bug by excluding each embedding's own index from the k-nn results.
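The difference between the two strategies shows up in a toy example. Below is a pure-Python sketch with 1-D "embeddings" (variable names are illustrative, not the library's internals), where the reference set contains both the query itself and an exact duplicate of it:

```python
# Reference set containing the query itself (index 2) and an
# exact duplicate of it (index 0).
query = 0.5
ref = [0.5, 2.0, 0.5]   # ref[2] is the query itself, ref[0] is a duplicate
query_index = 2

dists = [abs(r - query) for r in ref]                    # [0.0, 1.5, 0.0]
order = sorted(range(len(ref)), key=lambda i: dists[i])  # stable: [0, 2, 1]

# Old behavior: drop the first neighbor, assuming it is the query itself.
# Here the duplicate (index 0) is dropped instead, and the query's own
# index 2 wrongly remains among the neighbors.
old_neighbors = order[1:]

# Fixed behavior: exclude the query's own index explicitly.
new_neighbors = [i for i in order if i != query_index]
```

Because ties in distance can put a duplicate ahead of the query, only the explicit index exclusion is always correct.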
Sort-of breaking changes
In order for the above bug fix to work, `AccuracyCalculator` now requires that `reference[:len(query)] == query` when `embeddings_come_from_same_source == True`. For example, the following will raise an error:
```python
query = torch.randn(100, 10)
ref = torch.randn(100, 10)
ref = torch.cat([ref, query], dim=0)
AC.get_accuracy(query, ref, labels1, labels2, True)
# ValueError
```
To fix this, move `query` to the beginning of `ref`:
```python
query = torch.randn(100, 10)
ref = torch.randn(100, 10)
ref = torch.cat([query, ref], dim=0)
AC.get_accuracy(query, ref, labels1, labels2, True)
```
Note that this change doesn't affect the case where `query is ref`.
v1.5.1
v1.5.0
Features
For some loss functions, labels are now optional if `indices_tuple` is provided:

```python
loss = loss_func(embeddings, indices_tuple=pairs)
```
The losses for which you can do this are:
- `CircleLoss`
- `ContrastiveLoss`
- `IntraPairVarianceLoss`
- `GeneralizedLiftedStructureLoss`
- `LiftedStructureLoss`
- `MarginLoss`
- `MultiSimilarityLoss`
- `NTXentLoss`
- `SignalToNoiseRatioContrastiveLoss`
- `SupConLoss`
- `TripletMarginLoss`
- `TupletMarginLoss`
This issue has come up several times.
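Labels become redundant here because the indices tuple already encodes which pairs are positive and which are negative. A rough pure-Python sketch of a pair-based contrastive loss driven only by an `(a1, p, a2, n)` indices tuple (the function, margin values, and distance helper are illustrative simplifications, not the library's internals):

```python
def contrastive_from_pairs(embeddings, indices_tuple,
                           pos_margin=0.0, neg_margin=1.0):
    """a1[i], p[i] index a positive pair; a2[j], n[j] index a negative pair."""
    a1, p, a2, n = indices_tuple

    def dist(i, j):  # Euclidean distance between two embeddings
        return sum((x - y) ** 2 for x, y in zip(embeddings[i], embeddings[j])) ** 0.5

    # Penalize positive pairs that are farther apart than pos_margin,
    # and negative pairs that are closer together than neg_margin.
    pos_loss = sum(max(dist(i, j) - pos_margin, 0.0) for i, j in zip(a1, p))
    neg_loss = sum(max(neg_margin - dist(i, j), 0.0) for i, j in zip(a2, n))
    total = len(a1) + len(a2)
    return (pos_loss + neg_loss) / total if total else 0.0
```

No label ever enters the computation; the pair structure alone determines the loss.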
v1.4.0
New features
- Added InstanceLoss. See #410 by @layumi