Releases: KevinMusgrave/pytorch-metric-learning

v1.6.1

20 Sep 13:16
b2df1f3

Bug Fixes

Fixed a bug in mean_average_precision in AccuracyCalculator. Previously, the divisor for each sample was the number of correctly retrieved samples. In the new version, the divisor for each sample is min(k, num_relevant).

For example, if class "A" has 11 samples, then num_relevant is 11 for every sample with the label "A".

  • If k = 5, meaning that 5 nearest neighbors are retrieved for each sample, then the divisor will be 5.
  • If k = 100, meaning that 100 nearest neighbors are retrieved for each sample, then the divisor will be 11.

The bug in previous versions did not affect mean_average_precision_at_r.
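
As a sketch of the corrected per-sample computation (an illustration, not the library's actual code):

import torch

def average_precision_at_k(is_relevant, num_relevant):
    # is_relevant: boolean tensor over the k retrieved neighbors, in rank order
    k = len(is_relevant)
    precision_at_i = torch.cumsum(is_relevant.float(), dim=0) / torch.arange(1, k + 1)
    # sum precision at the ranks where a relevant sample was retrieved,
    # dividing by min(k, num_relevant) instead of the number of relevant retrieved
    return (precision_at_i * is_relevant).sum() / min(k, num_relevant)

With k = 5 and num_relevant = 11 the divisor is 5; with k = 100 it is 11, matching the example above.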

Other minor changes

Added additional shape checks to AccuracyCalculator.get_accuracy.

v1.6.0

03 Sep 19:54
10fd517

Features

DistributedLossWrapper and DistributedMinerWrapper now support ref_emb and ref_labels:

from pytorch_metric_learning import losses
from pytorch_metric_learning.utils import distributed as pml_dist

loss_func = losses.ContrastiveLoss()
loss_func = pml_dist.DistributedLossWrapper(loss_func)

# embeddings, labels, ref_emb, and ref_labels are tensors on this process;
# the wrapper handles gathering across distributed processes
loss = loss_func(embeddings, labels, ref_emb=ref_emb, ref_labels=ref_labels)

Thanks @NoTody for PR #503

v1.5.2

03 Aug 17:49

Bug fixes

In previous versions, when embeddings_come_from_same_source == True, the first nearest neighbor of each query embedding was discarded, on the assumption that it must be the query embedding itself.

This is usually true, but not always: two different embeddings can be exactly equal to each other, in which case discarding the first nearest neighbor is incorrect.

This release fixes this bug by excluding each embedding's index from the k-nn results.
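
As a minimal illustration of the idea behind the fix (not the library's implementation), excluding by index rather than by rank keeps a genuine duplicate neighbor:

import torch

embeddings = torch.tensor([[1.0, 0.0],
                           [1.0, 0.0],   # a different sample with an identical embedding
                           [0.0, 1.0]])
dist = torch.cdist(embeddings, embeddings)
dist.fill_diagonal_(float("inf"))  # exclude each embedding's own index, not rank 0
nearest = dist.argmin(dim=1)
# nearest[0] == 1: the exact duplicate is correctly kept as a neighbor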

Sort-of breaking changes

In order for the above bug fix to work, AccuracyCalculator now requires that reference[:len(query)] == query when embeddings_come_from_same_source == True. For example, the following will raise an error:

import torch
from pytorch_metric_learning.utils.accuracy_calculator import AccuracyCalculator

AC = AccuracyCalculator()
labels1, labels2 = torch.randint(0, 5, (100,)), torch.randint(0, 5, (200,))
query = torch.randn(100, 10)
ref = torch.randn(100, 10)
ref = torch.cat([ref, query], dim=0)  # query ends up at the end of ref
AC.get_accuracy(query, ref, labels1, labels2, True)
# ValueError

To fix this, move query to the beginning of ref:

query = torch.randn(100, 10)
ref = torch.randn(100, 10)
ref = torch.cat([query, ref], dim=0)  # query now occupies ref[:len(query)]
AC.get_accuracy(query, ref, labels1, labels2, True)  # no error

Note that this change doesn't affect the case where query is ref.

v1.5.1

16 Jul 19:39

Bug fixes

Bumped the record-keeper version to fix issue #497

v1.5.0

29 Jun 21:58

Features

For some loss functions, labels are now optional if indices_tuple is provided:

loss = loss_func(embeddings, indices_tuple=pairs)

The losses for which you can do this are:

  • CircleLoss
  • ContrastiveLoss
  • IntraPairVarianceLoss
  • GeneralizedLiftedStructureLoss
  • LiftedStructureLoss
  • MarginLoss
  • MultiSimilarityLoss
  • NTXentLoss
  • SignalToNoiseRatioContrastiveLoss
  • SupConLoss
  • TripletMarginLoss
  • TupletMarginLoss
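
For example, pairs mined once with labels can be reused without passing labels to the loss (a minimal sketch; the specific miner is illustrative):

import torch
from pytorch_metric_learning import losses, miners

embeddings = torch.randn(32, 128)
labels = torch.randint(0, 4, (32,))

# mine pairs using labels once...
miner = miners.MultiSimilarityMiner()
pairs = miner(embeddings, labels)

# ...then compute the loss from the pairs alone, with no labels argument
loss_func = losses.ContrastiveLoss()
loss = loss_func(embeddings, indices_tuple=pairs)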

This issue has come up several times:

#412
#490
#482
#473
#179
#263

v1.4.0

09 Jun 18:03
6f65823

New features

v1.3.2

29 May 15:26

Bug fixes

  • Fixed a bug in BatchEasyHardMiner where get_max_per_row was not always returning correct values, resulting in invalid pairs and triplets (#476)

v1.3.1

27 May 10:51

Bug fixes

  • Fixed ThresholdReducer being incompatible with older versions of PyTorch (#465)
  • Fixed VICRegLoss being incompatible with older versions of PyTorch, and missing a division by 2 (#467 and #470 by @cwkeam)

Other

  • Made CustomKNN more memory efficient by removing a torch.cat call.

v1.3.0

30 Mar 08:31

Features

  • Added a batch_size parameter to CustomKNN. This computes k-nn per batch of query embeddings (using BatchedDistance), which requires less memory than computing the entire distance matrix at once.
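
For example (a sketch; the distance choice and batch size are illustrative):

from pytorch_metric_learning.distances import LpDistance
from pytorch_metric_learning.utils.accuracy_calculator import AccuracyCalculator
from pytorch_metric_learning.utils.inference import CustomKNN

# k-nn is computed over batches of 512 query embeddings at a time,
# instead of materializing the full query-vs-reference distance matrix
knn_func = CustomKNN(LpDistance(), batch_size=512)
AC = AccuracyCalculator(knn_func=knn_func)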

Bug Fixes

v1.2.1

17 Mar 04:13
a142af3

Bug Fixes