Commit bee27da

Merge pull request #747 from KevinMusgrave/dev
v2.9.0
2 parents 216a792 + 4fdd80f commit bee27da

20 files changed: +375 −42 lines

CONTENTS.md

Lines changed: 1 addition & 0 deletions

@@ -46,6 +46,7 @@
 | [**ProxyNCALoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#proxyncaloss) | [No Fuss Distance Metric Learning using Proxies](https://arxiv.org/pdf/1703.07464.pdf)
 | [**RankedListLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#rankedlistloss) | [Ranked List Loss for Deep Metric Learning](https://arxiv.org/abs/1903.03238)
 | [**SignalToNoiseRatioContrastiveLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#signaltonoiseratiocontrastiveloss) | [Signal-to-Noise Ratio: A Robust Distance Metric for Deep Metric Learning](http://openaccess.thecvf.com/content_CVPR_2019/papers/Yuan_Signal-To-Noise_Ratio_A_Robust_Distance_Metric_for_Deep_Metric_Learning_CVPR_2019_paper.pdf)
+| [**SmoothAPLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#smoothaploss) | [Smooth-AP: Smoothing the Path Towards Large-Scale Image Retrieval](https://arxiv.org/abs/2007.12163)
 | [**SoftTripleLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#softtripleloss) | [SoftTriple Loss: Deep Metric Learning Without Triplet Sampling](http://openaccess.thecvf.com/content_ICCV_2019/papers/Qian_SoftTriple_Loss_Deep_Metric_Learning_Without_Triplet_Sampling_ICCV_2019_paper.pdf)
 | [**SphereFaceLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#spherefaceloss) | [SphereFace: Deep Hypersphere Embedding for Face Recognition](https://arxiv.org/pdf/1704.08063.pdf)
 | [**SubCenterArcFaceLoss**](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#subcenterarcfaceloss) | [Sub-center ArcFace: Boosting Face Recognition by Large-scale Noisy Web Faces](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123560715.pdf)

README.md

Lines changed: 8 additions & 5 deletions

@@ -18,6 +18,11 @@

 ## News

+**August 17**: v2.9.0
+- Added [SmoothAPLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#smoothaploss).
+- Improved SubCenterArcFaceLoss and GenericPairLoss.
+- Thank you [ir2718](https://github.com/ir2718), [lucamarini22](https://github.com/lucamarini22), and [marcpaga](https://github.com/marcpaga).
+
 **December 11**: v2.8.0
 - Added the [Datasets](https://kevinmusgrave.github.io/pytorch-metric-learning/datasets) module for easy downloading of common datasets:
   - [CUB200](https://kevinmusgrave.github.io/pytorch-metric-learning/datasets/#cub-200-2011)
@@ -26,10 +31,6 @@
   - [Stanford Online Products](https://kevinmusgrave.github.io/pytorch-metric-learning/datasets/#stanfordonlineproducts)
 - Thank you [ir2718](https://github.com/ir2718).

-**November 2**: v2.7.0
-- Added [ThresholdConsistentMarginLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#thresholdconsistentmarginloss).
-- Thank you [ir2718](https://github.com/ir2718).
-
 ## Documentation
 - [**View the documentation here**](https://kevinmusgrave.github.io/pytorch-metric-learning/)
 - [**View the installation instructions here**](https://github.com/KevinMusgrave/pytorch-metric-learning#installation)
@@ -231,7 +232,7 @@ Thanks to the contributors who made pull requests!
 |[domenicoMuscill0](https://github.com/domenicoMuscill0)| - [ManifoldLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#manifoldloss) <br/> - [P2SGradLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#p2sgradloss) <br/> - [HistogramLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#histogramloss) <br/> - [DynamicSoftMarginLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#dynamicsoftmarginloss) <br/> - [RankedListLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#rankedlistloss) |
 |[mlopezantequera](https://github.com/mlopezantequera) | - Made the [testers](https://kevinmusgrave.github.io/pytorch-metric-learning/testers) work on any combination of query and reference sets <br/> - Made [AccuracyCalculator](https://kevinmusgrave.github.io/pytorch-metric-learning/accuracy_calculation/) work with arbitrary label comparisons |
 |[cwkeam](https://github.com/cwkeam) | - [SelfSupervisedLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#selfsupervisedloss) <br/> - [VICRegLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#vicregloss) <br/> - Added mean reciprocal rank accuracy to [AccuracyCalculator](https://kevinmusgrave.github.io/pytorch-metric-learning/accuracy_calculation/) <br/> - BaseLossWrapper|
-| [ir2718](https://github.com/ir2718) | - [ThresholdConsistentMarginLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#thresholdconsistentmarginloss) <br/> - The [Datasets](https://kevinmusgrave.github.io/pytorch-metric-learning/datasets) module |
+| [ir2718](https://github.com/ir2718) | - [ThresholdConsistentMarginLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#thresholdconsistentmarginloss) <br/> - [SmoothAPLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#smoothaploss) <br/> - The [Datasets](https://kevinmusgrave.github.io/pytorch-metric-learning/datasets) module |
 |[marijnl](https://github.com/marijnl)| - [BatchEasyHardMiner](https://kevinmusgrave.github.io/pytorch-metric-learning/miners/#batcheasyhardminer) <br/> - [TwoStreamMetricLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/trainers/#twostreammetricloss) <br/> - [GlobalTwoStreamEmbeddingSpaceTester](https://kevinmusgrave.github.io/pytorch-metric-learning/testers/#globaltwostreamembeddingspacetester) <br/> - [Example using trainers.TwoStreamMetricLoss](https://github.com/KevinMusgrave/pytorch-metric-learning/blob/master/examples/notebooks/TwoStreamMetricLoss.ipynb) |
 | [chingisooinar](https://github.com/chingisooinar) | [SubCenterArcFaceLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#subcenterarcfaceloss) |
 | [elias-ramzi](https://github.com/elias-ramzi) | [HierarchicalSampler](https://kevinmusgrave.github.io/pytorch-metric-learning/samplers/#hierarchicalsampler) |
@@ -252,6 +253,8 @@ Thanks to the contributors who made pull requests!
 | [stompsjo](https://github.com/stompsjo) | Improved documentation for NTXentLoss. |
 | [Puzer](https://github.com/Puzer) | Bug fix for PNPLoss. |
 | [elisim](https://github.com/elisim) | Developer improvements to DistributedLossWrapper. |
+| [lucamarini22](https://github.com/lucamarini22) | |
+| [marcpaga](https://github.com/marcpaga) | |
 | [GaetanLepage](https://github.com/GaetanLepage) | |
 | [z1w](https://github.com/z1w) | |
 | [thinline72](https://github.com/thinline72) | |
3 image files added (60.2 KB, 19.7 KB, 28.7 KB)

docs/losses.md

Lines changed: 31 additions & 0 deletions

@@ -1087,6 +1087,37 @@
 * **pos_loss**: The loss per positive pair in the batch. Reduction type is ```"pos_pair"```.
 * **neg_loss**: The loss per negative pair in the batch. Reduction type is ```"neg_pair"```.

+## SmoothAPLoss
+[Smooth-AP: Smoothing the Path Towards Large-Scale Image Retrieval](https://arxiv.org/abs/2007.12163){target=_blank}
+
+```python
+losses.SmoothAPLoss(
+    temperature=0.01,
+    **kwargs
+)
+```
+
+**Equations**:
+
+![smooth_ap_loss_equation1](imgs/smooth_ap_sigmoid_equation.png){: style="height:100px"}
+![smooth_ap_loss_equation2](imgs/smooth_ap_approx_equation.png){: style="height:100px"}
+![smooth_ap_loss_equation3](imgs/smooth_ap_loss_equation.png){: style="height:100px"}
+
+**Parameters**:
+
+* **temperature**: The desired temperature for scaling the sigmoid function. This is denoted by $\tau$ in the first and second equations.
+
+**Other info**:
+
+* The loss requires the same number of elements for each class in the batch labels. For example, `[1, 1, 2, 2, 3, 3]` is valid, but `[1, 1, 1, 2, 2, 3, 3]` is invalid because the value `1` appears three times while the other values appear twice. Balanced batches can be produced with [`samplers.MPerClassSampler`](samplers.md/#mperclasssampler) by setting its `batch_size` and `m` hyperparameters.
+
+**Default distance**:
+
+- [```CosineSimilarity()```](distances.md#cosinesimilarity)
+- This is the only compatible distance.
 ## SoftTripleLoss
 [SoftTriple Loss: Deep Metric Learning Without Triplet Sampling](http://openaccess.thecvf.com/content_ICCV_2019/papers/Qian_SoftTriple_Loss_Deep_Metric_Learning_Without_Triplet_Sampling_ICCV_2019_paper.pdf){target=_blank}
 ```python
Lines changed: 1 addition & 1 deletion

@@ -1 +1 @@
-__version__ = "2.8.1"
+__version__ = "2.9.0"

src/pytorch_metric_learning/datasets/__init__.py

Lines changed: 1 addition & 1 deletion

@@ -2,4 +2,4 @@
 from .cars196 import Cars196
 from .cub import CUB
 from .inaturalist2018 import INaturalist2018
-from .sop import StanfordOnlineProducts
+from .sop import StanfordOnlineProducts

src/pytorch_metric_learning/distances/dot_product_similarity.py

Lines changed: 1 addition & 1 deletion

@@ -9,7 +9,7 @@ def __init__(self, **kwargs):
         assert self.is_inverted

     def compute_mat(self, query_emb, ref_emb):
-        return torch.matmul(query_emb, ref_emb.t())
+        return torch.matmul(query_emb, ref_emb.transpose(-1, -2))

     def pairwise_distance(self, query_emb, ref_emb):
         return torch.sum(query_emb * ref_emb, dim=1)
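The switch from `.t()` to `.transpose(-1, -2)` matters for batched inputs: `.t()` only works on 2-D tensors, while `.transpose(-1, -2)` swaps the last two dimensions of a tensor of any rank, and `torch.matmul` then broadcasts over the leading batch dimension. A standalone sketch of the difference (the shapes here are arbitrary illustrative choices):

```python
import torch

query_emb = torch.randn(4, 3, 8)  # batch of 4 sets of 3 embeddings, dim 8
ref_emb = torch.randn(4, 5, 8)    # batch of 4 sets of 5 embeddings, dim 8

# ref_emb.t() would raise a RuntimeError here, since .t() requires a 2-D tensor.
# transpose(-1, -2) handles both the 2-D and the batched 3-D case.
mat = torch.matmul(query_emb, ref_emb.transpose(-1, -2))
print(mat.shape)  # torch.Size([4, 3, 5])
```

For 2-D inputs the two versions are equivalent, so the change is backward-compatible while enabling batched similarity matrices.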

src/pytorch_metric_learning/losses/__init__.py

Lines changed: 1 addition & 0 deletions

@@ -30,6 +30,7 @@
 from .ranked_list_loss import RankedListLoss
 from .self_supervised_loss import SelfSupervisedLoss
 from .signal_to_noise_ratio_losses import SignalToNoiseRatioContrastiveLoss
+from .smooth_ap import SmoothAPLoss
 from .soft_triple_loss import SoftTripleLoss
 from .sphereface_loss import SphereFaceLoss
 from .subcenter_arcface_loss import SubCenterArcFaceLoss
