Contrastive self-supervised learning techniques are a promising class of methods that build representations by learning to encode what makes two things similar or different. They learn representations by maximizing agreement between differently augmented views of the same data sample via a contrastive loss in the latent space.
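As a concrete illustration of this objective, here is a minimal sketch of an NT-Xent-style contrastive loss in PyTorch, which treats two augmented views of the same sample as a positive pair and all other samples in the batch as negatives. The function name and temperature value are illustrative choices, not taken from the text above.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive loss over two augmented views z1, z2 (each [N, D]):
    maximize agreement between views of the same sample in latent space."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # [2N, D] unit vectors
    sim = z @ z.t() / temperature                       # cosine-similarity logits
    sim.fill_diagonal_(float('-inf'))                   # exclude self-similarity
    # the positive for sample i is its other augmented view at index i+n (or i-n)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)
```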

VIP Content

In this paper, we propose Parametric Contrastive Learning (PaCo) to tackle long-tailed recognition. Through theoretical analysis, we find that the supervised contrastive loss tends to be biased toward high-frequency classes, which increases the difficulty of imbalanced learning. We introduce a set of parametric, class-wise learnable centers to rebalance the loss from an optimization perspective. Further, we analyze the PaCo loss under a balanced setting. Our analysis demonstrates that PaCo can adaptively enhance the intensity of pushing samples of the same class close together as more samples are pulled toward their corresponding centers, which benefits hard-example learning. Experiments on long-tailed CIFAR, ImageNet, Places, and iNaturalist 2018 show new state-of-the-art results for long-tailed recognition. On full ImageNet, models trained with the PaCo loss surpass supervised contrastive learning across various ResNet backbones. Our code is available at https://github.com/jiequancui/Parametric-Contrastive-Learning.
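To make the core idea of parametric class-wise learnable centers concrete, the sketch below adds a set of learnable center vectors that samples are pulled toward. This is a deliberately simplified reading of the abstract, not the paper's exact loss: the class name, the temperature, and the reduction of the loss to a center-only cross-entropy term are all assumptions; the full PaCo loss additionally contrasts samples against other samples in the batch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParametricCenters(nn.Module):
    """Learnable class-wise centers added to a contrastive objective
    (a simplified sketch of the rebalancing idea, not the exact PaCo loss)."""
    def __init__(self, num_classes, dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, dim))

    def forward(self, feats, labels, temperature=0.1):
        feats = F.normalize(feats, dim=1)            # [N, D] sample features
        centers = F.normalize(self.centers, dim=1)   # [C, D] learnable centers
        logits = feats @ centers.t() / temperature   # [N, C] similarities
        # pull each sample toward its own class center; being learnable,
        # the centers can rebalance the optimization across classes
        return F.cross_entropy(logits, labels)
```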


Trending Content

This paper presents SimCLR: a simple framework for contrastive learning of visual representations. We simplify recently proposed contrastive self-supervised learning algorithms without requiring specialized architectures or a memory bank. In order to understand what enables the contrastive prediction tasks to learn useful representations, we systematically study the major components of our framework. We show that (1) composition of data augmentations plays a critical role in defining effective predictive tasks, (2) introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and (3) contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning. By combining these findings, we are able to considerably outperform previous methods for self-supervised and semi-supervised learning on ImageNet. A linear classifier trained on self-supervised representations learned by SimCLR achieves 76.5% top-1 accuracy, which is a 7% relative improvement over previous state-of-the-art, matching the performance of a supervised ResNet-50. When fine-tuned on only 1% of the labels, we achieve 85.8% top-5 accuracy, outperforming AlexNet with 100X fewer labels.
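Finding (2), the learnable nonlinear transformation between the representation and the contrastive loss, is a small MLP projection head: the loss is applied to z = g(h) while the representation h is kept for downstream tasks. A minimal sketch follows, assuming a ResNet-50-style encoder; the class name is illustrative, and the 2048→2048→128 dimensions match the common SimCLR configuration.

```python
import torch.nn as nn

class ProjectionHead(nn.Module):
    """SimCLR-style projection g(.): the contrastive loss sees z = g(h),
    while the representation h feeds downstream evaluation."""
    def __init__(self, in_dim=2048, hidden_dim=2048, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, h):
        return self.net(h)
```

At evaluation time the head is discarded and a linear classifier is trained directly on h, which is how the 76.5% top-1 linear-probe number above is measured.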


Latest Papers

Outlier detection is one of the most important processes for creating good, reliable data in machine learning. Most outlier detection methods leverage an auxiliary reconstruction task, assuming that outliers are more difficult to reconstruct than normal samples (inliers). However, this assumption does not always hold, especially for auto-encoder (AE) based models: they may reconstruct certain outliers even when no outliers appear in the training data, because they do not constrain the feature learning. Instead, we argue that outlier detection can be done in the feature space by measuring the feature distance between outliers and inliers. We then propose a framework, MCOD, that uses a memory module and a contrastive learning module. The memory module constrains the consistency of the features that represent the normal data. The contrastive learning module learns more discriminative features, which boosts the distinction between outliers and inliers. Extensive experiments on four benchmark datasets show that our proposed MCOD achieves considerable performance and outperforms nine state-of-the-art methods.
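A minimal sketch of the feature-space scoring idea follows: score each test sample by its distance to the nearest memory item summarizing normal features. The memory contents, the cosine-distance choice, and any decision threshold are assumptions for illustration, not the paper's exact MCOD design.

```python
import torch
import torch.nn.functional as F

def outlier_scores(feats, memory):
    """Score samples by feature distance to the nearest memory item.

    feats:  [N, D] test features from the trained encoder
    memory: [M, D] memory slots summarizing normal (inlier) features
    Returns one score per sample; larger means more outlier-like.
    """
    feats = F.normalize(feats, dim=1)
    memory = F.normalize(memory, dim=1)
    sim = feats @ memory.t()               # cosine similarities [N, M]
    # inliers should match some memory slot closely, so their best
    # similarity is high and their score (a cosine distance) is low
    return 1.0 - sim.max(dim=1).values
```

In use, one would flag samples whose score exceeds a threshold chosen on validation data, e.g. `pred_outlier = scores > tau` for some hypothetical cutoff `tau`.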
