
Contrastive Clustering



In this paper, we propose a one-stage online clustering method called Contrastive Clustering (CC) which explicitly performs instance- and cluster-level contrastive learning. To be specific, for a given dataset, positive and negative instance pairs are constructed through data augmentations and then projected into a feature space. Therein, instance- and cluster-level contrastive learning are conducted in the row and column space, respectively, by maximizing the similarities of positive pairs while minimizing those of negative ones. Our key observation is that the rows of the feature matrix could be regarded as soft labels of instances, and accordingly the columns could be further regarded as cluster representations. By simultaneously optimizing the instance- and cluster-level contrastive losses, the model jointly learns representations and cluster assignments in an end-to-end manner. Extensive experimental results show that CC remarkably outperforms 17 competitive clustering methods on six challenging image benchmarks. In particular, CC achieves an NMI of 0.705 (0.431) on the CIFAR-10 (CIFAR-100) dataset, an up-to-19% (39%) performance improvement over the best baseline.
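The two losses described above can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: it assumes a SimCLR-style NT-Xent formulation for both levels, and the function names, temperatures, and the cluster-entropy regularizer (commonly used to prevent all samples collapsing into one cluster) are illustrative choices. `z_a`/`z_b` are the instance-head projections of two augmented views (rows = instances), and `y_a`/`y_b` are the cluster-head soft assignments (columns = clusters).

```python
import numpy as np

def nt_xent(a, b, tau=0.5):
    """NT-Xent contrastive loss over paired rows of a and b (each n x d).
    Row i of `a` and row i of `b` form a positive pair; every other row
    in the concatenated batch serves as a negative."""
    n = a.shape[0]
    x = np.concatenate([a, b], axis=0)                     # (2n, d)
    x = x / np.linalg.norm(x, axis=1, keepdims=True)       # cosine similarity
    sim = x @ x.T / tau                                    # (2n, 2n) scaled sims
    np.fill_diagonal(sim, -np.inf)                         # exclude self-pairs
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # positive index per row
    logits = sim - sim.max(axis=1, keepdims=True)          # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

def contrastive_clustering_loss(z_a, z_b, y_a, y_b, tau_i=0.5, tau_c=1.0):
    """Instance-level loss over rows plus cluster-level loss over columns,
    minus a cluster-entropy term that discourages degenerate assignments."""
    instance_loss = nt_xent(z_a, z_b, tau_i)
    cluster_loss = nt_xent(y_a.T, y_b.T, tau_c)            # columns = cluster reps
    p = y_a.mean(axis=0)                                   # empirical cluster sizes
    entropy = -(p * np.log(p + 1e-12)).sum()
    return instance_loss + cluster_loss - entropy
```

Note how the cluster-level loss reuses the same NT-Xent by transposing the soft-assignment matrix, which is exactly the row/column duality the abstract describes.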

