Aggregating In-Distribution Data into Positive Examples for Safe Semi-Supervised Contrastive Learning

Semi-supervised learning methods suffer from performance degradation when the class distributions of the labeled and unlabeled data differ. Although previous studies have tackled this problem by removing the mismatched data, they may lose the basic information that all data share regardless of class. To address this, we propose to apply self-supervised learning to leverage the whole of the unlabeled data. We also propose a loss function that uses in-distribution data as positive examples. We evaluate our method on image classification datasets under various mismatch ratios. The results show that our method produces good representations and improves classification accuracy.
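To make the loss concrete, one plausible reading (a sketch under our own assumptions, not necessarily the paper's exact formulation) is a supervised-contrastive-style objective in which the positive set of each anchor is enlarged with unlabeled examples judged to be in-distribution:

\[
\mathcal{L}_i = -\frac{1}{|P(i)|} \sum_{p \in P(i)} \log \frac{\exp(z_i \cdot z_p / \tau)}{\sum_{a \in A(i)} \exp(z_i \cdot z_a / \tau)},
\]

where $z_i$ is the normalized embedding of anchor $i$, $A(i)$ is the set of all other examples in the batch, $\tau$ is a temperature, and $P(i)$ contains the labeled examples sharing the anchor's class together with the unlabeled examples identified as in-distribution. The symbols $z$, $P(i)$, $A(i)$, and $\tau$ are our illustrative notation, not taken from the source.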