Understanding Collapse in Non-Contrastive Siamese Representation Learning

Alexander C. Li
CMU
Alexei A. Efros
UC Berkeley
Deepak Pathak
CMU
ECCV 2022
[Paper]
[GitHub Code]


Abstract

Contrastive methods have led to a recent surge in the performance of self-supervised representation learning (SSL). Recent methods like BYOL or SimSiam purportedly distill these contrastive methods down to their essence, removing bells and whistles, including the negative examples, that do not contribute to downstream performance. These "non-contrastive" methods work surprisingly well without using negatives even though the global minimum lies at trivial collapse. We empirically analyze these non-contrastive methods and find that SimSiam is extraordinarily sensitive to dataset and model size. In particular, SimSiam representations undergo partial dimensional collapse if the model is too small relative to the dataset size. We propose a metric to measure the degree of this collapse and show that it can be used to forecast the downstream task performance without any fine-tuning or labels. We further analyze architectural design choices and their effect on the downstream performance. Finally, we demonstrate that shifting to a continual learning setting acts as a regularizer and prevents collapse, and that a hybrid between continual and multi-epoch training can improve linear probe accuracy by as much as 18 percentage points using ResNet-18 on ImageNet.
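As a rough illustration of how dimensional collapse can be quantified, below is a minimal PyTorch sketch of one plausible metric: the fraction of total variance captured by the top-k eigenvalues of the feature covariance. The function name `collapse_metric`, the choice of k, and the exact formulation are illustrative assumptions; see the paper for the metric actually proposed.

import torch

def collapse_metric(features: torch.Tensor, k: int = 128) -> float:
    """Fraction of total variance explained by the top-k principal
    directions of the centered feature covariance. Values near 1.0
    at small k suggest partial dimensional collapse.

    features: (N, D) matrix of representations from the backbone.
    NOTE: illustrative proxy only, not necessarily the paper's exact metric.
    """
    z = features - features.mean(dim=0, keepdim=True)  # center the features
    cov = z.T @ z / (z.shape[0] - 1)                    # (D, D) covariance
    eigvals = torch.linalg.eigvalsh(cov)                # ascending eigenvalues
    eigvals = eigvals.flip(0).clamp(min=0)              # descending, nonnegative
    return (eigvals[:k].sum() / eigvals.sum()).item()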


Source Code

We have released our PyTorch code on the GitHub page. Try it out!
[GitHub]


Paper and Bibtex

Citation
 
Alexander C. Li, Alexei A. Efros, Deepak Pathak. Understanding Collapse in Non-Contrastive Siamese Representation Learning. ECCV 2022.

[Paper] [ArXiv] [Bibtex]
@inproceedings{SimSiamCollapse,
    title={Understanding Collapse in Non-Contrastive
           Siamese Representation Learning},
    author={Li, Alexander Cong and Efros, Alexei A. and Pathak, Deepak},
    booktitle={European Conference on Computer Vision (ECCV)},
    year={2022}
}


Acknowledgements

This work was supported in part by the NSF GRFP (grants DGE1745016 and DGE2140739), NSF IIS-2024594, and ONR N00014-22-1-2096.