Authors: Muhammad Tanveer¹; Hung Khoon Tan¹; Hui Fuang Ng¹; Maylor Karhang Leung¹ and Joon Huang Chuah²
Affiliations: ¹Faculty of Information and Communication Technology, Universiti Tunku Abdul Rahman, Malaysia; ²Faculty of Engineering, Universiti Malaya, Malaysia
Keyword(s):
Batch Contrastive Loss, Batch Regularization, Center-level Contrastive Loss, Sample-level Contrastive Loss, Neural Network.
Abstract:
As neural networks become deeper, they become capable of generating more powerful representations for a wide variety of tasks. However, deep neural networks have a large number of parameters and easily overfit the training samples. In this paper, we present a new regularization technique, called batch contrastive regularization. Regularization is performed by comparing samples collectively via a contrastive loss, which encourages intra-class compactness and inter-class separability in an embedded Euclidean space. To facilitate learning of embedding features for the contrastive loss, a two-headed neural network architecture is used to decouple regularization from classification. During inference, the regularization head is discarded and the network operates like any conventional classification network. We also introduce bag sampling to ensure sufficient positive samples for the classes in each batch. The performance of the proposed architecture is evaluated on the CIFAR-10 and CIFAR-100 datasets.
Our experiments show that features regularized by contrastive loss have strong generalization performance, yielding over 8% improvement on ResNet50 for CIFAR-100 when trained from scratch.
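The abstract does not specify the exact loss formulation or head design, so the following is only a minimal PyTorch-style sketch of the general idea: a two-headed network whose regularization head is trained with a sample-level contrastive loss and discarded at inference. The names TwoHeadedNet and reg_head, the margin-based pairwise loss, and the loss weight 0.1 are illustrative assumptions, not the paper's formulation; the paper's center-level contrastive loss and bag sampling are not shown.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50


class TwoHeadedNet(nn.Module):
    """Backbone with a classification head and a contrastive regularization
    head; the regularization head is used only during training."""

    def __init__(self, num_classes=100, embed_dim=128):
        super().__init__()
        backbone = resnet50(weights=None)        # trained from scratch
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()              # strip the default classifier
        self.backbone = backbone
        self.cls_head = nn.Linear(feat_dim, num_classes)
        self.reg_head = nn.Linear(feat_dim, embed_dim)  # embedding for contrastive loss

    def forward(self, x):
        feats = self.backbone(x)
        logits = self.cls_head(feats)
        embeds = F.normalize(self.reg_head(feats), dim=1)
        return logits, embeds


def pairwise_contrastive_loss(embeds, labels, margin=1.0):
    """Sample-level contrastive loss over all pairs in the batch: pulls
    same-class embeddings together and pushes different-class embeddings
    at least `margin` apart in Euclidean space."""
    dists = torch.cdist(embeds, embeds, p=2)                   # (B, B) pairwise distances
    same = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    pos_loss = same * dists.pow(2)
    neg_loss = (1.0 - same) * F.relu(margin - dists).pow(2)
    mask = 1.0 - torch.eye(len(labels), device=embeds.device)  # drop self-pairs
    return ((pos_loss + neg_loss) * mask).sum() / mask.sum()


# Training step: classification loss plus contrastive regularization.
model = TwoHeadedNet(num_classes=100)
images = torch.randn(16, 3, 32, 32)              # a CIFAR-sized batch
labels = torch.randint(0, 100, (16,))
logits, embeds = model(images)
loss = F.cross_entropy(logits, labels) + 0.1 * pairwise_contrastive_loss(embeds, labels)

# Inference: only the backbone and classification head are kept.
with torch.no_grad():
    predictions = model.cls_head(model.backbone(images)).argmax(dim=1)
```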