Read&Quote: “Label-Efficient Multi-Task Segmentation using Contrastive Learning”
Recent advances in convolutional neural networks (CNNs) have yielded state-of-the-art segmentation results for both 2D and 3D medical images, a significant step toward fully automated segmentation. However, this level of performance is only possible when a sufficient amount of labeled data is available. Furthermore, although annotations from medical experts are essential for training CNNs, obtaining them is both expensive and time-consuming. Thus, methods that make efficient use of small labeled datasets have been explored extensively.
Specifically, multi-task learning has been considered an efficient method for small datasets, since sharing parameters between the main segmentation task and a regularizing subtask can reduce the risk of overfitting.
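The parameter-sharing idea can be sketched as a shared encoder feeding two task heads. The toy sizes, weights, and NumPy forward pass below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(42)

# Shared encoder parameters (toy sizes): both tasks backpropagate into
# W_enc, which is the regularizing effect multi-task learning provides
# when labeled data is scarce.
W_enc = rng.normal(scale=0.1, size=(64, 32))   # input -> shared features
W_seg = rng.normal(scale=0.1, size=(32, 2))    # features -> per-pixel classes
W_aux = rng.normal(scale=0.1, size=(32, 8))    # features -> subtask embedding

def forward(x):
    h = np.maximum(x @ W_enc, 0.0)             # shared representation (ReLU)
    return h @ W_seg, h @ W_aux                # two task-specific heads

x = rng.normal(size=(5, 64))                   # 5 toy patches
seg_logits, aux_embed = forward(x)
print(seg_logits.shape, aux_embed.shape)       # (5, 2) (5, 8)
```

A combined objective would take the form L = L_seg + lambda * L_aux; only L_seg requires labels, so the auxiliary term can also be computed on unlabeled images.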
We propose a novel method for tumor segmentation that utilizes contrastive learning as a subtask for the main segmentation model. We experimentally show that our proposed method, combined with a semi-supervised approach that utilizes unlabeled data, can enhance segmentation performance when the amount of labeled data is small.
Self-supervised learning, where a representation is learned from unlabeled data by predicting missing parts of the input from the remaining parts, has become a promising approach to representation learning; it is useful for downstream tasks such as classification [9,7] and segmentation.
Recently, contrastive predictive coding (CPC) has been proposed as a self-supervised method that can outperform fully supervised methods on ImageNet classification in the small labeled data regime.
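At the core of CPC-style contrastive learning is an InfoNCE-type loss: the model must score a "positive" pair (two views of the same content) higher than "negative" pairs. A minimal NumPy sketch of that loss, with made-up vectors and a temperature value chosen only for illustration:

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor: -log softmax over cosine similarities,
    where the positive pair sits at index 0."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    sims = np.array([cos(anchor, positive)] +
                    [cos(anchor, n) for n in negatives]) / temperature
    sims -= sims.max()                      # numerical stability
    return -(sims[0] - np.log(np.exp(sims).sum()))

rng = np.random.default_rng(0)
anchor = rng.normal(size=16)
positive = anchor + 0.05 * rng.normal(size=16)    # nearby view of the anchor
negatives = [rng.normal(size=16) for _ in range(8)]

loss_good = info_nce(anchor, positive, negatives)       # aligned positive
loss_bad = info_nce(anchor, rng.normal(size=16), negatives)  # random "positive"
```

Because the aligned positive has cosine similarity near 1 while random negatives hover near 0, `loss_good` comes out well below `loss_bad`; minimizing this loss pulls representations of matching views together, which is the signal the segmentation subtask exploits without labels.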