Abstract
Deep learning models have achieved great success on various vision challenges, but a well-trained model often suffers drastic performance degradation when applied to unseen data. Because such models are sensitive to domain shift, unsupervised domain adaptation attempts to reduce the domain gap without the costly annotation of unseen domains. This paper proposes a novel framework for cross-modality segmentation via similarity-based prototypes. Specifically, we learn class-wise prototypes within an embedding space and introduce a similarity constraint to make these prototypes representative of each semantic class yet well separated across classes. Moreover, we use dictionaries to store prototypes extracted from different images, which alleviates the class-missing problem, enables contrastive learning of prototypes, and further improves performance. Extensive experiments show that our method outperforms other state-of-the-art methods.
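The sketch below illustrates, in PyTorch, how class-wise prototypes and a dictionary-based similarity constraint of the kind described above might be realized. It is a minimal illustration under stated assumptions, not the authors' released implementation: the masked-average-pooling prototype extraction, the dictionary layout (`feats`/`classes`), and the InfoNCE-style contrastive loss are assumptions chosen to make the idea concrete.

```python
# Minimal sketch (not the authors' code) of class-wise prototype extraction
# and a similarity-based contrastive constraint against a prototype dictionary.
# Assumes a feature map `feat` of shape (B, C, H, W) and per-pixel labels
# `label` of shape (B, H, W) over `num_classes` semantic classes.
import torch
import torch.nn.functional as F

def class_prototypes(feat, label, num_classes):
    """Masked average pooling: one prototype per class present in the batch."""
    B, C, H, W = feat.shape
    feat = feat.permute(0, 2, 3, 1).reshape(-1, C)   # (B*H*W, C)
    label = label.reshape(-1)                        # (B*H*W,)
    protos, classes = [], []
    for k in range(num_classes):
        mask = (label == k)
        if mask.any():                               # skip classes absent from this batch
            protos.append(feat[mask].mean(dim=0))
            classes.append(k)
    return torch.stack(protos), torch.tensor(classes)

def prototype_contrastive_loss(protos, classes, dictionary, temperature=0.1):
    """Pull each prototype toward same-class entries stored in the dictionary
    and push it away from other-class entries (InfoNCE-style)."""
    protos = F.normalize(protos, dim=1)
    bank = F.normalize(dictionary["feats"], dim=1)   # (N, C) stored prototypes
    bank_cls = dictionary["classes"]                 # (N,) their class indices
    logits = protos @ bank.t() / temperature         # (K_present, N) similarities
    loss = 0.0
    for i, k in enumerate(classes):
        pos = (bank_cls == k)
        if pos.any():
            log_prob = F.log_softmax(logits[i], dim=0)
            loss = loss - log_prob[pos].mean()
    return loss / max(len(classes), 1)
```

Storing prototypes from many past images in the dictionary is what lets every class contribute positives and negatives even when a given slice misses some classes, which is the intuition behind the class-missing remedy described in the abstract.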