Abstract
Source-free Unsupervised Domain Adaptation (SUDA) approaches inherently exhibit catastrophic forgetting. Typically, models trained on a labeled source domain and adapted to unlabeled target data improve performance on the target while losing performance on the source, which is not available during adaptation. In this study, our goal is to cope with the challenging problem of SUDA in a continual learning setting, i.e., adapting to the target(s) with varying distributional shifts while maintaining performance on the source. The proposed framework consists of two main stages: i) a SUDA model that yields cleaner target pseudo-labels, favoring good performance on the target, and ii) a novel method for synthesizing class-conditioned source-style images by leveraging only the source model and the pseudo-labeled target data as a prior. An extensive set of experiments on major benchmarks, e.g., PACS, VisDA-C, and DomainNet, demonstrates that the proposed Continual SUDA (C-SUDA) framework preserves satisfactory performance on the source domain without exploiting the source data at all.
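To give a rough feel for stage (ii), the sketch below (in PyTorch) shows a DeepInversion-style procedure for class-conditioned image synthesis from a frozen source classifier alone: random noise is optimized toward a chosen class while matching the BatchNorm statistics stored in the source model. This is a minimal illustration under stated assumptions, not the paper's actual method; the backbone (resnet18), class id 207, loss weights, and the omission of the pseudo-labeled target prior are all illustrative choices.

# Minimal, hypothetical sketch: DeepInversion-style, class-conditioned image
# synthesis from a frozen source classifier only. The paper's actual losses,
# weights, and use of pseudo-labeled target data as a prior may differ.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18


def attach_bn_hooks(model, losses):
    """Collect, per forward pass, the distance between current batch statistics
    and the running mean/var stored in each BatchNorm layer of the source model."""
    def make_hook(bn):
        def hook(module, inputs, output):
            x = inputs[0]
            mean = x.mean(dim=(0, 2, 3))
            var = x.var(dim=(0, 2, 3), unbiased=False)
            losses.append(F.mse_loss(mean, bn.running_mean) +
                          F.mse_loss(var, bn.running_var))
        return hook

    return [m.register_forward_hook(make_hook(m))
            for m in model.modules() if isinstance(m, torch.nn.BatchNorm2d)]


def synthesize(source_model, target_class, steps=2000, batch=16, lr=0.05):
    """Optimize random noise into images the frozen source model classifies as
    `target_class` while matching its BatchNorm statistics ("source style")."""
    source_model.eval()
    images = torch.randn(batch, 3, 224, 224, requires_grad=True)
    labels = torch.full((batch,), target_class, dtype=torch.long)
    opt = torch.optim.Adam([images], lr=lr)
    for _ in range(steps):
        bn_losses = []
        handles = attach_bn_hooks(source_model, bn_losses)
        logits = source_model(images)
        loss = (F.cross_entropy(logits, labels)            # class-conditioning term
                + 0.01 * torch.stack(bn_losses).sum()      # BN feature ("style") prior
                + 1e-4 * images.pow(2).mean())             # mild image regularizer
        opt.zero_grad()
        loss.backward()
        opt.step()
        for h in handles:
            h.remove()
    return images.detach()


# Usage (hypothetical): synthesize a batch of class-207 images from an ImageNet model.
fake_batch = synthesize(resnet18(weights="IMAGENET1K_V1"), target_class=207)

Such synthesized source-style batches can then be replayed alongside target data to counter forgetting, which is the role stage (ii) plays in the C-SUDA framework.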
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Ahmed, W., Morerio, P., Murino, V. (2023). Continual Source-Free Unsupervised Domain Adaptation. In: Foresti, G.L., Fusiello, A., Hancock, E. (eds) Image Analysis and Processing – ICIAP 2023. ICIAP 2023. Lecture Notes in Computer Science, vol 14233. Springer, Cham. https://doi.org/10.1007/978-3-031-43148-7_2
DOI: https://doi.org/10.1007/978-3-031-43148-7_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-43147-0
Online ISBN: 978-3-031-43148-7
eBook Packages: Computer Science, Computer Science (R0)