[CVPR 2025] Explaining Domain Shifts in Language: Concept erasing for Interpretable Image Classification
Authors:
Zequn Zeng,
Yudi Su,
Jianqiao Sun,
Tiansheng Wen,
Hao Zhang,
Zhengjue Wang,
Bo Chen,
Hongwei Liu,
Jiawei Ma
Official implementation of LanCE.
Concept-based models map black-box representations to human-understandable concepts, making the decision-making process more transparent and allowing users to understand the reasoning behind predictions. However, domain-specific concepts often influence the final predictions, which undermines the model's generalization capability and prevents it from being used in high-stakes applications. In this paper, we propose a novel Language-guided Concept-Erasing (LanCE) framework. In particular, we empirically demonstrate that pre-trained vision-language models (VLMs) can approximate distinct visual domain shifts via domain descriptors, while prompting large language models (LLMs) can easily generate a wide range of descriptors for unseen visual domains. We then introduce a novel plug-in domain descriptor orthogonality (DDO) regularizer to mitigate the impact of these domain-specific concepts on the final predictions. Notably, the DDO regularizer is agnostic to the design of concept-based models, and we integrate it into several prevailing models. Through evaluation of domain generalization on four standard benchmarks and three newly introduced benchmarks, we demonstrate that DDO significantly improves out-of-distribution (OOD) generalization over previous state-of-the-art concept-based models.
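As a rough illustration of the idea (not the exact formulation in the paper), the DDO regularizer can be thought of as penalizing any effect that simulated domain-shift directions, obtained from LLM-generated domain descriptors via the CLIP text encoder, have on the class prediction. The names below (`domain_shift_embeds`, `concept_embeds`, `class_weights`) are illustrative assumptions, not the repository's API:

```python
import torch
import torch.nn.functional as F

def ddo_regularizer(domain_shift_embeds: torch.Tensor,  # (D, d): text-embedding differences, e.g. "a painting of a bird" - "a photo of a bird"
                    concept_embeds: torch.Tensor,       # (K, d): CLIP text embeddings of the K concepts
                    class_weights: torch.Tensor) -> torch.Tensor:  # (C, K): concept-to-class weights
    """Sketch: domain-shift directions, mapped through the concept bottleneck
    and the class head, should contribute (close to) nothing to the logits."""
    shift_dirs = F.normalize(domain_shift_embeds, dim=-1)
    concepts = F.normalize(concept_embeds, dim=-1)
    shift_in_concept_space = shift_dirs @ concepts.T          # (D, K)
    shift_logits = shift_in_concept_space @ class_weights.T   # (D, C)
    return shift_logits.pow(2).mean()
```

Because the descriptors (sketch, clipart, sculpture, 3D render, ...) are generated by prompting an LLM, these directions can be simulated cheaply without any OOD images.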
If you think LanCE is useful, please cite these papers!
@article{zeng2025explaining,
title={Explaining Domain Shifts in Language: Concept erasing for Interpretable Image Classification},
author={Zeng, Zequn and Su, Yudi and Sun, Jianqiao and Wen, Tiansheng and Zhang, Hao and Wang, Zhengjue and Chen, Bo and Liu, Hongwei and Ma, Jiawei},
journal={arXiv preprint arXiv:2503.18483},
year={2025}
}
@inproceedings{zeng2023conzic,
title={ConZIC: Controllable zero-shot image captioning by sampling-based polishing},
author={Zeng, Zequn and Zhang, Hao and Lu, Ruiying and Wang, Dongsheng and Chen, Bo and Wang, Zhengjue},
booktitle={Proceedings of the IEEE/CVF conference on computer vision and pattern recognition},
pages={23465--23476},
year={2023}
}
Prepare the Python environment:
pip install -r requirements.txt
In this paper, we introduce three new datasets: AwA2-clipart, LADV-3D, and LADA-Sculpture. We also conduct experiments on classic datasets such as CUB-Painting. The datasets can be downloaded via the links below. Please download the corresponding datasets and put them into ./data .
Datasets | Download link | Style |
---|---|---|
CUB | link | photo |
CUB-Painting | link | painting |
AwA2 | link | photo |
AwA2-clipart | link | clipart |
LADA | link | real |
LADA-Sculpture | link | sculpture |
LADV | link | real |
LADV-3D | link | 3D model |
The data structure is as follows:
data
└── CUB
    ├── CUB_200_2011
    │   ├── images
    │   ├── ...
    │   └── ...
    ├── CUB-200-Painting
    │   ├── images
    │   ├── ...
    │   └── ...
    └── ...
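To sanity-check the layout, here is a minimal loader sketch assuming the directory names above; `load_image_folder` and the example path are illustrative, and the actual data loading code lives in the repository:

```python
from pathlib import Path
from torchvision import datasets, transforms

DATA_ROOT = Path("data")

def load_image_folder(relative_dir: str, image_size: int = 224):
    # e.g. relative_dir = "CUB/CUB_200_2011/images", following the layout above.
    tfm = transforms.Compose([
        transforms.Resize((image_size, image_size)),
        transforms.ToTensor(),
    ])
    return datasets.ImageFolder(str(DATA_ROOT / relative_dir), transform=tfm)
```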
Train a CLIP CBM:
python main.py --dataset CUB --alpha 0 --class_avg_concept --CBM_type clip_cbm --wandb
Train a CLIP CBM + DDO loss:
python main.py --dataset CUB --alpha 1 --class_avg_concept --CBM_type clip_cbm --wandb
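Our reading of the flags above: `--alpha` weights the DDO regularizer in the training objective, so `--alpha 0` trains a plain CBM and `--alpha 1` adds the orthogonality term. A hedged sketch of how such an objective is typically combined (the exact weighting lives in `main.py`):

```python
import torch.nn.functional as F

def training_loss(logits, labels, ddo_term, alpha: float):
    # alpha = 0 recovers the plain CBM objective; alpha > 0 adds the DDO regularizer.
    return F.cross_entropy(logits, labels) + alpha * ddo_term
```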
For CLIP zero-shot image classification:
python main_zeroshot.py --dataset CUB --class_avg_concept --prompt_type origin --wandb
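For reference, the standard CLIP zero-shot recipe that `main_zeroshot.py` builds on looks roughly like the sketch below; the prompt template, class names, and image path are illustrative, and the script's actual prompts depend on `--prompt_type`:

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

class_names = ["Black footed Albatross", "Laysan Albatross"]  # illustrative CUB classes
texts = clip.tokenize([f"a photo of a {c}, a type of bird." for c in class_names]).to(device)
image = preprocess(Image.open("data/CUB/CUB_200_2011/images/example.jpg")).unsqueeze(0).to(device)  # hypothetical path

with torch.no_grad():
    image_feat = model.encode_image(image)
    text_feat = model.encode_text(texts)
    image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_feat @ text_feat.T).softmax(dim=-1)

print(class_names[probs.argmax().item()])
```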
This code depends heavily on ConZIC, LADS, and LaBO.
Thanks for their good work.