anurag-198/WDASS

Weakly-Supervised Domain Adaptive Semantic Segmentation with Prototypical Contrastive Learning

(Framework overview figure)

This is the official repository accompanying the CVPR paper:

Anurag Das, Yongqin Xian, Dengxin Dai, and Bernt Schiele. Weakly-Supervised Domain Adaptive Semantic Segmentation with Prototypical Contrastive Learning. CVPR 2023.

Paper | Video | Supplemental

Usage

For Conda:

Create a conda environment using the provided environment.yml file:

conda env create -f environment.yml
conda activate cu11

For Pip:

Create a virtual environment and install the dependencies from the provided requirements.txt file:

python3 -m venv cu11
source cu11/bin/activate
pip install -r requirements.txt

Experiment setup:

Preparing the data:

  1. The supported datasets are Cityscapes, GTA5, and SYNTHIA. Download the Cityscapes dataset from here. The dataset directory should have the following structure:
data
├── SYNTHIA/
├── GTA5/
└── cityscapes/
    ├── leftImg8bit_trainextra/
    │   └── leftImg8bit/
    │       └── train/
    │           ├── [subfolder_1]/
    │           │   └── [image_files]
    │           ├── [subfolder_2]/
    │           │   └── [image_files]
    │           └── ...
    ├── gtFine_trainextra/
    │   └── gtFine/
    │       └── train/
    │           ├── [subfolder_1]/
    │           │   └── [mask_files]
    │           ├── [subfolder_2]/
    │           │   └── [mask_files]
    │           └── ...
    ├── leftImg8bit_trainvaltest/
    │   └── leftImg8bit/
    │       ├── train/
    │       ├── val/
    │       └── train_extra/
    ├── gtFine_trainvaltest/
    │   └── gtFine/
    │       ├── train/
    │       ├── val/
    │       └── train_extra/
    └── gtCoarse/
        ├── train/
        ├── val/
        └── train_extra/
  2. Change __C.ASSETS_PATH in config.py to the path of the dataset directory.

  3. (Optional) To use point annotations (--weak_label point) during training, first generate the ground-truth point annotations with the following command:

python datasets/utils.py
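Point annotations are derived from the dense gtFine masks. Below is a minimal sketch of the idea, assuming the Cityscapes ignore index 255 and one randomly sampled pixel per class; the actual datasets/utils.py may differ in details:

```python
import numpy as np

# Cityscapes ignore index (assumption): pixels without a point label get it.
IGNORE = 255

def dense_to_points(mask, rng=None):
    """Keep one randomly chosen pixel per class from a dense label mask."""
    rng = np.random.default_rng() if rng is None else rng
    points = np.full_like(mask, IGNORE)
    for cls in np.unique(mask):
        if cls == IGNORE:
            continue
        ys, xs = np.nonzero(mask == cls)
        i = rng.integers(len(ys))  # pick a random pixel of this class
        points[ys[i], xs[i]] = cls
    return points
```

One labelled point per class per image is a common weak-supervision protocol; the repo's generator may sample points differently (e.g. near region centres).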

Training:

  1. Train a segmentation model on the synthetic data (GTA5) to obtain GTA5-pretrained weights.
  2. Use the GTA5-pretrained model to start training for adaptation with one of the commands below (coarse, point, or image weak labels):
python train.py --dataset cityscapes --result_dir logs/ --multiprocessing_distributed  --use_contrast  --bn_buffer --weak_label coarse --use_wl --imloss --improto --resume "pretrained gta weights"
python train.py --dataset cityscapes --result_dir logs/ --multiprocessing_distributed  --use_contrast  --bn_buffer --weak_label point --use_wl --imloss --improto --resume "pretrained gta weights"
python train.py --dataset cityscapes --result_dir logs/ --multiprocessing_distributed  --use_contrast  --bn_buffer --weak_label image --imloss --improto --resume "pretrained gta weights"
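The three commands differ only in the --weak_label value and in whether --use_wl is passed (it applies to coarse and point labels, not image labels). A convenience loop (a sketch, not part of this repo) that prints each command for a dry run:

```shell
#!/bin/sh
# Sketch only: emit the three weak-label training commands.
# Replace "pretrained gta weights" with the actual checkpoint path, and
# drop the `echo` to actually launch training.
train_cmd() {
    label="$1"
    extra=""
    if [ "$label" != "image" ]; then
        extra="--use_wl"
    fi
    echo python train.py --dataset cityscapes --result_dir logs/ \
        --multiprocessing_distributed --use_contrast --bn_buffer \
        --weak_label "$label" $extra --imloss --improto \
        --resume "pretrained gta weights"
}

for label in coarse point image; do
    train_cmd "$label"
done
```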

Validation:

You can use the following command:

python train.py --dataset cityscapes --eval val --n_scales 0.5,1,2 --arch deepv2.DeepV2R101 --result_dir logs/coarse --multiprocessing_distributed --snapshot 'path to checkpoint'
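--n_scales 0.5,1,2 runs inference at several image scales and fuses the predictions. A hedged numpy sketch of the general idea, assuming nearest-neighbour resizing and simple logit averaging (the repo's implementation may differ, e.g. bilinear resizing or flip augmentation):

```python
import numpy as np

def resize_nearest(arr, out_h, out_w):
    """Nearest-neighbour resize for an (H, W, ...) array."""
    h, w = arr.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return arr[rows][:, cols]

def multi_scale_predict(image, predict_fn, scales=(0.5, 1, 2)):
    """Run predict_fn at each scale, resize logits back, average, argmax."""
    h, w = image.shape[:2]
    acc = None
    for s in scales:
        scaled = resize_nearest(image, max(1, int(h * s)), max(1, int(w * s)))
        logits = resize_nearest(predict_fn(scaled), h, w)  # back to full size
        acc = logits if acc is None else acc + logits
    return np.argmax(acc / len(scales), axis=-1)
```

Averaging predictions across scales typically smooths boundary errors and improves mIoU over single-scale inference.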

Trained weights for validation

You can download trained weights for coarse, point, and image labels from here.

Citation

If you find our work useful, please consider citing our paper:

@InProceedings{Das_2023_CVPR,
    author    = {Das, Anurag and Xian, Yongqin and Dai, Dengxin and Schiele, Bernt},
    title     = {Weakly-Supervised Domain Adaptive Semantic Segmentation With Prototypical Contrastive Learning},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023},
    pages     = {15434-15443}
}
