- 2024.07.4 FSD-BEV is accepted by ECCV 2024. The paper is available here.
| Config | mAP | NDS | Baidu | |
|---|---|---|---|---|
| FSD-BEV-R50-CBGS | 40.3 | 52.6 | link | link |
| FSD-BEV-R101-CBGS | 48.8 | 58.9 | link | link |
a. Create a conda virtual environment and activate it.

```shell
conda create -n fsdbev python=3.8 -y
conda activate fsdbev
```
b. Install PyTorch and torchvision following the official instructions.

```shell
pip install torch==1.10.0+cu111 torchvision==0.11.0+cu111 torchaudio==0.10.0 -f https://download.pytorch.org/whl/torch_stable.html
```
c. Install the remaining dependencies and FSD-BEV itself (built on mmdet3d).

```shell
pip install mmcv-full==1.5.3
pip install mmdet==2.25.1
pip install mmsegmentation==0.25.0
pip install -e .
```
2. Prepare the nuScenes dataset as described in nuscenes_det.md, then create the pkl files for FSD-BEV by running:

```shell
python tools/create_data_bevdet.py
```
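The generated pkl holds one info record per sample, which the dataloader unpickles at training time. The sketch below round-trips a toy record through pickle the way the pipeline would load it; the field names and values are illustrative assumptions, not the verified schema of `create_data_bevdet.py`.

```python
import io
import pickle

# Hypothetical info record; field names are illustrative assumptions only.
info = {
    "token": "sample_token_0",
    "lidar_path": "data/nuscenes/samples/LIDAR_TOP/example.bin",  # illustrative path
    "cams": {
        "CAM_FRONT": {
            "data_path": "data/nuscenes/samples/CAM_FRONT/example.jpg",
            "cam_intrinsic": [[1266.0, 0.0, 816.0],
                              [0.0, 1266.0, 491.0],
                              [0.0, 0.0, 1.0]],
        },
    },
    "gt_boxes": [[0.0, 0.0, 0.0, 1.8, 4.5, 1.6, 0.0]],  # x, y, z, w, l, h, yaw
    "gt_names": ["car"],
}

# Round-trip through pickle, as the training pipeline would load the file.
buf = io.BytesIO()
pickle.dump({"infos": [info]}, buf)
buf.seek(0)
loaded = pickle.load(buf)
assert loaded["infos"][0]["gt_names"] == ["car"]
```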
3. Download nuScenes-lidarseg from the official nuScenes site and put it under data/nuscenes/. Then generate the depth data after Frame Combination processing by running:

```shell
python tools/generate_depth_multi.py
```
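The core of this depth-generation step is projecting (combined) lidar points into each camera image to obtain sparse per-pixel depth. Below is a minimal sketch of that projection under a simple pinhole model; the function name, toy intrinsics, and points are assumptions for illustration, not the script's actual implementation.

```python
import numpy as np

def lidar_to_depth_map(points_cam, intrinsic, h, w):
    """Project 3D points already in camera coordinates (x right, y down,
    z forward) into a sparse depth map, keeping the nearest depth per pixel."""
    z = points_cam[:, 2]
    pts = points_cam[z > 1e-3]            # keep points in front of the camera
    uv = (intrinsic @ pts.T).T            # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    u, v, d = u[inside], v[inside], pts[inside, 2]
    depth = np.full((h, w), np.inf)
    # when several points hit the same pixel, the nearest one wins
    np.minimum.at(depth, (v, u), d)
    depth[np.isinf(depth)] = 0.0          # 0 marks pixels with no lidar return
    return depth

# toy example: two points on the same camera ray, one behind the other
K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 24.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 5.0],
                [0.0, 0.0, 10.0]])
d = lidar_to_depth_map(pts, K, 48, 64)
print(d[24, 32])  # 5.0 — the nearer depth is kept
```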
```shell
bash tools/dist_train.sh configs/fsdbev/fsdbev-r50-cbgs.py 8
```
```shell
bash tools/dist_test.sh configs/fsdbev/fsdbev-r50-cbgs.py $CHECKPOINT 8 --eval bbox
```
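The `--eval bbox` evaluation reports the nuScenes detection metrics (mAP and NDS in the table above), where predictions are matched to ground truth by BEV center distance rather than IoU. A minimal sketch of that greedy matching at the 2 m threshold, with made-up toy boxes (the function name and data are assumptions, not the official devkit API):

```python
import numpy as np

def match_by_center_distance(pred_centers, gt_centers, thresh=2.0):
    """Greedy nuScenes-style matching: each prediction (assumed sorted by
    descending score) claims the nearest unmatched GT within `thresh` meters."""
    matched = np.zeros(len(gt_centers), dtype=bool)
    tp = 0
    for p in pred_centers:
        dist = np.linalg.norm(gt_centers - p, axis=1)
        dist[matched] = np.inf             # each GT can be matched once
        j = int(np.argmin(dist))
        if dist[j] <= thresh:
            matched[j] = True
            tp += 1
    return tp

gt = np.array([[0.0, 0.0], [10.0, 0.0]])          # two GT boxes (BEV centers)
pred = np.array([[0.5, 0.0],                       # near gt[0]  -> TP
                 [30.0, 0.0],                      # far from both -> FP
                 [10.2, 0.1]])                     # near gt[1]  -> TP
tp = match_by_center_distance(pred, gt)
print(tp, "of", len(pred), "predictions are TPs at 2 m")  # 2 of 3
```

The real metric repeats this at 0.5/1/2/4 m thresholds and averages AP over them.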
This project would not be possible without multiple great open-source code bases. We list some notable examples below.
If FSD-BEV is helpful for your research, please consider citing the following BibTeX entry.
```bibtex
@inproceedings{jiang2025fsd,
  title={FSD-BEV: Foreground Self-Distillation for Multi-view 3D Object Detection},
  author={Jiang, Zheng and Zhang, Jinqing and Zhang, Yanan and Liu, Qingjie and Hu, Zhenghui and Wang, Baohui and Wang, Yunhong},
  booktitle={European Conference on Computer Vision},
  pages={110--126},
  year={2025},
  organization={Springer}
}
```