NaviBridger is a novel framework for visual navigation built upon Denoising Diffusion Bridge Models (DDBMs). Unlike traditional diffusion policies that start from Gaussian noise, NaviBridger leverages prior actions (rule-based or learned) to guide the denoising process, accelerating convergence and improving trajectory accuracy.
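The core idea, starting the reverse process from an informative prior instead of pure Gaussian noise, can be sketched with a toy 1-D bridge. All names here are illustrative and not NaviBridger's actual API; the real denoiser is a learned network operating on action trajectories:

```python
import random

def bridge_sample(x_prior, x_target, t, sigma=0.1):
    """Toy diffusion-bridge interpolant: at time t in [0, 1] the state is a
    noisy blend of the prior endpoint (t=1) and the clean action (t=0)."""
    mean = (1.0 - t) * x_target + t * x_prior
    # Bridge noise vanishes at both endpoints (t=0 and t=1).
    return mean + sigma * (t * (1.0 - t)) ** 0.5 * random.gauss(0.0, 1.0)

def denoise(x_prior, predict_x0, steps=10):
    """Walk the bridge from the prior (t=1) toward the clean action (t=0),
    re-anchoring each step on the model's current clean-action estimate."""
    x = x_prior
    for i in range(steps, 0, -1):
        t_next = (i - 1) / steps
        x0_hat = predict_x0(x, i / steps)  # stand-in for the learned denoiser
        x = bridge_sample(x_prior, x0_hat, t_next, sigma=0.0)  # deterministic demo
    return x

# With a perfect "denoiser" the walk lands exactly on the target action.
result = denoise(x_prior=-1.0, predict_x0=lambda x, t: 2.0)
print(result)  # -> 2.0
```

A rule-based or learned prior puts `x_prior` close to the true action, so far fewer denoising steps are needed than when starting from Gaussian noise.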
- 🔧 DDBM-based policy generation from arbitrary priors
- 🔁 Unified framework supporting Gaussian, rule-based, and learning-based priors
- 🏃‍♂️ Real-world deployment support on mobile robots (e.g., Diablo + Jetson Orin AGX)
- Deployment code updates
- A refactored version of the code (in the coming weeks)
```
navibridge/
├── train/                        # Training code and dataset processing
│   ├── vint_train/               # NaviBridger models, configs, and datasets
│   ├── train.py                  # Training entry point
│   ├── process_*.py              # Data preprocessing scripts
│   └── train_environment.yml     # Conda setup for training
├── deployment/                   # Inference and deployment
│   ├── src/navibridger_inference.py
│   ├── config/params.yaml        # Inference config
│   ├── deployment_environment.yaml
│   └── model_weights/            # Place for .pth model weights
└── README.md                     # This file
```
Training environment:

```bash
conda env create -f train/train_environment.yml
conda activate navibridge_train
pip install -e train/
git clone git@github.com:real-stanford/diffusion_policy.git
pip install -e diffusion_policy/
```

Deployment environment:

```bash
conda env create -f deployment/deployment_environment.yaml
conda activate navibridge
pip install -e train/
pip install -e diffusion_policy/
```
- Download public datasets.
- Process datasets:

  ```bash
  python train/process_recon.py   # or process_bags.py
  python train/data_split.py --dataset <your_dataset_path>
  ```

- Expected format:
  ```
  dataset_name/
  ├── traj1/
  │   ├── 0.jpg ... T_1.jpg
  │   └── traj_data.pkl
  └── ...
  ```
After running `data_split.py`, you should have:
```
train/vint_train/data/data_splits/
└── <dataset_name>/
    ├── train/traj_names.txt
    └── test/traj_names.txt
```
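Before running `data_split.py`, it can help to sanity-check that each trajectory folder matches the expected layout (consecutive numbered frames plus a `traj_data.pkl`). A minimal sketch, not part of the repository; the function name is hypothetical:

```python
from pathlib import Path

def check_traj(traj_dir: str) -> bool:
    """Check one trajectory folder against the expected layout:
    frames named 0.jpg, 1.jpg, ... plus a traj_data.pkl file."""
    d = Path(traj_dir)
    if not (d / "traj_data.pkl").is_file():
        return False
    frames = sorted(int(p.stem) for p in d.glob("*.jpg") if p.stem.isdigit())
    # Frame indices must start at 0 and be consecutive.
    return bool(frames) and frames == list(range(len(frames)))
```

Run it over every `dataset_name/traj*/` folder and fix (or drop) any trajectory that fails before splitting.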
```bash
cd train/
python train.py -c config/navibridge.yaml   # select the training type by changing prior_policy
```
For the learning-based prior, train the CVAE first:

```bash
python train.py -c config/cvae.yaml
```
- Place your trained model and config in:

  ```
  deployment/model_weights/*.pth
  deployment/model_weights/*.yaml
  ```
- Adjust the model path in `deployment/config/models.yaml`.
- Prepare input images (minimum 4): `0.png`, `1.png`, etc. Adjust the input directory path in `deployment/config/params.yaml`.
- Run:
  ```bash
  python deployment/src/navibridger_inference.py --model navibridge_cvae   # model name matching a key in deployment/config/models.yaml
  ```
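Since inference expects at least 4 numbered frames, a small helper can select the most recent observation context from the input directory. This is an illustrative sketch, not the repository's code; the function name and context size of 4 are assumptions based on the "minimum 4" note above:

```python
from pathlib import Path

def latest_context(img_dir: str, context_size: int = 4):
    """Return the most recent `context_size` frame paths, assuming frames
    are named 0.png, 1.png, ... as in the input-directory step."""
    frames = sorted(
        (p for p in Path(img_dir).glob("*.png") if p.stem.isdigit()),
        key=lambda p: int(p.stem),  # numeric sort: 10.png after 9.png
    )
    if len(frames) < context_size:
        raise ValueError(f"need at least {context_size} frames, found {len(frames)}")
    return frames[-context_size:]
```

Numeric sorting matters here: a plain lexicographic sort would place `10.png` before `9.png` and feed the model frames out of order.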
Our deployment platform is listed below; feel free to substitute comparable hardware:
- NVIDIA Jetson Orin AGX
- Intel RealSense D435i
- Diablo wheeled-legged robot
📸 RGB-only input, no depth or LiDAR required.
```bibtex
@inproceedings{ren2025prior,
  title={Prior Does Matter: Visual Navigation via Denoising Diffusion Bridge Models},
  author={Ren, Hao and Zeng, Yiming and Bi, Zetong and Wan, Zhaoliang and Huang, Junlong and Cheng, Hui},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2025}
}
```
This codebase is released under the MIT License.
NaviBridger builds on the open-source contributions of DDBM, NoMaD, and BRIDGER. We thank the authors for sharing their outstanding work.