*Demo video: MACVO_Office.mp4*
> [!NOTE]
> We plan to release a TensorRT-accelerated implementation and adapt more matching networks for MAC-VO. If you are interested, please star ⭐ this repo to stay tuned.
> [!NOTE]
> We provide documentation for extending MAC-VO or using this repository as a boilerplate for your own learning-based Visual Odometry.
- [Apr 2025] Our work is a finalist for the ICRA 2025 Best Paper Award (top 1%)! Keep an eye on our presentation on May 20, 16:35-16:40, Room 302. We also plan to provide a real-world demo at the conference.
- [Mar 2025] We boosted the performance of MAC-VO with a new backend optimizer; MAC-VO now also supports dense mapping without any additional computation.
- [Jan 2025] Our work is accepted to the IEEE International Conference on Robotics and Automation (ICRA) 2025. We will present at ICRA 2025 in Atlanta, Georgia, USA.
- [Nov 2024] We released the ROS-2 integration at https://github.com/MAC-VO/MAC-VO-ROS2 along with the documentation at https://mac-vo.github.io/wiki/ROS/
Clone the repository using the following command to include all submodules automatically.
```bash
git clone -b dev/fixgit https://github.com/MAC-VO/MAC-VO.git --recursive
```
- **Docker Image**

  ```bash
  $ docker build --network=host -t macvo:latest -f Docker/Dockerfile .
  ```
- **Virtual Environment**

  You can set up the dependencies in your native system. The MAC-VO codebase requires Python 3.10+; see `requirements.txt` for the environment requirements.

  **How to adapt the MAC-VO codebase to Python < 3.10?** The Python version requirement is mostly due to the `match` syntax and the type annotations we use. The `match` syntax can easily be replaced with `if ... elif ... else` (see the sketch below), while the type annotations can simply be removed, as they do not affect runtime behavior.
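For example, a `match`-based dispatch (a hypothetical snippet, not taken from the codebase) can be rewritten as follows:

```python
# Python 3.10+ style used in the codebase (illustrative example)
def describe(status: int) -> str:
    match status:
        case 200:
            return "OK"
        case 404:
            return "Not Found"
        case _:
            return "Unknown"

# Equivalent for Python < 3.10: replace `match` with if/elif/else
# and drop the type annotations (they do not affect runtime behavior).
def describe_legacy(status):
    if status == 200:
        return "OK"
    elif status == 404:
        return "Not Found"
    else:
        return "Unknown"
```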
All pretrained models for MAC-VO, stereo TartanVO, and DPVO are available on our release page. Please create a new folder `Model` in the root directory and put the pretrained models in it.
```bash
$ mkdir Model
$ wget -O Model/MACVO_FrontendCov.pth https://github.com/MAC-VO/MAC-VO/releases/download/model/MACVO_FrontendCov.pth
$ wget -O Model/MACVO_posenet.pkl https://github.com/MAC-VO/MAC-VO/releases/download/model/MACVO_posenet.pkl
```
Test MAC-VO immediately using the provided demo sequence, which is selected from the TartanAir v2 dataset.
- Download a demo sequence through Google Drive.
- Download the pre-trained frontend model and posenet.
To run the Docker container:

```bash
$ docker run --gpus all -it --rm -v [DATA_PATH]:/data -v [CODE_PATH]:/home/macvo/workspace macvo:latest
```
To run the Docker container with visualization:

```bash
$ xhost +local:docker; docker run --gpus all -it --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v [DATA_PATH]:/data -v [CODE_PATH]:/home/macvo/workspace macvo:latest
```
We will use `Config/Experiment/MACVO/MACVO_example.yaml` as the configuration file for MAC-VO.
- Change the `root` in the data config file `Config/Sequence/TartanAir_example.yaml` to reflect the actual path to the downloaded demo sequence (see the config sketch below).
- Run with the following commands:

  ```bash
  $ cd workspace
  $ python3 MACVO.py --odom Config/Experiment/MACVO/MACVO_example.yaml --data Config/Sequence/TartanAir_example.yaml
  ```
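For reference, the relevant part of the data config might look like the sketch below. Only the `root` key is named in this README; check the shipped `Config/Sequence/TartanAir_example.yaml` for the exact schema.

```yaml
# Sketch of Config/Sequence/TartanAir_example.yaml; the example path is illustrative.
root: /data/TartanAir_demo   # <- point this at the downloaded demo sequence
```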
> [!NOTE]
> See `python MACVO.py --help` for more flags and configurations.
Every run produces a `Sandbox` (or `Space`). A `Sandbox` is a storage unit that contains all the results and meta-information of an experiment. The evaluation and plotting scripts usually require one or more sandbox paths.
Calculate the absolute translation error (ATE, m), relative translation error (RTE, m/frame), relative orientation error (ROE, deg/frame), and relative pose error (per frame, on se(3)).
```bash
$ python -m Evaluation.EvalSeq --spaces SPACE_0, [SPACE, ...]
```
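For intuition, here is a minimal sketch of how ATE and RTE can be computed from aligned trajectories (positions as numpy arrays of shape (N, 3)); the actual `Evaluation.EvalSeq` implementation may differ, e.g., in trajectory alignment and rotation handling:

```python
import numpy as np

def ate_rmse(gt_xyz: np.ndarray, est_xyz: np.ndarray) -> float:
    """Absolute translation error (m): RMSE over aligned (N, 3) trajectories."""
    err = np.linalg.norm(gt_xyz - est_xyz, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

def rte_per_frame(gt_xyz: np.ndarray, est_xyz: np.ndarray) -> float:
    """Relative translation error (m/frame): mean error between per-frame displacements."""
    d_gt = np.diff(gt_xyz, axis=0)    # ground-truth frame-to-frame motion
    d_est = np.diff(est_xyz, axis=0)  # estimated frame-to-frame motion
    return float(np.mean(np.linalg.norm(d_gt - d_est, axis=1)))
```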
Plot sequences, translation, translation error, rotation and rotation error.
```bash
$ python -m Evaluation.PlotSeq --spaces SPACE_0, [SPACE, ...]
```
- **Run MAC-VO (Our Method) on a Single Sequence**

  ```bash
  $ python MACVO.py --odom ./Config/Experiment/MACVO/MACVO.yaml --data ./Config/Sequence/TartanAir_abandonfac_001.yaml
  ```
- **Run MAC-VO for Ablation Studies**

  ```bash
  $ python MACVO.py --odom ./Config/Experiment/MACVO/Ablation_Study/[CHOOSE_ONE_CFG].yaml --data ./Config/Sequence/TartanAir_abandonfac_001.yaml --useRR
  ```
- **Run MAC-VO on Test Dataset**

  ```bash
  $ python -m Scripts.Experiment.Experiment_MACVO --odom [PATH_TO_ODOM_CONFIG]
  ```
- **Run MAC-VO Mapping Mode**

  Mapping mode only reprojects pixels to 3D space and does not optimize the pose. To run the mapping mode, first run a trajectory through the original mode (MAC-VO), then pass the resulting pose file to the mapping mode by setting `motion > args > pose_file` in the config file (see the sketch below).

  ```bash
  $ python MACVO.py --odom ./Config/Experiment/MACVO/MACVO_MappingMode.yaml --data ./Config/Sequence/TartanAir_abandonfac_001.yaml
  ```
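The corresponding config entry might look like the sketch below; only the `motion > args > pose_file` key path is named in this README, and the file name is illustrative.

```yaml
# Sketch of the relevant part of MACVO_MappingMode.yaml (pose file path is illustrative)
motion:
  args:
    pose_file: /path/to/previous_macvo_sandbox/poses.txt  # produced by a prior MAC-VO run
```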
We use the Rerun visualizer to visualize the 3D space, including camera poses, point clouds, and trajectories.
- **Create Rerun Recording for Runs**

  ```bash
  $ python -m Scripts.AdHoc.DemoCompare --macvo_space [MACVO_RESULT_PATH] --other_spaces [RESULT_PATH, ...] --other_types [{DROID-SLAM, DPVO, TartanVO}, ...]
  ```
- **Create Rerun Visualization for Map**

  Create a `tensor_map_vis.rrd` file in each sandbox that stores the visualization of the 3D point cloud map.

  ```bash
  $ python -m Scripts.AdHoc.DemoCompare --spaces [RESULT_PATH, ...] --recursive
  ```
- **Create Rerun Visualization for a Single Run** (the eye-catcher figure for our paper)

  ```bash
  $ python -m Scripts.AdHoc.DemoSequence --space [RESULT_PATH] --data [DATA_CONFIG_PATH]
  ```
We also integrated two baseline methods (DPVO, TartanVO Stereo) into the codebase for evaluation, visualization and comparison.
- **Run DPVO on Test Dataset**

  ```bash
  $ python -m Scripts.Experiment.Experiment_DPVO --odom ./Config/Experiment/Baseline/DPVO/DPVO.yaml
  ```
- **Run TartanVO (Stereo) on Test Dataset**

  ```bash
  $ python -m Scripts.Experiment.Experiment_TartanVO --odom ./Config/Experiment/Baseline/TartanVO/TartanVOStereo.yaml
  ```
- **PyTorch Tensor Data** - All images are stored in `BxCxHxW` format following the PyTorch convention; the batch dimension is always the first dimension of the tensor.
- **Pixels on Camera Plane** - All pixel coordinates are stored in `uv` format following the OpenCV convention, where the uv directions are "east-down". Note that this requires indexing PyTorch tensors as `data[..., v, u]`.
- **World Coordinate** - `NED` convention: `+x -> North`, `+y -> East`, `+z -> Down`, with the first frame being the world origin with an identity SE(3) pose (a sketch tying these conventions together follows below).
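A minimal sketch of these conventions (shapes and values are illustrative):

```python
import torch

# BxCxHxW: a batch of 2 RGB images of size 480x640; batch dim comes first.
images = torch.zeros(2, 3, 480, 640)

# OpenCV-style uv coordinates: u points east (columns), v points down (rows),
# so tensors are indexed as data[..., v, u].
u, v = 100, 50
pixel = images[0, :, v, u]  # shape: (3,)

# NED world frame: +x north, +y east, +z down; the first camera frame is the
# world origin with identity SE(3) pose, so a point 1 m above it has z = -1.
point_above_origin = torch.tensor([0.0, 0.0, -1.0])
```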
This codebase is designed with modularization in mind, so it is easy to modify, replace, and re-configure the modules of MAC-VO. One can easily use or replace the provided modules (e.g., flow estimator, depth estimator, keypoint selector) to create a new visual odometry system, as in the hypothetical sketch below.
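As a purely hypothetical illustration (the class and method names below are assumptions for this sketch, not the actual MAC-VO interfaces; see the documentation site for the real ones), a drop-in keypoint selector might look like this:

```python
from abc import ABC, abstractmethod
import torch

class IKeypointSelector(ABC):
    """Hypothetical module interface; the real MAC-VO interface may differ."""
    @abstractmethod
    def select(self, image: torch.Tensor, num_points: int) -> torch.Tensor:
        """Return keypoints as an (N, 2) tensor of (u, v) pixel coordinates."""

class RandomKeypointSelector(IKeypointSelector):
    """Toy replacement module: samples uniformly random pixel locations."""
    def select(self, image: torch.Tensor, num_points: int) -> torch.Tensor:
        _, _, h, w = image.shape          # BxCxHxW, per the convention above
        u = torch.randint(0, w, (num_points, 1))
        v = torch.randint(0, h, (num_points, 1))
        return torch.cat([u, v], dim=1)   # (N, 2) in (u, v) order
```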
We welcome everyone to extend and redevelop MAC-VO. For documentation, please visit the Documentation Site.