Real-Time Spatial Reasoning by Mobile Robots for Reconstruction and Navigation in Dynamic LiDAR Scenes
This is the official repository for the paper "Real-Time Spatial Reasoning by Mobile Robots for Reconstruction and Navigation in Dynamic LiDAR Scenes".
Pengdi Huang, Mingyang Wang, Huan Tian, Minglun Gong, Hao (Richard) Zhang, Hui Huang
VCC, CSSE, Shenzhen University
Our accompanying videos are available on YouTube (click the images below to open them).
We provide data for seven scenarios, which you can download from Google Drive.
Our project is based on ROS Noetic. Please install ROS Noetic on Ubuntu 20.04 first, following the official ROS installation guide.
(1) Install Ceres Solver; refer to the official Ceres Solver documentation (we recommend version 1.14.0).
▶️ Install Ceres Solver with bash

```bash
# Get Ceres Solver 1.14.0
git clone --branch ceres-solver-1.14.0 --single-branch https://github.com/LogicT5/Tools.git ceres-solver-1.14.0

# Install dependencies
sudo apt-get install cmake                             # CMake
sudo apt-get install libgoogle-glog-dev libgflags-dev  # google-glog + gflags
sudo apt-get install libatlas-base-dev                 # ATLAS for BLAS & LAPACK
sudo apt-get install libeigen3-dev                     # Eigen3
sudo apt-get install libsuitesparse-dev                # SuiteSparse (optional)

# Extract the release tarball shipped in the cloned repository
cd ceres-solver-1.14.0
tar -zxvf ceres-solver-1.14.0.tar.gz

# Build out-of-source and install, as in the official Ceres docs
mkdir ceres-bin && cd ceres-bin
cmake ../ceres-solver-1.14.0
make -j4
make test
sudo make install
```
(2) Install PCL; refer to the official PCL documentation.

▶️ Install PCL with bash

```bash
sudo apt-get install libpcl-dev
```
(3) Install CGAL; refer to the official CGAL documentation.

▶️ Install CGAL with bash

```bash
sudo apt-get install libcgal-dev
```
(4) Install Embree; refer to the official Intel Embree documentation.
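▶️ Install Embree with bash (a sketch: the package name `libembree-dev` is our assumption for Ubuntu 20.04; if your distribution does not ship it, build Embree from source per the official Intel Embree instructions)

```bash
# Install the Embree development headers and libraries from the distribution packages
sudo apt-get install libembree-dev
```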
First, initialize the workspace and clone the repository:

```bash
mkdir -p ~/catkin_ws && cd ~/catkin_ws
git clone https://github.com/SZU-VCC/RTRecon.git src  # clone the repository and rename it to src
```
Second, the system requires a SLAM method to register the point clouds, such as A-LOAM (download A-LOAM into ~/catkin_ws/src).
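As a sketch, A-LOAM can be fetched directly into the workspace; the URL below points at the upstream HKUST-Aerial-Robotics/A-LOAM repository, which is our assumption for where you obtain it:

```bash
# Clone A-LOAM next to the RTRecon sources so catkin builds both
cd ~/catkin_ws/src
git clone https://github.com/HKUST-Aerial-Robotics/A-LOAM.git
```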
Finally, build the project in the workspace (~/catkin_ws):

```bash
catkin_make -DCMAKE_BUILD_TYPE=Release
source ~/catkin_ws/devel/setup.bash
```
Start the following nodes in sequence. The `simple_frame` node reconstructs a single-frame scan; the `hash_fusion` node marks free space from a single frame, fuses the line-of-sight (LoS) distance field across multiple frames, and detects and removes moving objects; the `fusion_recon` node performs multi-frame reconstruction.
- Start the SLAM node, e.g. A-LOAM:

```bash
roslaunch aloam_velodyne aloam_velodyne.launch
```
- Start the `simple_frame` node:

```bash
roslaunch simple_frame reconstruction.launch
```

- Start the `hash_fusion` node:

```bash
roslaunch hash_fusion hash_fusion.launch
```

- Start the `fusion_recon` node:

```bash
roslaunch fusion_recon fusion_recon.launch
```

- Start playing a bag:

```bash
rosbag play /path/to/YOURBAG.bag
```
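When replaying recorded data, it can help to publish simulated time and slow the bag down so the SLAM node keeps up. The flags below are standard `rosbag play` options; the 0.5 rate is only an illustrative value:

```bash
# Let nodes follow bag time instead of wall time
rosparam set use_sim_time true
# Publish /clock and play at half speed
rosbag play --clock -r 0.5 /path/to/YOURBAG.bag
```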
❗ Tips
If your system fails to operate as expected, please proceed with the following diagnostic steps:
- Inspect the TF tree and verify that the world frame is set to either `/odom` or `/map`. You can use the following commands to unify the frames:

```bash
rosrun tf static_transform_publisher 0 0 0 0 0 0 /camera_init /odom 10
rosrun tf static_transform_publisher 0 0 0 0 0 0 /map /odom 10
```
- Ensure that the topic names specified in `reconstruction.launch` match those published by the SLAM node.
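Both checks can be done with standard ROS introspection tools; the topic name below is only an example, and `rqt_tf_tree` is assumed to be installed:

```bash
rostopic list                    # list all published topics
rostopic info /velodyne_points   # check publishers/subscribers of a topic (example name)
rosrun rqt_tf_tree rqt_tf_tree   # visualize the live TF tree
```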
If you find our work useful for your research, please consider citing the following papers:
```bibtex
@misc{huang2025realtimespatialreasoningmobile,
  title={Real-Time Spatial Reasoning by Mobile Robots for Reconstruction and Navigation in Dynamic LiDAR Scenes},
  author={Pengdi Huang and Mingyang Wang and Huan Tian and Minglun Gong and Hao Zhang and Hui Huang},
  year={2025},
  eprint={2505.12267},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2505.12267},
}
```
This repo is currently maintained by Huan Tian and is for academic research use only. Discussions and questions are welcome via huantian55@gmail.com.