MP2CDLO: Self-supervised Learning of Reconstructing Deformable Linear Objects under Single-Frame Occluded View
Song Wang, Guanghui Shen, Shiru Wu and Dan Wu
[abstract] Deformable linear objects (DLOs), such as ropes, cables, and rods, are common in various scenarios, and accurate occlusion reconstruction of them is crucial for effective robotic manipulation. Previous studies on DLO reconstruction either rely on supervised learning, which is limited by the availability of labeled real-world data, or on geometric approaches, which fail to capture global features and often struggle with occlusions and complex shapes. This paper presents a novel DLO occlusion reconstruction framework that integrates self-supervised point cloud completion with traditional techniques like clustering, sorting, and fitting to generate ordered key points. A memory module is proposed to enhance the self-supervised training process by consolidating prototype information, while DLO shape constraints are utilized to improve reconstruction accuracy. Experimental results on both synthetic and real-world datasets demonstrate that our method outperforms state-of-the-art algorithms, particularly in scenarios involving complex occlusions and intricate self-intersections.
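As a minimal illustration of the traditional post-processing steps mentioned in the abstract (clustering, sorting, and fitting to generate ordered key points), the sketch below recovers an ordered curve from an unordered 2D point set. It is not the repository's actual implementation; the function name, parameters, and greedy nearest-neighbor sorting are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.interpolate import splprep, splev

def ordered_keypoints(points, n_clusters=8, n_samples=50):
    """Cluster an unordered DLO point cloud, sort the cluster centers
    into a chain by greedy nearest-neighbor ordering, then fit a
    parametric B-spline through the chain and resample it evenly."""
    centers = KMeans(n_clusters=n_clusters, n_init=10).fit(points).cluster_centers_
    # Greedy sort: start from an arbitrary center, repeatedly append
    # the closest remaining one (a simple stand-in for proper sorting).
    remaining = list(range(len(centers)))
    order = [remaining.pop(0)]
    while remaining:
        last = centers[order[-1]]
        nxt = min(remaining, key=lambda i: np.linalg.norm(centers[i] - last))
        remaining.remove(nxt)
        order.append(nxt)
    chain = centers[order]
    # Fit a spline through the ordered centers and resample n_samples points.
    tck, _ = splprep(chain.T, s=0.0, k=min(3, len(chain) - 1))
    u = np.linspace(0.0, 1.0, n_samples)
    return np.stack(splev(u, tck), axis=-1)

# Example: noisy points sampled along a sine-shaped "rope".
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 2.0 * np.pi, 500)
pts = np.stack([t, np.sin(t)], axis=-1) + rng.normal(0.0, 0.02, (500, 2))
kps = ordered_keypoints(pts)  # (50, 2) ordered key points along the curve
```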
- 🔓 2025.02.08: The complete code has been released!
- 👏 2025.01.28: Great news! Our research has been accepted by ICRA 2025. Looking forward to seeing you all in Atlanta! Thanks to all coauthors and contributors. The complete code is coming soon!
- 🙌 2024.09.15: The first version of the code and dataset for MP2CDLO has been released. The complete code for this project will be made publicly available upon acceptance of the paper. For now, you can run our demo code to see the results.
- 🚀 2024.09.13: This work has been submitted to ICRA 2025 and is currently under review. For further information, please visit our project website: MP2CDLO.
This code was tested on Ubuntu 20.04.
- Python >= 3.7 (tested on 3.9)
- CUDA 11.7
- Pytorch >= 1.12
- open3d>=0.14.1
- transforms3d
- pytorch3d
- pyyaml
- opencv-python
- tensorboard
- tqdm
- timm==0.4.5
- scipy
- torch_kmeans
- geomdl
- bezier
- scikit-learn
- Create a conda environment and activate it.
conda create -n MP2CDLO python=3.9
conda activate MP2CDLO
- Install torch 1.13.0+cu117.
pip install torch==1.13.0+cu117 torchvision==0.14.0+cu117 torchaudio==0.13.0 --extra-index-url https://download.pytorch.org/whl/cu117
- Download the pytorch3d package and install it.
https://anaconda.org/pytorch3d/pytorch3d/files
conda install https://anaconda.org/pytorch3d/pytorch3d/0.7.5/download/linux-64/pytorch3d-0.7.5-py39_cu117_pyt1130.tar.bz2
- Install the other packages using requirements.
pip install -r requirements.txt
- Pytorch Chamfer Distance
- pointops_cuda
To build this, run the command below:
python setup.py install --user
Remember to delete any build files left over from previous configurations, and compile from scratch in your own environment.
- Rope dataset (generated by Isaac Sim Replicator) [Google Drive]
Please download the dataset to ./data/EPN3D_rope/. The layout should look like this:
├── cfgs
├── data [This is your dataroot]
│ ├── rope_dict.json
│ ├── EPN3D_rope
│ │ ├── EPN3D.json
│ │ ├── rope
│ │ │ ├── complete
│ │ │ │ ├── label_0000.npy
│ │ │ │ ├── ......
│ │ │ ├── partial
│ │ │ │ ├── pointcloud_0000.npy
│ │ │ │ ├── ......
To train our self-supervised DLO point cloud completion model from scratch, run:
python train.py --config ./cfgs/EPN3D_models/MP2CDLO.yaml --exp_name your_exp_name
We provide pre-trained model weights on real-world data. You can run the demo directly with the following command:
python ./demo/src/demo.py
This code stands on the shoulders of giants. We want to thank the following projects that our code is based on: Partial2Complete, Dloftbs and PoinTr.
We would like to extend our gratitude to Kangchen Lv, Zhaole Sun and Ruikai Cui for their invaluable guidance and contributions during the early stages of this project. Their expertise and assistance have been instrumental in driving the progress of this work.