This repository is an implementation of TIMotion (CVPR 2025). If you find this project helpful, please cite our paper (BibTeX below). Thank you!
conda create --name timotion
conda activate timotion
pip install -r requirements.txt
Download the data from the webpage and put it into ./data/.
<DATA-DIR>
./annots              // Natural language annotations; each file consists of three sentences.
./motions             // Raw motion data standardized in SMPL format, similar to AMASS.
./motions_processed   // Processed motion data: joint positions and rotations (6D representation) for the 22-joint SMPL kinematic structure.
./split               // Train/val/test split.
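The "6D representation" mentioned above is the continuous rotation parameterization of Zhou et al. (the first two columns of a 3x3 rotation matrix, recovered via Gram-Schmidt). A minimal sketch of the round trip, assuming a column-stacked layout (the repo's exact flattening convention may differ):

```python
import numpy as np

def matrix_to_6d(R):
    """Keep the first two columns of a 3x3 rotation matrix, flattened to (6,)."""
    return R[:, :2].reshape(-1, order="F")  # column-major: col0 then col1

def sixd_to_matrix(d6):
    """Recover a rotation matrix from 6D via Gram-Schmidt orthonormalization."""
    a1, a2 = d6[:3], d6[3:]
    b1 = a1 / np.linalg.norm(a1)          # normalize first column
    a2 = a2 - np.dot(b1, a2) * b1         # remove component along b1
    b2 = a2 / np.linalg.norm(a2)
    b3 = np.cross(b1, b2)                 # third column from the cross product
    return np.stack([b1, b2, b3], axis=1)

# Round trip on a sample rotation (90 degrees about z).
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
assert np.allclose(sixd_to_matrix(matrix_to_6d(R)), R)
```

The representation is continuous (unlike Euler angles or quaternions), which is why it is a common choice for regressing rotations with neural networks.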
Prepare the evaluation model
bash prepare/download_evaluation_model.sh
Download the checkpoints of TIMotion and TIMotion w/o LPA.
Modify the config files ./configs/model.yaml, ./configs/datasets.yaml, and ./configs/train.yaml, then run:
python tools/train.py --LPA
Alternatively, you can train TIMotion without LPA:
python tools/train.py --epoch 1500
Modify the config files ./configs/model.yaml and ./configs/datasets.yaml, then run:
python tools/eval.py --LPA --pth ${CHECKPOINT}
If you find our work useful in your research, please consider citing:
@inproceedings{wang2025timotion,
title={TIMotion: Temporal and Interactive Framework for Efficient Human-Human Motion Generation},
author={Wang, Yabiao and Wang, Shuo and Zhang, Jiangning and Fan, Ke and Wu, Jiafu and Xue, Zhucun and Liu, Yong},
booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
pages={7169--7178},
year={2025}
}