A curated list of resources (papers, datasets, and relevant links) on spatial transformation for image composition, which aims to adjust the view/pose of an inserted foreground object in a composite image via a simple spatial transformation (e.g., a thin-plate-spline (TPS) or perspective transformation). For more comprehensive resources on general image composition (object insertion), please refer to Awesome-Image-Composition.
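To make the perspective case concrete, the sketch below (pure NumPy; the helper names are hypothetical, not from any listed paper) fits a homography from four point correspondences, warps a foreground patch with it, and pastes the result onto a background image — the kind of parametric warp these works learn to predict.

```python
import numpy as np

def fit_homography(src, dst):
    # Solve the 8-parameter perspective transform via DLT (fixing h33 = 1):
    # for each (x, y) -> (u, v), two linear equations in the 8 unknowns.
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_paste(bg, fg, H):
    # Inverse-map every background pixel through H; where it lands inside
    # the foreground, copy that foreground pixel (nearest-neighbor sampling).
    out = bg.copy()
    Hinv = np.linalg.inv(H)
    hb, wb = bg.shape[:2]
    hf, wf = fg.shape[:2]
    ys, xs = np.mgrid[0:hb, 0:wb]
    sx, sy, sw = Hinv @ np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    sx = np.round(sx / sw).astype(int)
    sy = np.round(sy / sw).astype(int)
    valid = (sx >= 0) & (sx < wf) & (sy >= 0) & (sy < hf)
    out.reshape(-1, bg.shape[2])[valid] = fg[sy[valid], sx[valid]]
    return out

# Toy composite: a white 20x20 patch placed as a tilted quadrilateral.
bg = np.zeros((64, 64, 3), np.uint8)
fg = np.full((20, 20, 3), 255, np.uint8)
src = [(0, 0), (19, 0), (19, 19), (0, 19)]          # fg corners
dst = [(20, 12), (45, 16), (42, 48), (18, 40)]      # target corners in bg
comp = warp_paste(bg, fg, fit_homography(src, dst))
```

A TPS warp replaces the single global 3x3 matrix with a smooth, control-point-driven deformation, but the composite step (warp the foreground, then paste by its mask) is the same.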
Contributions are welcome; feel free to send a pull request. If you have suggestions for new sections, please raise an issue to discuss before sending a pull request.
- Junhong Gou, Bo Zhang, Li Niu, Jianfu Zhang, Jianlou Si, Chen Qian, Liqing Zhang: "Virtual Accessory Try-On via Keypoint Hallucination." arXiv preprint arXiv:2310.17131 (2023) [arXiv]
- Bo Zhang, Yue Liu, Kaixin Lu, Li Niu, Liqing Zhang: "Spatial Transformation for Image Composition via Correspondence Learning." arXiv preprint arXiv:2207.02398 (2022) [arXiv]
- Fangneng Zhan, Hongyuan Zhu, Shijian Lu: "Spatial Fusion GAN for Image Synthesis." CVPR (2019) [pdf]
- Chen-Hsuan Lin, Ersin Yumer, Oliver Wang, Eli Shechtman, Simon Lucey: "ST-GAN: Spatial Transformer Generative Adversarial Networks for Image Compositing." CVPR (2018) [pdf] [code]