Jingkai Wang, Jue Gong, Lin Zhang, Zheng Chen, Xing Liu, Hong Gu, Yutong Liu, Yulun Zhang, and Xiaokang Yang, "One-Step Diffusion Model for Face Restoration", 2024
- 2025-04-23: Results are released. (Synthetic dataset: CelebA-Test; Real-world datasets: Wider-Test, LFW-Test, and WebPhoto-Test)
- 2025-02-27: Congratulations! OSDFace is accepted to CVPR 2025.
- 2024-11-25: This repo is released.
Abstract: Diffusion models have demonstrated impressive performance in face restoration. Yet, their multi-step inference process remains computationally intensive, limiting their applicability in real-world scenarios. Moreover, existing methods often struggle to generate face images that are harmonious, realistic, and consistent with the subject’s identity. In this work, we propose OSDFace, a novel one-step diffusion model for face restoration. Specifically, we propose a visual representation embedder (VRE) to better capture prior information and understand the input face. In VRE, low-quality faces are processed by a visual tokenizer and subsequently embedded with a vector-quantized dictionary to generate visual prompts. Additionally, we incorporate a facial identity loss derived from face recognition to further ensure identity consistency. We further employ a generative adversarial network (GAN) as a guidance model to encourage distribution alignment between the restored face and the ground truth. Experimental results demonstrate that OSDFace surpasses current state-of-the-art (SOTA) methods in both visual quality and quantitative metrics, generating high-fidelity, natural face images with high identity consistency.
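To make the VRE's dictionary step concrete, here is a minimal sketch of a vector-quantized lookup: each continuous feature from a tokenizer is replaced by its nearest codebook entry. The dimensions and data below are toy values for illustration only, not the actual VRE tokenizer or codebook from the paper:

```python
import numpy as np

def vq_lookup(features: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Replace each feature vector with its nearest codebook entry.

    features: (N, d) continuous features from a visual tokenizer.
    codebook: (K, d) learned vector-quantized dictionary.
    Returns (N, d) quantized features, used as visual prompts.
    """
    # Squared Euclidean distance between every feature and every code.
    dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    nearest = dists.argmin(axis=1)  # (N,) index of the closest code
    return codebook[nearest]

# Toy example: 2 features quantized against a 3-entry dictionary.
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
features = np.array([[0.1, 0.2], [1.9, 2.1]])
prompts = vq_lookup(features, codebook)  # each row snaps to a codebook row
```

The real model learns the codebook jointly with the tokenizer; this snippet only shows the nearest-neighbor quantization itself.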
- Release code and pretrained models
- Datasets
- Models
- Testing
- Training
- Results
- Citation
- Acknowledgements
Training Dataset:
We use FFHQ and FFHQR for training. Both datasets are resized to 512×512.
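For reference, the 512×512 resizing can be done with a short script like the one below. This is only a sketch using Pillow; the directory names are placeholders, not the repo's actual layout or preprocessing pipeline:

```python
from pathlib import Path
from PIL import Image

def resize_dataset(src_dir: str, dst_dir: str, size: int = 512) -> None:
    """Resize every PNG in src_dir to size x size and save it to dst_dir."""
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for img_path in sorted(Path(src_dir).glob("*.png")):
        img = Image.open(img_path).convert("RGB")
        img = img.resize((size, size), Image.LANCZOS)
        img.save(out / img_path.name)

# Example with placeholder paths:
# resize_dataset("datasets/FFHQ", "datasets/FFHQ_512")
```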
Testing Dataset:
We use CelebA-Test as the synthetic dataset; the download link is available from DAEFR.
We use Wider-Test, LFW-Test, and WebPhoto-Test as the real-world datasets; the download links are available from DAEFR and VQFR.
Many thanks to these projects for their contributions to face restoration.
We achieve state-of-the-art performance on both synthetic and real-world datasets. Detailed results can be found in the paper, and the full results can be downloaded here: Google Drive or OneDrive.
- Quantitative Comparisons
- Visual Comparisons
  - More Comparisons on the Synthetic Dataset
  - More Comparisons on the Real-World Datasets
If you find the code helpful in your research or work, please cite the following paper.
@InProceedings{wang2025osdface,
author = {Wang, Jingkai and Gong, Jue and Zhang, Lin and Chen, Zheng and Liu, Xing and Gu, Hong and Liu, Yutong and Zhang, Yulun and Yang, Xiaokang},
title = {{OSDFace}: One-Step Diffusion Model for Face Restoration},
booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
month = {June},
year = {2025},
pages = {12626-12636}
}
[TBD]