HAODiff: Human-Aware One-Step Diffusion
via Dual-Prompt Guidance

Jue Gong, Tingyu Yang, Jingkai Wang, Zheng Chen, Xing Liu, Hong Gu, Yulun Zhang, Xiaokang Yang

"A novel one-step diffusion model for human body restoration, efficiently handling human motion blur and generic noise in human images." (2025)

🔥🔥🔥 News

  • 2025-05-27: This repo is released.

Abstract: Human-centered images often suffer from severe generic degradation during transmission and are prone to human motion blur (HMB), making restoration challenging. Existing research lacks sufficient focus on these issues, as both problems often coexist in practice. To address this, we design a degradation pipeline that simulates the coexistence of HMB and generic noise, generating synthetic degraded data to train our proposed HAODiff, a human-aware one-step diffusion. Specifically, we propose a triple-branch dual-prompt guidance (DPG), which leverages high-quality images, residual noise (LQ minus HQ), and HMB segmentation masks as training targets. It produces a positive-negative prompt pair for classifier-free guidance (CFG) in a single diffusion step. The resulting adaptive dual prompts let HAODiff exploit CFG more effectively, boosting robustness against diverse degradations. For fair evaluation, we introduce MPII-Test, a benchmark rich in combined noise and HMB cases. Extensive experiments show that our HAODiff surpasses existing state-of-the-art (SOTA) methods in terms of both quantitative metrics and visual quality on synthetic and real-world datasets, including our introduced MPII-Test.
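The code is not yet released, so as a rough sketch of the standard classifier-free guidance combination the abstract refers to (the function name and guidance weight `w` are illustrative assumptions, not the authors' API; in HAODiff the positive and negative predictions come from the adaptive prompt pair produced by DPG):

```python
def cfg_combine(eps_neg, eps_pos, w):
    """Classifier-free guidance: extrapolate from the negative-prompt
    prediction toward the positive-prompt prediction by weight w."""
    return [n + w * (p - n) for n, p in zip(eps_neg, eps_pos)]

# With w = 1 this returns the positive prediction unchanged;
# larger w pushes the output further away from the negative prompt.
```

Because HAODiff is one-step, this combination happens once per image rather than at every denoising timestep.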

HAODiff pipeline
Figure 2: Degradation pipeline overview.
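The exact degradation pipeline is specified in the paper; the following is only a minimal illustration of the core idea of making motion blur and generic noise coexist in one sample (the horizontal-only blur, kernel length, and noise level are simplifying assumptions):

```python
import numpy as np

def degrade(img, blur_len=9, noise_sigma=0.05, rng=None):
    """Toy degradation: linear motion blur followed by Gaussian noise.

    img: float array in [0, 1], shape (H, W).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    # Horizontal motion blur: average of shifted copies of the image.
    blurred = np.zeros_like(img, dtype=float)
    for s in range(blur_len):
        blurred += np.roll(img, s - blur_len // 2, axis=1)
    blurred /= blur_len
    # Generic degradation modeled here as additive Gaussian noise.
    noisy = blurred + rng.normal(0.0, noise_sigma, img.shape)
    return np.clip(noisy, 0.0, 1.0)
```

In the actual pipeline, blur is applied selectively to human regions (guided by HMB masks) and the generic degradations are richer than plain Gaussian noise.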

HAODiff model architecture
Figure 3: Model structure of our HAODiff.




Visual results on PERSONA-Val, MPII-Test, and PERSONA-Test.

⚒️ TODO

  • Release code and pretrained models

🔗 Contents

  • 🔎 Results
  • 📎 Citation
  • 💡 Acknowledgements

🔎 Results

HAODiff achieves state-of-the-art performance on the PERSONA-Val, PERSONA-Test, and MPII-Test datasets. Detailed results can be found in the paper.

Quantitative Comparisons (click to expand)

  • Results in Table 1 on the synthetic PERSONA-Val dataset from the main paper.

  • Results in Table 2 on the real-world PERSONA-Test and MPII-Test datasets from the main paper.

Visual Comparisons (click to expand)

  • Results in Figure 5 on the synthetic PERSONA-Val dataset from the main paper.

  • Results in Figure 6 on the real-world PERSONA-Test and MPII-Test datasets from the main paper.

More Comparisons (click to expand)

  • Fabric patterns and textures: Figure 4 from the supplementary material.

  • Synthetic PERSONA-Val dataset: Figures 5 and 6 from the supplementary material.

  • Real-world PERSONA-Test dataset: Figures 7 and 8 from the supplementary material.

  • Real-world MPII-Test dataset: Figures 9 and 10 from the supplementary material.

  • Challenging cases: Figures 11 and 12 from the supplementary material.

📎 Citation

If you find the code helpful in your research or work, please cite the following paper.

    @article{gong2025haodiff,
        title={{HAODiff: Human-Aware One-Step Diffusion via Dual-Prompt Guidance}},
        author={Gong, Jue and Yang, Tingyu and Wang, Jingkai and Chen, Zheng and Liu, Xing and Gu, Hong and Liu, Yutong and Zhang, Yulun and Yang, Xiaokang},
        journal={arXiv preprint arXiv:2505.19742},
        year={2025}
    }
    

💡 Acknowledgements

[TBD]
