DOI: 10.1145/3641519.3657458
research-article
Open access

Diffusion Texture Painting

Published: 13 July 2024

Abstract

We present a technique that leverages 2D generative diffusion models (DMs) for interactive texture painting on the surface of 3D meshes. Unlike existing texture painting systems, our method allows artists to paint with any complex image texture, and in contrast with traditional texture synthesis, our brush not only generates seamless strokes in real-time, but can inpaint realistic transitions between different textures. To enable this application, we present a stamp-based method that applies an adapted pre-trained DM to inpaint patches in local render space, which is then projected into the texture image, allowing artists control over brush stroke shape and texture orientation. We further present a way to adapt the inference of a pre-trained DM to ensure stable texture brush identity, while allowing the DM to hallucinate infinite variations of the source texture. Our method is the first to use DMs for interactive texture painting, and we hope it will inspire work on applying generative models to highly interactive artist-driven workflows. Code and data for this paper are at github.com/nv-tlabs/DiffusionTexturePainting.
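
At a high level, the stamp-based brush described in the abstract can be illustrated with an off-the-shelf inpainting diffusion pipeline. The sketch below is a minimal illustration under assumptions, not the authors' implementation: it uses the Stable Diffusion inpainting pipeline from the Hugging Face diffusers library with a text prompt as a stand-in for the paper's image-conditioned, adapted DM, and the stamp rendering and texture-projection steps are represented only by hypothetical inputs and comments.

    # Minimal sketch of a stamp-based inpainting brush loop (not the authors' code).
    # Assumes the off-the-shelf Stable Diffusion inpainting pipeline as a stand-in
    # for the paper's adapted, image-conditioned diffusion model.
    import torch
    from diffusers import StableDiffusionInpaintPipeline

    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
    ).to("cuda")

    def paint_stroke(stamps, prompt):
        """Inpaint each brush stamp in local render space.

        `stamps` is a hypothetical iterable of (patch, mask) pairs: `patch` is a
        512x512 PIL image of the mesh surface rendered around the brush tip
        (containing the already-painted overlap with the previous stamp), and
        `mask` is white where new texture should be generated.
        """
        results = []
        for patch, mask in stamps:
            out = pipe(
                prompt=prompt,            # stand-in for the paper's image conditioning
                image=patch,              # local render-space view of the surface
                mask_image=mask,          # unpainted portion of the stamp
                num_inference_steps=20,   # few steps, in the spirit of interactive latency
            ).images[0]
            results.append(out)           # in the paper, each inpainted stamp is then
                                          # projected back into the mesh's UV texture
        return results
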

Supplemental Material

  • MP4 File: Paper Overview and Demo Video
  • MP4 File: Presentation
  • PDF File: Supplemental material



Published In

SIGGRAPH '24: ACM SIGGRAPH 2024 Conference Papers
July 2024
1106 pages
ISBN: 9798400705250
DOI: 10.1145/3641519
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. 3D Mesh
  2. Conditional Inpainting
  3. Diffusion Models
  4. Generative Models
  5. Paint Systems
  6. Textures

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

SIGGRAPH '24

Acceptance Rates

Overall Acceptance Rate 1,822 of 8,601 submissions, 21%


Article Metrics

  • Downloads (Last 12 months): 1,479
  • Downloads (Last 6 weeks): 412
Reflects downloads up to 10 Dec 2024
