
StyleTerrain: A novel disentangled generative model for controllable high-quality procedural terrain generation

Published: 04 March 2024

Abstract

Terrain is a vital element in the construction of virtual scenes in the digital era. Although considerable progress has been made in Generative Adversarial Network (GAN)-based terrain modeling methods, their quality and controllability still cannot meet the requirements of many emerging industries. The present work proposes a novel disentangled generative model, named StyleTerrain, for controllable high-quality terrain generation. It introduces disentangled representation learning into GAN-based terrain modeling methods for the first time. The model has been evaluated quantitatively. The results show a notably short perceptual path length and demonstrate the effectiveness of the disentanglement mechanism in controllable terrain generation, indicating that latent space disentanglement is a promising direction for achieving generation controllability in GAN-based terrain modeling methods.
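For context on the quantitative evaluation mentioned above: perceptual path length (PPL) measures how smoothly a generator's output changes as its latent code is interpolated, with shorter paths generally indicating a better-behaved, more disentangled latent space. The sketch below illustrates how PPL is conventionally estimated, following its original definition for StyleGAN (Karras et al., 2019); the generator G, the latent dimensionality, and the perceptual distance lpips_fn are illustrative assumptions, not the paper's released code.

    # Minimal sketch of perceptual path length (PPL) estimation.
    # NOTE: G, lpips_fn, and latent_dim are placeholders for illustration only.
    import torch

    def slerp(a, b, t):
        """Spherical interpolation between latent codes a and b."""
        a_n = a / a.norm(dim=-1, keepdim=True)
        b_n = b / b.norm(dim=-1, keepdim=True)
        omega = torch.acos((a_n * b_n).sum(-1, keepdim=True).clamp(-1 + 1e-7, 1 - 1e-7))
        return (torch.sin((1 - t) * omega) * a + torch.sin(t * omega) * b) / torch.sin(omega)

    def perceptual_path_length(G, lpips_fn, latent_dim=512, n_samples=1000, eps=1e-4, device="cpu"):
        dists = []
        for _ in range(n_samples):
            z0 = torch.randn(1, latent_dim, device=device)
            z1 = torch.randn(1, latent_dim, device=device)
            t = torch.rand(1, 1, device=device)
            # Generate two images at nearby points on the interpolation path.
            img_a = G(slerp(z0, z1, t))
            img_b = G(slerp(z0, z1, t + eps))
            # Perceptual distance between the pair, scaled by 1/eps^2 as in the original definition.
            dists.append(lpips_fn(img_a, img_b).item() / eps ** 2)
        return sum(dists) / len(dists)

A smaller returned value means the generated terrain changes gradually along latent interpolations rather than jumping abruptly, which is the property the abstract refers to.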


Highlights

Disentanglement is introduced into GAN-based terrain modeling for the first time.
StyleTerrain achieves superior generation quality and controllability.
Disentanglement is a promising direction for the future development of GAN-based terrain modeling.



Published In

Computers and Graphics, Volume 116, Issue C
Nov 2023
518 pages

Publisher

Pergamon Press, Inc.

United States


Author Tags

  1. Procedural content generation
  2. Generative adversarial network
  3. Representation learning
  4. Virtual scene

Qualifiers

  • Research-article

