CN117456077B - Material map generation method and related equipment - Google Patents
- Publication number: CN117456077B (granted publication of application CN202311435223.6A)
- Authority: CN (China)
- Legal status: Active
Classifications
- G06T 15/04 — Image data processing or generation: 3D [three-dimensional] image rendering; texture mapping
- G06T 3/4038 — Geometric image transformations in the plane of the image: scaling; image mosaicing, e.g. composing plane images from plane sub-images
- G06V 10/774 — Image or video recognition using pattern recognition or machine learning: generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
- G06V 10/82 — Image or video recognition using pattern recognition or machine learning: using neural networks
- G06T 2200/32 — Indexing scheme for image data processing or generation: involving image mosaicing
Abstract
The specification provides a material map generation method and related equipment. The method comprises: acquiring a first picture containing a pattern texture; randomly generating a mask region in the first picture, deleting the picture content corresponding to the mask region, regenerating that content through an image generation model to obtain a second picture, and repeating this process until a preset number of second pictures is obtained; and generating a plurality of corresponding material description files based on the second pictures, where each material description file includes a plurality of texture maps describing different material properties of an object's surface texture.
Description
Technical Field
One or more embodiments of the present disclosure relate to the field of image processing technologies, and in particular, to a material map generating method and related devices.
Background
In the process of constructing a three-dimensional scene, a common way to reduce resource consumption while preserving effect and precision is to tile square continuous (seamlessly tileable) material balls as textures over planar objects such as floors and walls. A material ball defines the basic properties of an object's surface, including color, reflectivity, refractive index, roughness, and transparency. By adjusting the parameters of the material ball, a wide range of materials such as metal, plastic, glass, and cloth can be simulated, giving the object a realistic appearance when rendered.
However, if only a single material ball is tiled, repeated textures easily appear, so that the rendered object loses realism and looks poor, degrading the overall effect of the three-dimensional scene. If several different material balls are tiled instead, problems such as discontinuous textures and inconsistent styles arise, which likewise cost the rendered object its realism.
Disclosure of Invention
In view of this, one or more embodiments of the present disclosure provide a method and related apparatus for generating a texture map.
In a first aspect, the present disclosure provides a method for generating a texture map, the method including:
Acquiring a first picture containing pattern textures;
Randomly generating a mask region in the first picture, deleting the picture content corresponding to the mask region in the first picture, regenerating the picture content corresponding to the mask region through an image generation model to obtain a second picture, and repeating the above process until a preset number of second pictures is obtained;
Generating a plurality of corresponding material description files based on the plurality of second pictures; each of the plurality of texture description files includes a plurality of texture maps for describing different texture properties of a surface texture of the object.
In an illustrated embodiment, the first picture and the plurality of second pictures are each square continuous pictures.
In an embodiment, the randomly generating a mask region in the first picture, deleting the picture content corresponding to the mask region in the first picture, includes:
Randomly generating a mask region in the first picture, and determining whether the proportion of the mask region in the first picture is greater than a preset threshold;
If yes, deleting the picture content corresponding to the mask region in the first picture; otherwise, randomly generating a mask region in the first picture again.
In an illustrated embodiment, the mask region does not cover four side edge regions in the first picture.
In an embodiment, the generating a plurality of corresponding material description files based on the plurality of second pictures includes:
Respectively inputting the plurality of second pictures into a pre-trained material map generation model to generate a plurality of material description files corresponding to the plurality of second pictures; wherein the texture map generation model is a deep learning model.
In an illustrated embodiment, the image generation model includes a diffusion model; or a pre-trained model for generating pictures based on the input pictures.
In one illustrated embodiment, the plurality of texture maps includes any combination of the plurality of texture maps illustrated below:
a map for describing a pattern texture of the object surface;
A map for describing a normal texture of a surface of an object;
a map for describing the relief texture of the object surface;
A map for describing a shadow texture of a surface of an object;
a map for describing the roughness texture of the surface of an object.
In a second aspect, the present disclosure provides a texture map generating apparatus, the apparatus comprising:
An acquisition unit for acquiring a first picture containing pattern textures;
The picture generation unit is used for randomly generating a mask region in the first picture, deleting the picture content corresponding to the mask region in the first picture, regenerating the picture content corresponding to the mask region through an image generation model to obtain a second picture, and repeating the above process until a preset number of second pictures is obtained;
The material map generating unit is used for generating a plurality of corresponding material description files based on the plurality of second pictures; each of the plurality of texture description files includes a plurality of texture maps for describing different texture properties of a surface texture of the object.
Accordingly, the present specification also provides a computer apparatus comprising: a memory and a processor; the memory has stored thereon a computer program executable by the processor; the processor executes the texture map generation method according to the above embodiments when executing the computer program.
Accordingly, the present disclosure also provides a computer readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, performs the texture map generating method according to the above embodiments.
In summary, the present application may first obtain a first picture containing a pattern texture; randomly generate a mask region in the first picture, delete the picture content corresponding to the mask region, and regenerate that content through an image generation model to obtain a second picture, repeating this process until a preset number of second pictures is obtained. Finally, the application may generate a plurality of corresponding material description files based on the second pictures, each of which may include a plurality of texture maps describing different material properties of the object's surface texture. In this way, starting from an existing picture containing a pattern texture, the application obtains several different pictures by repeatedly and randomly modifying part of the picture content, and then generates several corresponding material description files from those pictures to render the object surface, greatly improving the realism of the object and preserving the overall effect of the three-dimensional scene.
Drawings
FIG. 1 is a schematic diagram of a system architecture provided by an exemplary embodiment;
FIG. 2 is a flowchart of a method for generating a texture map according to an exemplary embodiment;
FIG. 3 is a schematic diagram of a picture processing flow provided by an exemplary embodiment;
FIG. 4 is a schematic view of a splicing effect of a square continuous graph according to an exemplary embodiment;
FIG. 5 is a schematic diagram of a texture map generating apparatus according to an exemplary embodiment;
Fig. 6 is a schematic diagram of a computer device according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with one or more embodiments of the present specification. Rather, they are merely examples of apparatus and methods consistent with aspects of one or more embodiments of the present description as detailed in the accompanying claims.
It should be noted that: in other embodiments, the steps of the corresponding method are not necessarily performed in the order shown and described in this specification. In some other embodiments, the method may include more or fewer steps than described in this specification. Furthermore, individual steps described in this specification, in other embodiments, may be described as being split into multiple steps; while various steps described in this specification may be combined into a single step in other embodiments.
The term "plurality" as used herein means two or more.
All user information (including but not limited to user equipment information and personal information) and data (including but not limited to data for analysis, stored data, and displayed data) involved in the present application are authorized by the user or fully authorized by all parties. The collection, use, and processing of such data must comply with the relevant laws, regulations, and standards of the relevant countries and regions, and corresponding operation entries are provided for the user to grant or refuse authorization.
First, some terms in the present specification are explained for easy understanding by those skilled in the art.
(1) A material ball is one of the key elements for simulating an object's surface material and lighting effects. The material ball defines the basic material properties of the object surface, including color, reflectivity, refractive index, roughness, and transparency. By adjusting the parameters of the material ball, a wide range of materials such as metal, plastic, glass, and cloth can be simulated, giving the object a realistic appearance when rendered. In addition, the material ball interacts with illumination, determining how the object surface reflects and refracts light when lit. Different material balls exhibit different reflection characteristics, such as diffuse reflection, specular reflection, and ambient reflection. By adjusting the parameters of the material balls, the lighting of the object can be controlled, producing details such as shadows and highlights in the rendered result and enhancing realism.
A material ball file may include a plurality of Physically Based Rendering (PBR) texture maps describing the material of the object surface, for example color maps, normal maps, displacement maps, ambient occlusion (AO) maps, and roughness maps.
A color map, also called a diffuse map, is a common texture mapping technique for adding color and pattern to the surface of a 3D model. It is a 2D image typically used to reproduce the colors, designs, patterns, and other details of an object's surface. For example, in games, color maps may provide the surface texture of a brick wall, a patch of grass, or a piece of clothing.
A normal map is used to simulate fine geometric detail on the surface of a 3D model. It is likewise effectively a 2D image: the RGB value of each pixel encodes an adjustment to the direction of the surface normal, so the 3D model reflects light more realistically when rendered, enhancing the detail effect. For example, in games, normal maps may add roughness detail to the surface of stone walls, wood, or metal.
An ambient occlusion (AO) map simulates the shadows cast between objects, adding a sense of volume even without direct light.
A displacement map is a mapping technique that changes the geometry of a model: it stores height information for the geometric surface and applies it to the model surface, modifying and distorting the topology of the model mesh to create more realistic surface detail.
A roughness map defines the roughness of a material, where 0 (black, 0 sRGB) denotes smooth and 1 (white, 255 sRGB) denotes rough. Roughness refers to the surface irregularities that scatter light; the reflection direction varies freely with the surface roughness.
(2) Square continuous means that a picture can be tiled repeatedly in the up, down, left, and right directions with the pattern texture remaining continuous across the seams between copies.
(3) A mask is a binary image whose pixel values are only 0 or 255, where a region with pixel value 0 represents the background and a region with pixel value 255 represents the foreground or object of interest. Applying the mask to an image makes it easy to extract the region of interest for further processing or analysis.
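As a minimal illustration of the mask concept above (not part of the patented method itself), the following Python sketch applies a binary mask to extract a region of interest; the array contents are hypothetical:

```python
import numpy as np

# Hypothetical example image (H x W x 3) and binary mask (H x W),
# where mask pixels are 0 (background) or 255 (foreground of interest).
image = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)
mask = np.zeros((256, 256), dtype=np.uint8)
mask[64:192, 64:192] = 255  # mark a square region as foreground

# Keep only the pixels where the mask is 255; zero out the rest.
region_of_interest = image * (mask[..., None] == 255)
```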
Three-dimensional scenes are widely used in fields such as virtual production and games. For example, in a virtual shoot, a background picture can be rendered on a screen based on the three-dimensional scene and filmed together with the foreground in front of the screen to obtain the required footage. As described above, when constructing a three-dimensional scene, a common way to reduce resource consumption while preserving effect and precision is to tile square continuous material balls as textures over planar objects such as floors and walls. However, if only a single material ball is tiled, repeated textures easily appear, so that the rendered object loses detail and realism, looks poor, and seriously degrades the overall effect of the three-dimensional scene. If several different material balls are tiled instead, problems such as discontinuous textures and inconsistent styles arise, again costing the rendered object its realism.
Based on this, the present specification provides a texture map generation scheme: starting from an existing picture containing a pattern texture, several different pictures are obtained by repeatedly and randomly modifying part of the picture content, and several corresponding material description files are then generated from those pictures to render the object surface, greatly improving the realism of the object's appearance and preserving the overall effect of the three-dimensional scene.
In implementation, the present application may first obtain a first picture containing a pattern texture; randomly generate a mask region in the first picture, delete the picture content corresponding to the mask region, and regenerate that content through an image generation model to obtain a second picture, repeating this process until a preset number of second pictures is obtained. Finally, the application may generate a plurality of corresponding material description files based on the second pictures, each of which may include a plurality of texture maps describing different material properties of the object's surface texture.
In the above technical solution, starting from an existing picture containing a pattern texture, several different pictures are obtained by repeatedly and randomly modifying part of the picture content, and several corresponding material description files are generated from those pictures to render the object surface, which greatly improves the realism of the object and preserves the overall effect of the three-dimensional scene.
Referring to fig. 1, fig. 1 is a schematic diagram of a system architecture according to an exemplary embodiment. As shown in fig. 1, the system may include a computer device 10 and a computer device 20. Data transmission between the computer device 10 and the computer device 20 may be performed wirelessly, such as over Bluetooth, Wi-Fi, or a mobile network, or over a wired connection such as a data cable.
In one illustrated embodiment, a plurality of material description files (e.g., material balls) may be stored in the computer device 20 that have been manufactured. Each texture description file may include a plurality of texture maps for describing different texture properties of the surface texture of the object.
Illustratively, the plurality of texture maps may include any combination of the plurality of texture maps shown below: a map (e.g., a color map) for describing the texture of the pattern of the object surface; a map (e.g., a normal map) for describing a normal texture of the object surface; a map (e.g., a displacement map) for describing the relief texture of the object surface; a map (e.g., an AO map) for describing a shadow texture of a surface of an object; a map for describing a roughness texture of the surface of the object (e.g., roughness map), etc., which is not particularly limited in this specification.
As shown in fig. 1, the computer device 10 may obtain a certain material description file stored in the computer device 20, and further obtain a color map (for example, a first picture) included in the material description file.
It should be noted that the specific implementation of how to obtain the color map is not particularly limited in the present application.
In an illustrated embodiment, the computer device 10 may send an acquisition request for the target texture description file to the computer device 20 based on the current rendering task, and in response, the computer device 20 may send its stored target texture description file to the computer device 10. Further, the computer device 10 may receive the target texture description file sent by the computer device 20, and extract a color map from the plurality of texture maps contained therein.
In an embodiment, the existing material description file may also be stored locally in the computer device 10, and accordingly, the computer device 10 may directly obtain the color map in the target material description file from the local storage space, which is not limited in this specification.
Further, the computer device 10 may randomly generate a mask region in the obtained first picture and delete the picture content corresponding to the mask region; it may then regenerate that content through an image generation model to obtain a second picture (i.e., a new color map), repeating the above process until a preset number of second pictures is obtained.
The present application does not limit the specific type of the image generation model. In an illustrated embodiment, the image generation model may be a diffusion model (e.g., Stable Diffusion), which is in fact a deep learning model. In another illustrated embodiment, it may be a pre-trained large model for generating pictures from input pictures; this specification is not particularly limited in this regard.
Further, the computer device 10 may generate a corresponding plurality of material description files (e.g., material balls) based on the plurality of second pictures, each of which may include a plurality of texture maps describing different material properties of the object's surface texture. Finally, the computer device 10 may use these material description files (optionally together with the original target material description file) to render the object surface.
In an embodiment, the first picture and the second pictures may be square continuous pictures; correspondingly, each material description file generated from a second picture may be a square continuous material ball.
As described above, the present application can obtain several different pictures by randomly modifying parts of an existing picture containing a pattern texture, and then generate several corresponding material description files from those pictures to render the object surface, greatly improving the realism of the object and preserving the overall effect of the three-dimensional scene.
In an embodiment, the computer device 10 may be a smart wearable device, a smart phone, a tablet computer, a notebook computer, a desktop computer, a server, etc. with the functions described above, which is not specifically limited in this specification.
In the illustrated embodiment, the computer device 20 may be one server having the above-described functions, a server cluster including a plurality of servers, or the like, and the present disclosure is not limited thereto. By way of example, the computer device 20 may be a cloud storage device.
Referring to fig. 2, fig. 2 is a flowchart illustrating a method for generating a texture map according to an exemplary embodiment. The method may be applied to the system architecture shown in fig. 1, and in particular to the computer device 10 shown in fig. 1. As shown in fig. 2, the method may specifically include the following steps S201 to S203.
Step S201, a first picture including a pattern texture is acquired.
In an illustrated embodiment, a computer device may obtain a first picture containing a pattern texture.
In an embodiment, the first picture may be a real picture acquired by a camera, a virtual picture rendered by software, or a color map further made of the real picture or the virtual picture, which is not specifically limited in this specification.
In an embodiment, the first picture may also be a color map contained in an existing, already-made material description file. Correspondingly, the computer device may search the existing material description files for a suitable target material description file based on the current rendering task, and take the color map it contains as the first picture.
In an illustrated embodiment, the first picture may be a square continuous graph.
Step S202, randomly generating a mask region in the first picture, deleting the picture content corresponding to the mask region in the first picture, regenerating the picture content corresponding to the mask region through an image generation model to obtain a second picture, and repeating the above process until a preset number of second pictures is obtained.
In an illustrated embodiment, after the first picture is obtained, the computer device may modify the first picture multiple times, thereby obtaining multiple new pictures.
It should be noted that, the specific implementation manner of modifying the first picture is not particularly limited in the present application.
In an embodiment, the computer device may randomly generate a mask region in the first picture, delete the picture content corresponding to the mask region, and regenerate that content to obtain a second picture (i.e., a new picture), repeating the above process until a preset number of second pictures is obtained.
The present application does not particularly limit the specific value of the preset number. It may be, for example, 4 or 9, set according to actual requirements and device performance. In an embodiment, the preset number may also be 1, i.e., only one second picture is generated; this specification is not particularly limited in this regard.
In an illustrated embodiment, after randomly generating the mask region in the first picture, the computer device may first determine whether the proportion of the first picture covered by the mask region is greater than a preset threshold. If it is, the picture content corresponding to the mask region may be deleted and the subsequent steps performed. If it is less than or equal to the preset threshold, a mask region needs to be randomly generated in the first picture again, to ensure that the first picture is modified sufficiently.
In an embodiment, the computer device may instead require in advance that the proportion of any randomly generated mask region exceed the preset threshold, and then randomly generate mask regions of different positions, sizes, and shapes on that basis; this specification is not particularly limited in this regard.
For example, the preset threshold for the mask-region proportion may be 45%, 50%, or 60%, set according to actual requirements and device performance.
In addition, taking the first picture as a square continuous graph as an example: to ensure that each second picture derived from it is also square continuous, the mask region must not cover the four edge regions of the first picture (i.e., the strips along its top, bottom, left, and right sides). If a randomly generated mask region covers part or all of these edge regions, the current mask region needs to be adjusted (for example, moved or shrunk), or simply regenerated, so that it no longer covers them. This preserves the edge information (the pattern texture at the edges) of the first picture and thereby ensures that every subsequently obtained second picture is also a square continuous graph.
In an embodiment, the computer device may instead require in advance that a randomly generated mask region never cover the four edge regions of the first picture, and then randomly generate mask regions of different positions, sizes, and shapes on that basis; this specification is not particularly limited in this regard.
In an illustrated embodiment, randomly generating the mask region may specifically include: randomly generating a black-and-white mask over the first picture using noise. The white region of the black-and-white mask represents the image deletion region (i.e., the mask region in this specification), and the pixel values in the white region are all 255; the black region represents the image retention region (i.e., the area outside the mask region), and the pixel values in the black region are all 0.
Accordingly, if the proportion of the white region in the whole black-and-white mask is greater than a preset threshold (for example, greater than 50%), the mask may be applied to the first picture to delete the picture content corresponding to the white region; otherwise, the black-and-white mask needs to be randomly generated again.
Accordingly, if the white region covers part or all of the four edge regions of the first picture, the white region may be adjusted appropriately, for example shrunk or moved, so that the black region covers the four edge regions of the first picture; this specification is not particularly limited in this regard.
In some possible embodiments, the colors may be swapped: a white region (pixel values all 255) represents the image retention region and a black region (pixel values all 0) represents the image deletion region; this specification is not particularly limited in this regard.
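The following Python sketch illustrates one way such a noise-based mask could be generated under the constraints described above (white proportion above a threshold, edges left uncovered). It is an assumption-laden illustration rather than the patent's reference implementation; the function name, noise scheme, and parameter values are all hypothetical:

```python
import numpy as np

def random_mask(h, w, min_ratio=0.5, border=16, max_tries=100):
    """Randomly generate a black-and-white mask (255 = delete, 0 = keep)
    whose white proportion exceeds min_ratio and whose white region stays
    clear of a `border`-pixel strip along all four edges.
    Assumes h and w are multiples of 8."""
    for _ in range(max_tries):
        # Block-wise random noise thresholded into a binary mask.
        noise = np.random.rand(h // 8, w // 8)
        mask = (np.kron(noise, np.ones((8, 8))) > 0.3).astype(np.uint8) * 255
        # Force the four edge regions to be retained (black),
        # so the square-continuous edges of the picture survive.
        mask[:border, :] = 0
        mask[-border:, :] = 0
        mask[:, :border] = 0
        mask[:, -border:] = 0
        if (mask == 255).mean() > min_ratio:
            return mask
    raise RuntimeError("could not generate a mask meeting the constraints")
```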
For example, referring to fig. 3, fig. 3 is a schematic diagram of a picture processing flow according to an exemplary embodiment. As shown in fig. 3, the first picture obtained by the computer device may include a corresponding pattern texture and may be a square continuous graph.
Further, the computer device may randomly generate a mask region on the first picture. As shown in fig. 3, the computer device may randomly generate mask region A on the first picture the first time, mask region B the second time (the first picture at this point may be a copy of the original first picture), mask region C the third time (likewise on a copy of the original first picture), and so on. The shape, size, and position of each randomly generated mask region may differ. Illustratively, as shown in fig. 3, mask region A may be an irregular shape, mask region B a circle, and mask region C a rectangle; this specification is not particularly limited in this regard.
Further, after randomly generating a mask region, the computer device may delete the picture content corresponding to that mask region from the first picture, obtaining an incomplete first picture. For example, as shown in fig. 3, the first time, the computer device may delete the picture content corresponding to mask region A, obtaining incomplete first picture A; the second time, it may delete the picture content corresponding to mask region B, obtaining incomplete first picture B; the third time, it may delete the picture content corresponding to mask region C, obtaining incomplete first picture C; and so on.
Further, after obtaining an incomplete first picture, the computer device may regenerate the picture content corresponding to the mask region to obtain a second picture. For example, as shown in fig. 3, the first time, the computer device may regenerate picture content A corresponding to mask region A in incomplete first picture A, obtaining second picture A; the second time, it may regenerate picture content B corresponding to mask region B in incomplete first picture B, obtaining second picture B; the third time, it may regenerate picture content C corresponding to mask region C in incomplete first picture C, obtaining second picture C.
As shown in fig. 3, the pattern textures in the regenerated picture contents A, B, and C may all differ.
As shown in fig. 3, each of the obtained second pictures A, B, and C may also be a square continuous graph. Referring to fig. 4, fig. 4 is a schematic view of the splicing effect of a square continuous graph according to an exemplary embodiment. As shown in fig. 4, when the first picture and the newly generated second pictures are repeatedly spliced in the up, down, left, and right directions, the pattern texture in the resulting image remains continuous.
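A simple way to inspect this splicing effect outside the patent's own tooling is to tile a candidate picture 2x2 and examine the seams visually; the following PIL sketch does exactly that, with hypothetical file names:

```python
from PIL import Image

def tile_2x2(path, out_path="tiled_preview.png"):
    """Tile a picture 2x2 so seam continuity can be inspected visually."""
    img = Image.open(path)
    w, h = img.size
    canvas = Image.new(img.mode, (2 * w, 2 * h))
    for dx in (0, w):
        for dy in (0, h):
            canvas.paste(img, (dx, dy))
    canvas.save(out_path)

tile_2x2("second_picture_a.png")  # hypothetical input file
```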
The present application does not particularly limit the specific manner in which the picture content corresponding to the mask region is regenerated to obtain the second picture.
In an embodiment, the picture content corresponding to each mask region may be drawn manually in image repair software (e.g., Photoshop) to repair each incomplete first picture, yielding second picture A, second picture B, second picture C, etc. shown in fig. 3.
In an illustrated embodiment, the present application may also input each incomplete first picture into a pre-trained image generation model and have the model regenerate the picture content corresponding to the mask region, likewise obtaining second picture A, second picture B, second picture C, etc. shown in fig. 3.
In an illustrated embodiment, the image generation model may be a diffusion model (e.g., Stable Diffusion), which is in fact a deep learning model. In another illustrated embodiment, the image generation model may be a pre-trained large model for generating pictures from input pictures; this specification is not particularly limited in this regard.
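As one concrete possibility (an assumption for illustration, not the patent's specified implementation), an off-the-shelf Stable Diffusion inpainting pipeline from the Hugging Face diffusers library could regenerate the masked content; the checkpoint name, file names, and prompt below are illustrative:

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# Illustrative checkpoint; any Stable Diffusion inpainting model could serve.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("first_picture.png").convert("RGB")  # hypothetical file
mask_image = Image.open("mask.png").convert("RGB")           # white = regenerate

# Regenerate the masked region; the prompt steers the new pattern texture.
second_picture = pipe(
    prompt="seamless rock surface texture",
    image=init_image,
    mask_image=mask_image,
).images[0]
second_picture.save("second_picture.png")
```

Note that the white-means-delete mask convention described above matches the convention this pipeline expects for its mask input.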
It should be understood that the pattern textures shown in fig. 3 and fig. 4 are merely exemplary; in practice, the first picture, the second pictures, etc. may contain pattern textures corresponding to rock, gravel, wood board, asphalt, cobblestone, grass, and the like, which this specification does not limit.
In addition, in an embodiment, before generating the mask region on the first picture, the present application may first perform shadow-removal processing on the first picture, so that obvious shadows in the picture do not degrade the quality of the subsequently generated texture maps. For example, a pre-trained shadow-removal model may be stored in the computer device; after obtaining the first picture, the computer device may input it into the shadow-removal model for shadow removal, and then run the picture processing flow of fig. 3 on the shadow-removed first picture; this specification is not particularly limited in this regard.
In an illustrated embodiment, the shadow-removal model may likewise be a deep learning model; this specification does not limit its specific type.
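Purely as an illustration of how such a preprocessing step could be wired in (the patent names no specific network; the exported model file, normalization, and function name here are hypothetical):

```python
import torch
from torchvision import transforms
from PIL import Image

# Hypothetical exported shadow-removal network (e.g., a TorchScript file).
model = torch.jit.load("shadow_removal.pt").eval()

to_tensor = transforms.ToTensor()
to_image = transforms.ToPILImage()

def remove_shadows(path):
    """Run a picture through the shadow-removal model before masking."""
    x = to_tensor(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        y = model(x).squeeze(0).clamp(0, 1)
    return to_image(y)

first_picture = remove_shadows("first_picture.png")  # hypothetical file
```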
Step S203, generating a plurality of corresponding material description files based on the plurality of second pictures; each of the plurality of texture description files includes a plurality of texture maps for describing different texture properties of a surface texture of the object.
In an illustrated embodiment, after obtaining a preset number of second pictures based on multiple modifications to the first picture, the computer device may generate a corresponding plurality of texture description files based on the second pictures.
Each of the plurality of texture description files may include a plurality of texture maps for describing different texture properties of a surface texture of the object.
Illustratively, the plurality of texture maps may include any combination of the plurality of texture maps shown below: a map (e.g., a color map) for describing the texture of the pattern of the object surface; a map (e.g., a normal map) for describing a normal texture of the object surface; a map (e.g., a displacement map) for describing the relief texture of the object surface; a map (e.g., an AO map) for describing a shadow texture of a surface of an object; a map for describing a roughness texture of the surface of the object (e.g., roughness map), etc., which is not particularly limited in this specification.
Further, in an illustrated embodiment, the computer device may render object surfaces (e.g., walls and floors) based on the newly generated material description files, ultimately constructing the entire three-dimensional scene. Alternatively, in an illustrated embodiment, the computer device may render the object surfaces based on both the original target material description file and the newly generated material description files. By way of example, the three-dimensional scene may be a forest, a park, a school, a house interior, etc.; this specification is not particularly limited in this regard.
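To make the notion of a material description file concrete, the sketch below models it as a plain container of the PBR texture maps listed earlier. The structure and file names are assumptions for illustration, not a file format defined by the patent:

```python
from dataclasses import dataclass
from PIL import Image

@dataclass
class MaterialDescription:
    """One material ball: a bundle of PBR texture maps for an object surface."""
    color: Image.Image         # pattern texture (color/diffuse map)
    normal: Image.Image        # normal texture (normal map)
    displacement: Image.Image  # relief texture (displacement map)
    ao: Image.Image            # shadow texture (ambient occlusion map)
    roughness: Image.Image     # roughness texture (roughness map)

material = MaterialDescription(
    color=Image.open("second_picture_a.png"),      # hypothetical files
    normal=Image.open("normal_a.png"),
    displacement=Image.open("displacement_a.png"),
    ao=Image.open("ao_a.png"),
    roughness=Image.open("roughness_a.png"),
)
```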
It should be noted that, the specific implementation manner of generating the plurality of material description files based on the plurality of second pictures is not particularly limited in the present application.
In an illustrated embodiment, the computer device may input the plurality of second pictures into a pre-trained texture map generation model, and generate a plurality of texture description files corresponding to the plurality of second pictures through the texture map generation model.
The specific type of the texture map generation model is not particularly limited in the present application. In an illustrated embodiment, the texture map generation model may be a deep learning model.
In an illustrated embodiment, the training process of the texture map generation model may include the following steps.
First, the present application may acquire a training sample set comprising a plurality of training pictures, each rendered from a corresponding material description file under preset illumination parameters. On this basis, each material description file can serve as the label of the training picture rendered from it.
In an embodiment, the illumination parameters may include illumination intensity and illumination direction, and the like, which are not particularly limited in this specification.
Then, the present application may perform supervised training of the deep learning model using each training picture in the training sample set and its corresponding label, until the difference between the material description file output by the model for each input training picture and the corresponding label meets expectations; training then ends and the texture map generation model is obtained.
In an embodiment, the first picture and the second pictures may be orthographic pictures containing no perspective information. The perspective information may include, for example, relief information of an object (e.g., rock, plank, or carpet). Correspondingly, the trained texture map generation model can be used to estimate such perspective information from the second pictures. Accordingly, each training picture in the training sample set may likewise be an orthographic picture containing no perspective information; this specification is not particularly limited in this regard.
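A minimal supervised training loop consistent with this description might look as follows. The dataset contract, network, and loss choice are assumptions (the patent only requires render-supervised training of a deep learning model); all names are hypothetical:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

# Assumed dataset: yields (training_picture, target_maps), where target_maps
# stacks the texture maps (normal, displacement, AO, roughness, ...) from the
# material description file used to render that training picture.
def train(model, dataset, epochs=10, lr=1e-4, device="cuda"):
    model = model.to(device).train()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()  # one plausible per-pixel reconstruction loss
    loader = DataLoader(dataset, batch_size=8, shuffle=True)
    for epoch in range(epochs):
        for picture, target_maps in loader:
            picture = picture.to(device)
            target_maps = target_maps.to(device)
            pred_maps = model(picture)  # predicted texture maps
            loss = loss_fn(pred_maps, target_maps)
            opt.zero_grad()
            loss.backward()
            opt.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")
```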
In summary, the present application may first obtain a first picture containing a pattern texture; randomly generate a mask region in the first picture, delete the picture content corresponding to the mask region, and regenerate that content through an image generation model to obtain a second picture, repeating this process until a preset number of second pictures is obtained. Finally, the application may generate a plurality of corresponding material description files based on the second pictures, each of which may include a plurality of texture maps describing different material properties of the object's surface texture. In this way, starting from an existing picture containing a pattern texture, the application obtains several different pictures by repeatedly and randomly modifying part of the picture content, and then generates several corresponding material description files from those pictures to render the object surface, greatly improving the realism of the object and preserving the overall effect of the three-dimensional scene.
Corresponding to the implementation of the method flow, the embodiment of the specification also provides a material map generating device. Referring to fig. 5, fig. 5 is a schematic structural diagram of a texture map generating apparatus according to an exemplary embodiment. The apparatus 30 may be applied to a server providing a data transmission service. As shown in fig. 5, the apparatus 30 includes:
an acquiring unit 301, configured to acquire a first picture including a pattern texture;
A picture generation unit 302, configured to randomly generate a mask region in the first picture, delete the picture content corresponding to the mask region in the first picture, regenerate the picture content corresponding to the mask region through an image generation model to obtain a second picture, and repeat the above process until a preset number of second pictures is obtained;
A texture map generating unit 303, configured to generate a plurality of corresponding texture description files based on the plurality of second pictures; each of the plurality of texture description files includes a plurality of texture maps for describing different texture properties of a surface texture of the object.
In an illustrated embodiment, the first picture and the plurality of second pictures are each square continuous pictures.
In an illustrated embodiment, the picture generation unit 302 is specifically configured to:
Randomly generating a mask region in the first picture, and determining whether the proportion of the mask region in the first picture is greater than a preset threshold;
If yes, deleting the picture content corresponding to the mask region in the first picture; otherwise, randomly generating a mask region in the first picture again.
In an illustrated embodiment, the mask region does not cover four side edge regions in the first picture.
In an illustrated embodiment, the texture map generating unit 303 is specifically configured to:
Respectively inputting the plurality of second pictures into a pre-trained material map generation model to generate a plurality of material description files corresponding to the plurality of second pictures; wherein the texture map generation model is a deep learning model.
In an illustrated embodiment, the image generation model includes a diffusion model; or a pre-trained model for generating pictures based on the input pictures.
In one illustrated embodiment, the plurality of texture maps includes any combination of the plurality of texture maps illustrated below:
a map for describing a pattern texture of the object surface;
A map for describing a normal texture of a surface of an object;
a map for describing the relief texture of the object surface;
A map for describing a shadow texture of a surface of an object;
a map for describing the roughness texture of the surface of an object.
The implementation of the functions and roles of the units in the above apparatus 30 is described in detail in the corresponding embodiments of figs. 1-4 and is not repeated here. It should be understood that the apparatus 30 may be implemented in software, in hardware, or in a combination of the two. Taking a software implementation as an example, the apparatus in the logical sense is formed by the processor (CPU) of its host device reading the corresponding computer program instructions into memory and running them. In hardware terms, besides the CPU and memory, the device hosting the apparatus typically also includes other hardware, such as chips for wireless signal transmission and reception and/or boards for network communication.
The apparatus embodiments described above are merely illustrative. Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical modules; that is, they may be located in one place or distributed over multiple network nodes. Some or all of the units or modules may be selected according to actual needs to achieve the purposes of this specification. Persons of ordinary skill in the art can understand and implement this without creative effort.
The apparatus, units, modules illustrated in the above embodiments may be implemented in particular by a computer chip or entity or by a product having a certain function. A typical implementation device is a computer, which may be in the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email device, game console, tablet computer, wearable device, or a combination of any of these devices.
Corresponding to the method embodiments described above, embodiments of the present disclosure also provide a computer device. Referring to fig. 6, fig. 6 is a schematic structural diagram of a computer device according to an exemplary embodiment. As shown in fig. 6, the computer device may include a processor 1001 and a memory 1002, and may further include an input device 1004 (e.g., a keyboard) and an output device 1005 (e.g., a display). The processor 1001, memory 1002, input device 1004, and output device 1005 may be connected by a bus or in other ways. As shown in fig. 6, the memory 1002 includes a computer-readable storage medium 1003, which stores a computer program executable by the processor 1001. The processor 1001 may be a general-purpose processor, a microprocessor, or an integrated circuit for controlling the execution of the above method embodiments. When executing the stored computer program, the processor 1001 may perform the steps of the material map generation method in the embodiments of this specification, including: acquiring a first picture containing a pattern texture; randomly generating a mask region in the first picture, deleting the picture content corresponding to the mask region, regenerating that content through an image generation model to obtain a second picture, and repeating this process until a preset number of second pictures is obtained; and generating a plurality of corresponding material description files based on the second pictures, where each material description file includes a plurality of texture maps describing different material properties of the object's surface texture.
For a detailed description of each step of the above material map generating method, please refer to the previous contents, and no further description is given here.
Corresponding to the above-described method embodiments, embodiments of the present description also provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method for generating a texture map in the embodiments of the present description. Please refer to the above description of the corresponding embodiments of fig. 1-4, and detailed descriptions thereof are omitted herein.
The foregoing description of the preferred embodiments is provided for the purpose of illustration only, and is not intended to limit the scope of the disclosure, since any modifications, equivalents, improvements, etc. that fall within the spirit and principles of the disclosure are intended to be included within the scope of the disclosure.
In a typical configuration, the terminal device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data.
Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that includes the element.
It will be appreciated by those skilled in the art that the embodiments of the present specification may be provided as a method, a system, or a computer program product. Accordingly, the embodiments of the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present specification may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) containing computer-usable program code.
Claims (10)
1. A method of generating a texture map, the method comprising:
acquiring a first picture containing a pattern texture;
randomly generating a mask region in the first picture, deleting the picture content corresponding to the mask region in the first picture, regenerating the picture content corresponding to the mask region through an image generation model to obtain a second picture, and repeatedly executing the above process on the acquired first picture until a preset number of second pictures are obtained;
generating a plurality of corresponding material description files based on the plurality of second pictures, wherein each of the plurality of material description files includes a plurality of texture maps for describing different texture properties of an object surface.
2. The method of claim 1, wherein the first picture and the plurality of second pictures are each four-way continuous (i.e., seamlessly tileable) pictures.
3. The method according to claim 1, wherein randomly generating a mask region in the first picture and deleting the picture content corresponding to the mask region in the first picture comprises:
randomly generating a mask region in the first picture, and determining whether the proportion of the mask region in the first picture is greater than a preset threshold;
if yes, deleting the picture content corresponding to the mask region in the first picture; otherwise, randomly generating a new mask region in the first picture.
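A minimal sketch of the proportion check in claim 3, reusing the hypothetical `random_mask` helper from the earlier sketch; the 0.3 threshold is an assumed value, as the claim leaves the preset threshold open.

```python
import numpy as np

def mask_with_min_coverage(size, threshold=0.3):
    """Re-sample random masks until the masked proportion exceeds the threshold."""
    while True:
        mask = random_mask(size)                      # hypothetical helper from above
        proportion = np.asarray(mask).mean() / 255.0  # fraction of masked pixels
        if proportion > threshold:
            return mask
```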
4. The method of claim 1, wherein the mask region does not cover the four edge regions of the first picture.
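One way to read claim 4 is that keeping the four edge strips untouched preserves the picture's four-way continuity across tile seams. The sketch below confines the random rectangle to the interior; the `border` width is an assumed parameter, not a value from the patent.

```python
import random
from PIL import Image, ImageDraw

def random_interior_mask(size, border=32):
    """Random rectangular mask that never touches the four edge regions."""
    w, h = size
    mask = Image.new("L", size, 0)
    x0 = random.randrange(border, w - border - 1)
    y0 = random.randrange(border, h - border - 1)
    x1 = random.randrange(x0 + 1, w - border)
    y1 = random.randrange(y0 + 1, h - border)
    ImageDraw.Draw(mask).rectangle([x0, y0, x1, y1], fill=255)
    return mask
```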
5. The method of claim 1, wherein generating a plurality of corresponding material description files based on the plurality of second pictures comprises:
respectively inputting the plurality of second pictures into a pre-trained material map generation model to generate a plurality of material description files corresponding to the plurality of second pictures, wherein the material map generation model is a deep learning model.
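As a sketch of the batch inference step in claim 5 — `material_model` and `preprocess` are hypothetical placeholders for the pre-trained deep learning model and its input transform, not names from the patent:

```python
import torch

@torch.no_grad()
def pictures_to_material_files(second_pictures, material_model, preprocess):
    """Feed each second picture through the model, one description per picture."""
    descriptions = []
    for picture in second_pictures:
        batch = preprocess(picture).unsqueeze(0)    # shape: 1 x C x H x W
        descriptions.append(material_model(batch))  # e.g. a dict of predicted maps
    return descriptions
```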
6. The method of claim 1, wherein the image generation model comprises a diffusion model, or a pre-trained model for generating a picture based on an input picture.
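For the diffusion-model branch of claim 6, an off-the-shelf inpainting pipeline can regenerate the deleted region; this sketch uses Hugging Face `diffusers` with an illustrative public checkpoint and prompt, which are choices of this example rather than anything the patent specifies.

```python
import torch
from diffusers import StableDiffusionInpaintPipeline

# Illustrative checkpoint; the patent does not name a specific model.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

second_picture = pipe(
    prompt="seamless wood grain texture",  # assumed prompt
    image=first_picture,                   # first picture with the region deleted
    mask_image=mask,                       # white pixels are regenerated
).images[0]
```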
7. The method of any of claims 1-6, wherein the plurality of texture maps comprises any combination of the following texture maps (one possible file layout is sketched after this list):
a map for describing the pattern texture of the object surface;
a map for describing the normal texture of the object surface;
a map for describing the relief texture of the object surface;
a map for describing the shadow texture of the object surface;
a map for describing the roughness texture of the object surface.
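A minimal sketch of how one material description file from claim 7 might be laid out in code; the field names map one-to-one onto the five map types above but are assumptions, not the patent's file format.

```python
from dataclasses import dataclass
from pathlib import Path

@dataclass
class MaterialDescription:
    """One material description file: paths to its constituent texture maps."""
    albedo: Path     # pattern texture of the object surface
    normal: Path     # normal texture
    height: Path     # relief (displacement) texture
    occlusion: Path  # shadow (ambient occlusion) texture
    roughness: Path  # roughness texture
```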
8. A texture map generation apparatus, the apparatus comprising:
an acquisition unit, configured to acquire a first picture including a pattern texture;
a picture generation unit, configured to randomly generate a mask region in the first picture, delete the picture content corresponding to the mask region in the first picture, regenerate the picture content corresponding to the mask region through an image generation model to obtain a second picture, and repeatedly execute the above process on the acquired first picture until a preset number of second pictures are obtained;
a material map generation unit, configured to generate a plurality of corresponding material description files based on the plurality of second pictures, wherein each of the plurality of material description files includes a plurality of texture maps for describing different texture properties of an object surface.
9. A computer device, comprising: a memory and a processor; the memory has stored thereon a computer program executable by the processor; the processor, when running the computer program, performs the method of any one of claims 1 to 7.
10. A computer readable storage medium, characterized in that a computer program is stored thereon, which computer program, when being executed by a processor, implements the method according to any of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311435223.6A CN117456077B (en) | 2023-10-30 | 2023-10-30 | Material map generation method and related equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311435223.6A CN117456077B (en) | 2023-10-30 | 2023-10-30 | Material map generation method and related equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN117456077A (en) | 2024-01-26
CN117456077B (en) | 2024-10-01
Family
ID=89583207
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311435223.6A Active CN117456077B (en) | 2023-10-30 | 2023-10-30 | Material map generation method and related equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117456077B (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117456076A (en) * | 2023-10-30 | 2024-01-26 | 神力视界(深圳)文化科技有限公司 | Material map generation method and related equipment |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100226582B1 (en) * | 1996-12-24 | 1999-10-15 | 전주범 | Target tracking method and device for video phone |
CN111260548B (en) * | 2018-11-30 | 2023-07-21 | 浙江宇视科技有限公司 | Mapping method and device based on deep learning |
WO2021080158A1 (en) * | 2019-10-25 | 2021-04-29 | Samsung Electronics Co., Ltd. | Image processing method, apparatus, electronic device and computer readable storage medium |
CN110992322A (en) * | 2019-11-25 | 2020-04-10 | 创新奇智(青岛)科技有限公司 | Patch mask detection system and detection method based on convolutional neural network |
CN113033573A (en) * | 2021-03-16 | 2021-06-25 | 佛山市南海区广工大数控装备协同创新研究院 | Method for improving detection performance of instance segmentation model based on data enhancement |
CN112927339A (en) * | 2021-04-01 | 2021-06-08 | 腾讯科技(深圳)有限公司 | Graphic rendering method and device, storage medium and electronic equipment |
CN113181639B (en) * | 2021-04-28 | 2024-06-04 | 网易(杭州)网络有限公司 | Graphic processing method and device in game |
CN113344942B (en) * | 2021-05-21 | 2024-04-02 | 深圳瀚维智能医疗科技有限公司 | Human body massage region segmentation method, device and system and computer storage medium |
CN113608805B (en) * | 2021-07-08 | 2024-04-12 | 阿里巴巴创新公司 | Mask prediction method, image processing method, display method and device |
CN116266373A (en) * | 2021-12-17 | 2023-06-20 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment and storage medium |
CN115661912B (en) * | 2022-12-26 | 2024-04-12 | 荣耀终端有限公司 | Image processing method, model training method, electronic device, and readable storage medium |
CN116485973A (en) * | 2023-04-04 | 2023-07-25 | 阿里巴巴达摩院(杭州)科技有限公司 | Material generation method of virtual object, electronic equipment and storage medium |
CN116912387A (en) * | 2023-04-11 | 2023-10-20 | 网易(杭州)网络有限公司 | Texture map processing method and device, electronic equipment and storage medium |
CN116796027A (en) * | 2023-06-30 | 2023-09-22 | 广州商研网络科技有限公司 | Commodity picture label generation method and device, equipment, medium and product thereof |
Also Published As
Publication number | Publication date |
---|---|
CN117456077A (en) | 2024-01-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20230053462A1 (en) | Image rendering method and apparatus, device, medium, and computer program product | |
CN109771951B (en) | Game map generation method, device, storage medium and electronic equipment | |
CN113674389B (en) | Scene rendering method and device, electronic equipment and storage medium | |
CN115830208B (en) | Global illumination rendering method, device, computer equipment and storage medium | |
CN111127623A (en) | Model rendering method and device, storage medium and terminal | |
CN112169324A (en) | Rendering method, device and equipment of game scene | |
US20240005592A1 (en) | Image rendering method and apparatus, device, and storage medium | |
WO2022063260A1 (en) | Rendering method and apparatus, and device | |
CN113658316B (en) | Rendering method and device of three-dimensional model, storage medium and computer equipment | |
CN110930497B (en) | Global illumination intersection acceleration method and device and computer storage medium | |
JP2023519728A (en) | 2D image 3D conversion method, apparatus, equipment, and computer program | |
CN116228960A (en) | Construction method and construction system of virtual museum display system and display system | |
WO2023098358A1 (en) | Model rendering method and apparatus, computer device, and storage medium | |
US10861218B2 (en) | Methods and systems for volumetric reconstruction based on a confidence field | |
CN117456076A (en) | Material map generation method and related equipment | |
CN114820980A (en) | Three-dimensional reconstruction method and device, electronic equipment and readable storage medium | |
CN117456077B (en) | Material map generation method and related equipment | |
CN112473135B (en) | Real-time illumination simulation method, device and equipment for mobile game and storage medium | |
CN111506680B (en) | Terrain data generation and rendering method and device, medium, server and terminal | |
KR20230022153A (en) | Single-image 3D photo with soft layering and depth-aware restoration | |
CN116824082B (en) | Virtual terrain rendering method, device, equipment, storage medium and program product | |
US20190371049A1 (en) | Transform-based shadowing of object sets | |
US20240193864A1 (en) | Method for 3d visualization of sensor data | |
CN114119925B (en) | Game image modeling method and device and electronic equipment | |
CN118628638A (en) | Shadow rendering method, shadow rendering device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |