CN112396688A - Three-dimensional virtual scene generation method and device
- Publication number: CN112396688A
- Application number: CN201910747186.XA
- Authority: CN (China)
- Prior art keywords: dimensional virtual, target, article, placing, object model
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
Abstract
The application provides a method and a device for generating a three-dimensional virtual scene. The method includes: generating a corresponding three-dimensional virtual environment model for a target area; generating a corresponding three-dimensional virtual object model for each target object to be placed in the target area; and placing the three-dimensional virtual object model corresponding to each target object in the three-dimensional virtual environment model based on a set placement rule of each target object in the target area, to form a three-dimensional virtual scene. With this method, a three-dimensional virtual scene that meets user expectations can be built automatically even when no real scene exists.
Description
Technical Field
The present application relates to the field of three-dimensional modeling technologies, and in particular, to a method and an apparatus for generating a three-dimensional virtual scene.
Background
At present, with the rapid development of three-dimensional modeling technology, three-dimensional virtual scenes can vividly reproduce real scenes and give users a good visual experience, so they are widely applied across industries. On this basis, how to construct three-dimensional virtual scenes accurately and efficiently has attracted more and more attention and research.
Existing three-dimensional modeling software, such as the 3D Max three-dimensional modeling software, is based on a real scene; that is, only on the premise that a real scene already exists can a three-dimensional virtual scene approximately consistent with that real scene be reconstructed from it. This limits the application of current three-dimensional virtual scenes.
Disclosure of Invention
In view of this, the present application provides a method and an apparatus for generating a three-dimensional virtual scene, so that a three-dimensional virtual scene can be generated without relying on a real scene.
According to a first aspect of embodiments of the present application, a method for generating a three-dimensional virtual scene is provided, where the method includes:
generating a corresponding three-dimensional virtual environment model for the target area;
generating a corresponding three-dimensional virtual object model for each target object to be placed in the target area;
and placing a three-dimensional virtual object model corresponding to each target object in the three-dimensional virtual environment model based on the set placement rule of each target object in the target area to form a three-dimensional virtual scene.
According to a second aspect of the embodiments of the present application, there is provided an apparatus for generating a three-dimensional virtual scene, the apparatus including:
the first generation module is used for generating a corresponding three-dimensional virtual environment model for the target area;
the second generation module is used for generating a corresponding three-dimensional virtual object model for each target object to be placed in the target area;
and the third generation module is used for placing the three-dimensional virtual object model corresponding to each target object in the three-dimensional virtual environment model based on the set placing rule of each target object in the target area to form a three-dimensional virtual scene.
According to a third aspect of the embodiments of the present application, there is provided an electronic device, the device comprising a readable storage medium and a processor;
wherein the readable storage medium is configured to store machine executable instructions;
the processor is configured to read the machine executable instructions on the readable storage medium, and execute the instructions to implement the steps of the method for generating a three-dimensional virtual scene provided in the embodiment of the present application.
According to a fourth aspect of the embodiments of the present application, a computer-readable storage medium is provided, where a computer program is stored in the computer-readable storage medium, and when executed by a processor, the computer program implements the steps of the method for generating a three-dimensional virtual scene provided in the embodiments of the present application.
By applying the embodiments of the present application, a three-dimensional virtual environment model corresponding to the target area and a three-dimensional virtual object model corresponding to each target object are constructed, and the three-dimensional virtual object model corresponding to each target object is placed in the three-dimensional virtual environment model based on the set placement rule of each target object in the target area, to form a three-dimensional virtual scene. Since this process does not depend on a real scene, a three-dimensional virtual scene meeting user expectations can be built automatically even when no real scene exists.
Drawings
Fig. 1 is a flowchart of an embodiment of a method for generating a three-dimensional virtual scene according to an exemplary embodiment of the present application;
fig. 2 is a flowchart of another embodiment of a method for generating a three-dimensional virtual scene according to an exemplary embodiment of the present application;
FIG. 3 is a schematic diagram of the effect of placing the three-dimensional virtual object models corresponding to the shelves in the three-dimensional virtual environment model;
FIG. 4 is a schematic diagram of a three-dimensional coordinate system established for an article placement area;
FIG. 5 is an example of a target image;
FIG. 6 is a flowchart illustrating an embodiment of a process for generating a target image according to an exemplary embodiment of the present application;
FIG. 7 is an example of camera parameters for a virtual camera;
FIG. 8 is another example of a target image;
FIG. 9 is a flowchart of an embodiment of a process for labeling location information of an item in a target image according to an exemplary embodiment of the present application;
FIG. 10 is an example of a simple target image, used for ease of explanation, in which the position information of an item has been labeled;
FIG. 11 is an example of a target image with annotation information;
fig. 12 is a block diagram of an embodiment of a device for generating a three-dimensional virtual scene according to an exemplary embodiment of the present application;
fig. 13 is a hardware block diagram of an electronic device according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application; rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. The word "if" as used herein may be interpreted as "upon" or "when" or "in response to determining", depending on the context.
In order to solve the above problem, the present application provides a method for generating a three-dimensional virtual scene, by which a three-dimensional virtual scene meeting user expectations can be constructed automatically even when no real scene exists. The following embodiments describe the process in detail:
First, the method for generating a three-dimensional virtual scene proposed in the present application is described as a whole in the following Embodiment One.
Embodiment One
Referring to fig. 1, a flowchart of an embodiment of a method for generating a three-dimensional virtual scene according to an exemplary embodiment of the present application is provided, where the method includes the following steps:
step 101: and generating a corresponding three-dimensional virtual environment model for the target area.
In the embodiment of the present application, three-dimensional modeling software, such as 3D MAX, Maya, etc., may be used to generate a corresponding three-dimensional model for a target area according to parameters such as a two-dimensional plan view, spatial data, spatial features, etc. of the target area, and for convenience of description, the three-dimensional model is referred to as a three-dimensional virtual environment model.
As for a specific process of generating the three-dimensional virtual environment model, those skilled in the art may refer to the related description in the prior art, and details are not described herein again.
Step 102: and generating a corresponding three-dimensional virtual object model for each target object to be placed in the target area.
In the embodiment of the present application, each target object to be placed in the target area, for example, each type of article, may be predefined, and a corresponding three-dimensional model may be generated for each target object, and for convenience of description, the three-dimensional model is referred to as a three-dimensional virtual object model.
As one example, a corresponding three-dimensional virtual object model may be generated for each target object using three-dimensional scanning techniques. Specifically, for a certain target object as an example, the target object may be scanned by a three-dimensional scanner to obtain parameters such as spatial data and spatial features of the target object, and then a three-dimensional virtual object model of the target object may be generated according to the parameters such as the spatial data and the spatial features obtained by the scanning by using three-dimensional modeling software.
It should be noted that the above-described specific implementation manner of generating a corresponding three-dimensional virtual object model for each target object is merely an exemplary example, and in practical applications, the corresponding three-dimensional virtual object model may also be generated for each target object in other manners, which is not limited in this application.
Step 103: and placing the three-dimensional virtual object model corresponding to each target object in the three-dimensional virtual environment model based on the set placement rule of each target object in the target area to form a three-dimensional virtual scene model.
In this embodiment of the present application, based on a set placement rule of each target object in the target region, the three-dimensional virtual object model corresponding to each target object may be placed in the three-dimensional virtual environment model, and for convenience of description, the three-dimensional model obtained through the processing is referred to as a three-dimensional virtual scene model. And then rendering the three-dimensional virtual scene model to form a three-dimensional virtual scene.
As one example, different types of target objects may differ in placement rules in the target area.
As an example, a rendering texture may be determined for the three-dimensional virtual environment model corresponding to the target area, and a rendering texture may be determined for the three-dimensional virtual object model corresponding to each target object. Then, in the three-dimensional virtual scene model, the three-dimensional virtual environment model is rendered according to the rendering texture corresponding to it, and each three-dimensional virtual object model is rendered according to the rendering texture corresponding to that model.
It should be noted that, in the three-dimensional virtual scene, there are overlapping regions between the three-dimensional virtual environment model and the three-dimensional virtual object models. Therefore, when rendering the three-dimensional virtual environment model, only the regions of the three-dimensional virtual environment model that are not covered by any three-dimensional virtual object model may be rendered. This processing avoids texture conflicts caused by repeated rendering.
As one example, the target area and each target object may be scanned with a three-dimensional scanner to determine a three-dimensional virtual environment model and a rendering texture corresponding to each of the three-dimensional virtual object models.
As another example, a three-dimensional virtual environment model and a rendering texture corresponding to each of the three-dimensional virtual object models may be determined from the two-dimensional images of the target area and each of the target objects.
It should be noted that the specific implementation manner of determining the rendering textures corresponding to the three-dimensional virtual environment model and the three-dimensional virtual object model described above is merely an exemplary example, and in practical applications, the rendering textures corresponding to the three-dimensional virtual environment model and the three-dimensional virtual object model may also be determined in other manners, which is not limited in this application.
As can be seen from the above embodiment, a three-dimensional virtual environment model corresponding to the target area and a three-dimensional virtual object model corresponding to each target object are constructed, and the three-dimensional virtual object models are placed in the three-dimensional virtual environment model based on the set placement rule of each target object in the target area, to form a three-dimensional virtual scene. This way of generating a three-dimensional virtual scene does not need to be based on a real scene: even when no real scene exists, a three-dimensional virtual scene meeting user expectations can be constructed automatically.
This completes the description of Embodiment One.
Next, taking the automatic construction of a three-dimensional virtual scene of a supermarket as an example, the following Embodiment Two further explains the method for generating a three-dimensional virtual scene provided in the present application.
Embodiment Two
Referring to fig. 2, a flowchart of another embodiment of a method for generating a three-dimensional virtual scene according to an exemplary embodiment of the present application is provided, where the method includes the following steps:
step 201: and generating a corresponding three-dimensional virtual environment model for the target area.
For a detailed description of this step, reference may be made to the related description of step 101 in the first embodiment, and details are not repeated here.
Step 202: and generating a corresponding three-dimensional virtual object model for each shelf to be placed in the target area, and generating a corresponding three-dimensional virtual object model for each article to be placed in the target area.
In the application scenario of automatically building a three-dimensional virtual scene of a supermarket, the target objects may include at least shelves and articles.
As one example, the target object may include at least multiple types of shelves corresponding to a real-world scene, where the shelves of different types are different in size, shape, and number of layers; accordingly, the target object may include at least a plurality of types of articles, wherein the different types of articles are different in size and shape, such as bottled beverages, canned beverages, bagged foods, various types of living goods, and the like.
For a specific process of generating a corresponding three-dimensional virtual object model for each shelf to be placed in the target area and generating a corresponding three-dimensional virtual object model for each item to be placed in the target area, reference may be made to the related description of step 102 in the above embodiment one, and details are not repeated here.
Further, as an example, a supermarket database may be established in advance, and the supermarket database may be used for storing shelf information, article information, price tag information, and the like. Wherein the shelf information may include: the type, name, size, layer number of the goods shelf, the size of the accommodating space of each layer, a three-dimensional virtual object model corresponding to the goods shelf and the like; the item information may include: the type, name, size, price, shelf life of the article, a three-dimensional virtual object model corresponding to the article and the like; the price tag information may include: the type, the size, the price, the three-dimensional virtual object model corresponding to the price tag and the like of the price tag.
Based on this example, the user may perform operations such as "add, delete, check, and modify" on the supermarket database. For example, when the supermarket stocks a new article, the user may enter the information of that article into the supermarket database; when the supermarket eliminates and replaces shelves, the user may delete the information of the eliminated shelves from the supermarket database and enter the information of the new shelves.
Based on this example, in the process of automatically building the three-dimensional virtual scene model of the supermarket, the three-dimensional virtual object models corresponding to the shelves and the articles can be obtained from the supermarket database.
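As an illustration of the supermarket database described above, the following minimal Python sketch models shelf and article records together with simple "add, delete, check, and modify" operations. All field names, types, and the in-memory storage are assumptions made for illustration; the patent does not prescribe a concrete schema.

```python
from dataclasses import dataclass

@dataclass
class ShelfInfo:
    shelf_type: str        # e.g. "A", "B", "C"
    name: str
    size: tuple            # (length, height, width), meters
    num_layers: int
    layer_spaces: list     # accommodation-space size of each layer
    model_path: str        # path to the shelf's three-dimensional virtual object model

@dataclass
class ItemInfo:
    item_type: str
    name: str
    size: tuple            # (length, height, width), centimeters
    price: float
    shelf_life_days: int
    model_path: str        # path to the article's three-dimensional virtual object model

class SupermarketDB:
    """In-memory stand-in for the supermarket database."""
    def __init__(self):
        self.shelves, self.items = {}, {}

    def add_item(self, item: ItemInfo):          # "add": a new article is stocked
        self.items[item.name] = item

    def delete_shelf(self, name: str):           # "delete": a shelf is eliminated
        self.shelves.pop(name, None)

    def get_item(self, name: str) -> ItemInfo:   # "check": look a record up
        return self.items[name]
```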
Step 203: and placing the three-dimensional virtual object model corresponding to each shelf in the three-dimensional virtual environment model according to the set shelf placing rule.
In the embodiment of the present application, a shelf placement rule may be configured in advance, and the shelf placement rule may be used to define a placement direction of a shelf in a supermarket, a distance interval between adjacent shelves, the number of shelves, and the like.
As an example, different types of shelves may correspond to different shelf placement rules, e.g., a shelf of a certain type may have only one side on which items may be placed, and a shelf of that type may be placed against a wall, while a shelf of another type may have both sides on which items may be placed, and a shelf of that type may not be placed against a wall.
As another example, a uniform shelf placement rule may be pre-configured, and the shelf placement rule may be used to define the type of shelves placed in the supermarket, the number of shelves of each type, the distance between adjacent shelves, the placement direction of shelves of each type in the supermarket, and the like.
In the embodiment of the application, the three-dimensional virtual object model corresponding to each shelf can be placed in the three-dimensional virtual environment model according to the set shelf placement rule and the two-dimensional plane diagram of the supermarket.
Taking the above unified shelf placement rule as an example, assume that the unified shelf placement rule indicates: all types of shelves are placed in the supermarket in the north-south direction, the distance interval between adjacent shelves is 1 meter, and one type-A shelf, two type-B shelves, and two type-C shelves are placed in sequence from west to east. Then, based on this shelf placement rule, the effect of placing the three-dimensional virtual object model of each shelf in the three-dimensional virtual environment model can be as shown in FIG. 3, and a sketch of applying such a rule is given below.
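As a sketch of applying this unified shelf placement rule, the west-to-east offset of each shelf can be accumulated from the shelf widths plus the 1-meter spacing. The function and the shelf widths below are illustrative assumptions, not the patent's implementation:

```python
def layout_shelves(shelf_widths, gap=1.0):
    """Return the west-to-east x-offset of each shelf placed in sequence,
    separated by `gap` meters, per the unified rule in the example above."""
    positions, x = [], 0.0
    for width in shelf_widths:
        positions.append(x)
        x += width + gap
    return positions

# One type-A shelf (1.2 m), two type-B (1.0 m each), two type-C (0.8 m each):
print(layout_shelves([1.2, 1.0, 1.0, 0.8, 0.8]))
# [0.0, 2.2, 4.2, 6.2, 8.0]
```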
Step 204: and placing the three-dimensional virtual object model corresponding to each article in the three-dimensional virtual object model corresponding to each shelf according to the set article placement rule.
In this step, first, an article placement area in the three-dimensional virtual object model corresponding to each shelf is identified, then, for each article placement area, a target article to be placed in the article placement area is determined, and finally, the three-dimensional virtual object model corresponding to the target article is placed in the article placement area according to a set article placement rule.
As one example, a target item to be placed may be specified by a user for each item placement area.
As another example, the target item to be placed may be automatically determined for each item placement area. In one example, taking a certain article placement area as an example, the size information of the virtual accommodation space of the article placement area may be matched with the size information of the three-dimensional virtual object model corresponding to each article, and the target article to be placed in the article placement area may be determined according to the matching result. For example, assuming that the height of the article placement area is 20cm, the height of the three-dimensional virtual object model corresponding to bottled cola is 25cm, and the height of the three-dimensional virtual object model corresponding to barreled instant noodles is 15cm, the barreled instant noodles can be determined as the target article to be placed in the article placement area.
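The size matching described above can be sketched as a per-dimension comparison between the virtual accommodation space and each article's three-dimensional virtual object model. The code below is an assumed illustration built around the bottled-cola and instant-noodles example; the matching criterion and data layout are not taken from the patent:

```python
def pick_target_item(area_size, items):
    """Return an article whose model fits the placement area's virtual
    accommodation space, preferring the tallest fitting article."""
    fits = [item for item in items
            if all(m <= a for m, a in zip(item["model_size"], area_size))]
    return max(fits, key=lambda item: item["model_size"][1], default=None)

items = [
    {"name": "bottled cola",             "model_size": (6, 25, 6)},    # 25 cm tall
    {"name": "barreled instant noodles", "model_size": (12, 15, 12)},  # 15 cm tall
]
# Area 30 x 20 x 40 cm: the 25 cm cola is rejected, the noodles are chosen.
print(pick_target_item((30, 20, 40), items)["name"])  # barreled instant noodles
```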
As an example, the above-mentioned article placement rules may be pre-configured, and may be used to define the placement direction of the articles on the shelf, the distance interval between adjacent articles on the shelf, the number of articles placed on each layer of the shelf, and so on.
As an example, different types of articles may correspond to different article placement rules, e.g., one type of article may not be stacked in a vertical direction, such as a bottled beverage, while another type of article may be stacked in a vertical direction, such as an article having an outer package shape that approximates a rectangular parallelepiped.
As another example, at least one item display diagram may be generated according to the size information of the virtual accommodation space of the item placement area and the size information of the three-dimensional virtual object model corresponding to the target item, and the at least one item display diagram may be output. And then, acquiring a target article display diagram selected by the user, and generating an article placement rule according to the target article display diagram.
As an example, the specific process of "placing the three-dimensional virtual object model corresponding to the target item in the item placement area according to the set item placement rule" may include: first, a corresponding three-dimensional space coordinate system is established for the article placement area, for example, as shown in fig. 4, a larger rectangular parallelepiped in fig. 4 represents a virtual accommodation space of the article placement area, in fig. 4, an X-axis direction of the three-dimensional space coordinate system corresponds to a horizontal direction of the article placement area, a Y-axis direction corresponds to a vertical direction of the article placement area, and a Z-axis direction is perpendicular to the X-axis direction and the Y-axis direction. Then, a three-dimensional virtual object model corresponding to the target object is placed on the object placement area from the coordinate origin of the three-dimensional space coordinate system along the X-axis direction, the Y-axis direction and the Z-axis direction of the three-dimensional space coordinate system until a preset object placement condition is met.
As an example, the preset article placing condition may include: the number of the placed articles reaches the preset number, or the article placing area is full.
In an example, when the target object is an object whose outer package shape is approximately a cuboid (for example, a smaller cuboid in fig. 4 represents a three-dimensional virtual object model corresponding to the target object), and the preset object placing condition is that the object placing area is full, the three-dimensional virtual object model corresponding to the target object may be placed on the object placing area according to the following procedure:
First, the coordinate origin O of the three-dimensional space coordinate system in FIG. 4 is taken as the first current point, and a three-dimensional virtual object model corresponding to the target article is placed at the first current point. After the placement succeeds, the first current point is shifted along the X-axis direction by a first distance value S1, where S1 is the length of the three-dimensional virtual object model corresponding to the target article; for example, in FIG. 4, point P1 is the first current point after one shift. It is then determined whether the distance between the first current point and a first designated point is greater than or equal to S1, where the coordinate information of the first designated point is (E1, 0, 0) and E1 is the length of the virtual accommodation space of the article placement area; in FIG. 4, point P2 is the first designated point. If so, the article placement area can still accommodate the target article in the X-axis direction, and the process returns to the step of placing a three-dimensional virtual object model corresponding to the target article at the first current point. Once the distance between the first current point and the first designated point is less than S1, the article placement area can no longer accommodate the target article in the X-axis direction, and the current flow ends. At this point, the placement of the target article in the X-axis direction of the article placement area is completed.
Then, a second designated point is taken as the second current point, where the coordinate information of the second designated point is (0, S2, 0) and S2 is the height of the three-dimensional virtual object model corresponding to the target article; for example, in FIG. 4, point P3 is the second designated point. A three-dimensional virtual object model corresponding to the target article is placed at the second current point, and after the placement succeeds, the second current point is shifted along the Y-axis direction by S2; in FIG. 4, point P4 is the second current point after one shift. It is then determined whether the distance between the second current point and a third designated point is greater than or equal to S2, where the coordinate information of the third designated point is (0, E2, 0) and E2 is the height of the virtual accommodation space of the article placement area; in FIG. 4, point P5 is the third designated point. If so, the article placement area can still accommodate the target article in the Y-axis direction, and the process returns to the step of placing a three-dimensional virtual object model corresponding to the target article at the second current point. Once the distance between the second current point and the third designated point is less than S2, the article placement area can no longer accommodate the target article in the Y-axis direction, and the current flow ends. At this point, the placement of the target article in the Y-axis direction of the article placement area is completed.
Finally, a fourth designated point is taken as the third current point, where the coordinate information of the fourth designated point is (0, 0, S3) and S3 is the width of the three-dimensional virtual object model corresponding to the target article; for example, in FIG. 4, point P6 is the fourth designated point. A three-dimensional virtual object model corresponding to the target article is placed at the third current point, and after the placement succeeds, the third current point is shifted along the Z-axis direction by S3; in FIG. 4, point P7 is the third current point after one shift. It is then determined whether the distance between the third current point and a fifth designated point is greater than or equal to S3, where the coordinate information of the fifth designated point is (0, 0, E3) and E3 is the width of the virtual accommodation space of the article placement area; in FIG. 4, point P8 is the fifth designated point. If so, the article placement area can still accommodate the target article in the Z-axis direction, and the process returns to the step of placing a three-dimensional virtual object model corresponding to the target article at the third current point. Once the distance between the third current point and the fifth designated point is less than S3, the article placement area can no longer accommodate the target article in the Z-axis direction, and the current flow ends. At this point, the placement of the target article in the Z-axis direction of the article placement area is completed.
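The per-axis walk described above amounts to tiling the virtual accommodation space with copies of the item model until no further copy fits. The following Python sketch generalizes that walk into a full grid fill; nesting the three axis walks as loops is our assumption, and all names are illustrative:

```python
def fill_area(area_size, item_size):
    """Tile the virtual accommodation space (E1, E2, E3) with copies of an
    item model of size (S1, S2, S3), returning the origin corner of each
    placed copy."""
    E1, E2, E3 = area_size
    S1, S2, S3 = item_size
    placements = []
    x = 0.0
    while E1 - x >= S1:              # can another copy fit along X?
        y = 0.0
        while E2 - y >= S2:          # ... along Y?
            z = 0.0
            while E3 - z >= S3:      # ... along Z?
                placements.append((x, y, z))
                z += S3
            y += S2
        x += S1
    return placements

# A 100 x 40 x 60 cm space holds 5 x 2 x 4 = 40 items of size 20 x 18 x 15 cm:
print(len(fill_area((100, 40, 60), (20, 18, 15))))  # 40
```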
Step 205: and determining rendering textures corresponding to the three-dimensional virtual object model aiming at the three-dimensional virtual object model corresponding to each shelf.
Step 206: and determining rendering textures corresponding to the three-dimensional virtual object model aiming at the three-dimensional virtual object model corresponding to each article.
Step 207: in the three-dimensional virtual scene model, aiming at the three-dimensional virtual object model corresponding to each article, rendering is carried out on the three-dimensional virtual object model by utilizing the rendering texture corresponding to the three-dimensional virtual object model.
Step 208: and aiming at the three-dimensional virtual object model corresponding to each shelf, rendering the areas of the three-dimensional virtual object model except the three-dimensional virtual object model corresponding to the articles according to the rendering textures corresponding to the three-dimensional virtual object model.
For detailed descriptions of step 205 to step 208, reference may be made to the related description of step 103 in Embodiment One above, and details are not repeated here.
It should be noted that the order of steps 201 to 208 above is merely an example; in practical applications, other execution orders that conform to correct logic are possible. For example, step 204 may be executed before step 203, and step 207 may be executed in parallel with step 208, or step 207 may be executed first and then step 208. The present application does not enumerate further examples.
As can be seen from the above embodiment, a three-dimensional virtual environment model of the supermarket, a three-dimensional virtual object model of each shelf, and a three-dimensional virtual object model of each article are constructed; the three-dimensional virtual object model of each shelf is placed in the constructed three-dimensional virtual environment model according to the set shelf placement rule, and the three-dimensional virtual object model of each article is placed in the three-dimensional virtual object models of the shelves according to the set article placement rule, so that a three-dimensional virtual scene model of the supermarket is obtained; the three-dimensional virtual scene model is then rendered to generate the three-dimensional virtual scene.
This completes the description of Embodiment Two.
In addition, in the present application, after the three-dimensional virtual scene is generated, the three-dimensional virtual scene may be presented to the user.
As an example, the electronic device may control the display process of the three-dimensional virtual scene by interacting with the user. Specifically, the user may control the display angle of the three-dimensional virtual scene by setting the camera parameters of a virtual camera in the three-dimensional virtual scene model, so as to observe the three-dimensional virtual scene from different viewing angles. As will be understood by those skilled in the art, the electronic device presents the three-dimensional virtual scene to the user in the form of a two-dimensional image; for convenience of description, in the embodiments of the present application, the presentation image of the three-dimensional virtual scene at a certain viewing angle is referred to as a target image. FIG. 5 shows an example of the target image.
Next, the following Embodiment Three explains the generation process of the target image.
Embodiment Three
Referring to fig. 6, a flowchart of an embodiment of a process for generating a target image according to an exemplary embodiment of the present application includes the following steps:
step 601: and acquiring the camera parameters of the virtual camera in the set three-dimensional virtual scene model.
As an example, the camera parameters may include pitch (pitch angle), yaw (yaw angle), and Scale (the distance between the camera and the object); FIG. 7 shows an example of the camera parameters of the virtual camera.
Step 602: and determining a transformation matrix of phase transformation between a world coordinate system and an image coordinate system according to the camera parameters, wherein the world coordinate system is a coordinate system corresponding to the three-dimensional virtual scene.
In the embodiment of the present application, a transformation matrix View for converting between the world coordinate system and the image coordinate system may be determined according to the camera parameters: View may be calculated by formula (one), in which eye may be calculated by formula (two), forword (the forward vector) by formula (three), right by formula (four), and head (the up vector) by formula (five). [The images of formulas (one) through (five) are not reproduced here; they define View from the camera's eye position and its forward, right, and head vectors.]
Step 603: and converting the coordinate position information of each object in the three-dimensional virtual scene in the world coordinate system into target position information in the image coordinate system by using the conversion matrix.
In the embodiment of the present application, the coordinate position information of each object in the three-dimensional virtual scene in the world coordinate system may be converted into the target position information in the image coordinate system by using the conversion matrix View given by formula (one), for example by means of formula (six). [The image of formula (six) is not reproduced here.]
In formula (six), w represents the horizontal-axis coordinate value of the object in the image coordinate system, h represents the vertical-axis coordinate value of the object in the image coordinate system, (w, h) is the above target position information, and depth represents the distance from the camera; x, y, and z represent the coordinate values of the object along the x-, y-, and z-axes of the world coordinate system, and (x, y, z) is the above coordinate position information.
Step 604: and aiming at each object in the three-dimensional virtual scene, mapping the object to a corresponding position on a two-dimensional image configuration surface according to the target position information of the object in the image coordinate system to obtain a target image.
In the embodiment of the present application, for each object in the three-dimensional virtual scene, the object is mapped to a corresponding position on the two-dimensional image configuration plane according to the target position information of the object in the image coordinate system, so as to obtain the target image illustrated in fig. 5.
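Since the images of formulas (one) through (six) are not reproduced above, the following sketch shows one standard way to realize the same pipeline: build a look-at view matrix from pitch, yaw, and Scale, then project world coordinates to image coordinates. This is the conventional computer-graphics construction, assumed here rather than taken from the patent's formulas, and the intrinsics f, cx, cy are illustrative:

```python
import numpy as np

def view_matrix(pitch, yaw, scale, target=np.zeros(3)):
    """Look-at view matrix derived from pitch, yaw (radians), and Scale
    (camera-to-target distance).  Standard construction assumed in place
    of formulas (one) through (five)."""
    eye = target + scale * np.array([np.cos(pitch) * np.sin(yaw),
                                     np.sin(pitch),
                                     np.cos(pitch) * np.cos(yaw)])
    forward = (target - eye) / np.linalg.norm(target - eye)
    right = np.cross(forward, [0.0, 1.0, 0.0])
    right /= np.linalg.norm(right)
    head = np.cross(right, forward)              # the camera's up vector
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = right, head, -forward
    view[:3, 3] = -view[:3, :3] @ eye            # move the eye to the origin
    return view

def world_to_image(point, view, f=800.0, cx=640.0, cy=360.0):
    """Map world coordinates (x, y, z) to image coordinates (w, h) plus
    depth, in the spirit of formula (six); f, cx, cy are assumed intrinsics."""
    x_cam, y_cam, z_cam, _ = view @ np.append(point, 1.0)
    depth = -z_cam                               # distance along the view axis
    w = cx + f * x_cam / depth
    h = cy - f * y_cam / depth                   # image h grows downward
    return w, h, depth
```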
As can be seen from the above embodiment, a transformation matrix for converting between the world coordinate system and the image coordinate system is determined according to the set camera parameters of the virtual camera in the three-dimensional virtual scene model; the coordinate position information of each object in the three-dimensional virtual scene in the world coordinate system is converted into target position information in the image coordinate system using the transformation matrix; and each object in the three-dimensional virtual scene is mapped to the corresponding position on the two-dimensional image configuration surface according to its target position information in the image coordinate system, to obtain the target image. In this way, the user can control the display angle of the three-dimensional virtual scene by setting the camera parameters of the virtual camera in the three-dimensional virtual scene model, so as to observe the three-dimensional virtual scene from different viewing angles, which improves the user experience.
This completes the description of Embodiment Three.
In addition, in some scenarios involving the training of an image recognition model, when the image recognition model is trained based on the target image obtained in Embodiment Three, the position information, type information, and/or quantity information of the articles in the target image may be labeled first (as will be understood by those skilled in the art, the articles in the target image are not real articles but virtual articles), and the image recognition model may then be trained according to at least one of the labeled position information, type information, and quantity information of each article.
The following Embodiment Four describes the process of labeling the position information of the three-dimensional virtual object model corresponding to an article in the target image:
first, for ease of understanding, the preconditions for implementing the labeling process are explained:
as an example, in the second embodiment, the rendering texture of the three-dimensional virtual object model corresponding to the article may be a solid color texture, that is, a texture having a single color value, and the rendering textures of different articles have different single color values. Through the processing, different objects can be represented in different colors in the three-dimensional virtual scene, and in the rendering manner, the target image obtained through the third embodiment can be as shown in fig. 8.
Embodiment Four
Referring to fig. 9, a flowchart of an embodiment of a process for labeling location information of an item in a target image according to an exemplary embodiment of the present application includes the following steps:
step 901: and selecting target pixel points which are not included in any set in the target image as current pixel points, wherein the target pixel points are used for representing articles.
Step 902: determining whether a target pixel point which has the same color value as the current pixel point and is not included in any pixel point set exists in all the pixel points adjacent to the current pixel point; if yes, go to step 803; if not, go to step 904;
step 903: the target pixel point and the current pixel point are classified into the same pixel point set, the target pixel point is used as the current pixel point, and the step 902 is returned to be executed;
the steps 901 to 903 are explained as follows:
first, in the embodiment of the present application, for convenience of description, a pixel point of a three-dimensional virtual object model corresponding to an object in a target image is referred to as a target pixel point.
As an example, a target pixel point that is not included in any pixel point set may be selected in the target image as the current pixel point; for example, point Q in FIG. 10 is taken as the current pixel point. It is then determined whether, among all the pixel points adjacent to the current pixel point, there is a target pixel point which has the same color value as the current pixel point and is not included in any pixel point set. If yes, that target pixel point and the current pixel point can be classified into the same pixel point set, the target pixel point is taken as the new current pixel point, and step 902 is executed again, until no pixel point adjacent to the current pixel point has the same color value and remains unclassified, at which point a complete pixel point set is obtained. For example, as shown in FIG. 10, all the pixel points in the region represented by the ellipse form one pixel point set, and this pixel point set corresponds to the three-dimensional virtual object model of one article.
Step 904: and aiming at the pixel point set, marking the object represented by the target pixel point in the pixel point set by an external rectangular frame in the target image.
In this step, for the pixel point set, in the target image, an external rectangular frame may be used to mark the article represented by the target pixel point in the pixel point set, for example, as shown in fig. 11, this is an example of the target image with marking information.
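A minimal sketch of steps 901 through 904 is given below: it groups 4-adjacent target pixels with equal color values into pixel point sets via flood fill and returns the circumscribed rectangular frame of each set. The image representation and function names are illustrative assumptions:

```python
from collections import deque

def label_items(img, background):
    """Flood-fill 4-adjacent pixels of equal color into pixel point sets
    (steps 901-903) and return each set's circumscribed rectangle as
    (color, (left, top, right, bottom)) (step 904)."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for r in range(h):
        for c in range(w):
            if seen[r][c] or img[r][c] == background:
                continue                      # not an unvisited target pixel
            color, queue = img[r][c], deque([(r, c)])
            seen[r][c] = True
            rmin = rmax = r
            cmin = cmax = c
            while queue:                      # grow one pixel point set
                y, x = queue.popleft()
                rmin, rmax = min(rmin, y), max(rmax, y)
                cmin, cmax = min(cmin, x), max(cmax, x)
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and not seen[ny][nx] and img[ny][nx] == color):
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            boxes.append((color, (cmin, rmin, cmax, rmax)))
    return boxes
```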
As can be seen from the above embodiment, adjacent pixel points with the same color value are classified into the same pixel point set, and for each pixel point set the article represented by its target pixel points is labeled in the target image with a circumscribed rectangular frame. In this way, the position information of the articles in the target image can be labeled, which facilitates subsequent training of an image recognition model based on the target image.
This completes the description of Embodiment Four.
Corresponding to the embodiment of the method for generating the three-dimensional virtual scene, the application also provides an embodiment of a device for generating the three-dimensional virtual scene.
Referring to fig. 12, a block diagram of an embodiment of an apparatus for generating a three-dimensional virtual scene according to an exemplary embodiment of the present application is provided, where the apparatus may include: a first generation module 121, a second generation module 122, and a third generation module 123.
The first generating module 121 may be configured to generate a corresponding three-dimensional virtual environment model for the target area;
a second generating module 122, configured to generate a corresponding three-dimensional virtual object model for each target object to be placed in the target area;
the third generating module 123 may be configured to place, based on the set placement rule of each target object in the target area, a three-dimensional virtual object model corresponding to each target object in the three-dimensional virtual environment model to form a three-dimensional virtual scene.
In one embodiment, the target object includes at least a shelf, an item;
the third generating module 123 may include (not shown in fig. 12):
the first placement submodule is used for placing the three-dimensional virtual object model corresponding to each shelf in the three-dimensional virtual environment model according to the set shelf placement rule;
and the second placing sub-module is used for placing the three-dimensional virtual object model corresponding to each article in the three-dimensional virtual object model corresponding to each shelf according to the set article placing rule.
In an embodiment, the second placement sub-module may include (not shown in fig. 12):
the identification submodule is used for identifying an article placement area in the three-dimensional virtual object model corresponding to each shelf;
the article determining sub-module is used for determining a target article to be placed in each article placing area;
and the article placing sub-module is used for placing the three-dimensional virtual object model corresponding to the target article in the article placing area according to the set article placing rule.
In an embodiment, the item determination submodule is specifically configured to:
and matching the size information of the virtual accommodating space of the article placing area with the size information of the three-dimensional virtual object model corresponding to each article, and determining a target article to be placed in the article placing area according to a matching result.
In an embodiment, the apparatus may further comprise (not shown in fig. 12):
an output module, configured to generate at least one article display diagram according to the size information of the virtual accommodation space of the article placement area and the size information of the three-dimensional virtual object model corresponding to the target article, and output the at least one article display diagram;
and the rule setting module is used for acquiring a target article display schematic diagram selected by a user and generating an article placement rule according to the target article display schematic diagram.
In an embodiment, the item placement sub-module may include (not shown in fig. 12):
a coordinate system construction submodule for establishing a corresponding three-dimensional space coordinate system for the article placing area, wherein a coordinate origin of the three-dimensional space coordinate system corresponds to an end point of one end of the article placing area, an X-axis direction of the three-dimensional space coordinate system corresponds to a horizontal direction of the article placing area, a Y-axis direction corresponds to a vertical direction of the article placing area, and a Z-axis direction is perpendicular to the X-axis direction and the Y-axis direction;
and the first processing submodule is used for placing a three-dimensional virtual object model corresponding to the target object on the object placing area from the coordinate origin of the three-dimensional space coordinate system along the X-axis direction, the Y-axis direction and the Z-axis direction of the three-dimensional space coordinate system until a preset object placing condition is met.
In an embodiment, the first processing sub-module is specifically configured to:
firstly, taking the coordinate origin as a first current point, placing a three-dimensional virtual object model corresponding to a target article at the first current point, after the placing is successful, shifting the first current point by a first distance value S1 along the X-axis direction, wherein S1 is the length of the three-dimensional virtual object model corresponding to the target article, and judging whether the distance between the first current point and a first designated point is greater than or equal to S1, the coordinate information of the first designated point is (E1, 0, 0), and E1 is the length of a virtual accommodation space of the article placement area; if yes, returning to the step of placing a three-dimensional virtual object model corresponding to the target object at the first current point, and if not, ending the current flow;
taking the second designated point as a second current point, where the coordinate information of the second designated point is (0, S2, 0) and S2 is the height of the three-dimensional virtual object model corresponding to the target article, placing the three-dimensional virtual object model corresponding to the target article at the second current point, shifting the second current point along the Y-axis direction by S2 after successful placement, and judging whether the distance between the second current point and the third designated point is greater than or equal to S2, where the coordinate information of the third designated point is (0, E2, 0) and E2 is the height of the virtual accommodation space of the article placement area; if so, returning to the step of placing the three-dimensional virtual object model corresponding to the target article at the second current point, and if not, ending the current flow;
and taking the fourth designated point as a third current point, where the coordinate information of the fourth designated point is (0, 0, S3) and S3 is the width of the three-dimensional virtual object model corresponding to the target article, placing the three-dimensional virtual object model corresponding to one target article at the third current point, shifting the third current point along the Z-axis direction by S3 after successful placement, and judging whether the distance between the third current point and the fifth designated point is greater than or equal to S3, where the coordinate information of the fifth designated point is (0, 0, E3) and E3 is the width of the virtual accommodation space of the article placement area; if so, returning to the step of placing the three-dimensional virtual object model corresponding to one target article at the third current point, and if not, ending the current flow.
In an embodiment, the third generating module 123 may include (not shown in fig. 12):
the third placement sub-module is used for placing a three-dimensional virtual object model corresponding to each target object in the three-dimensional virtual environment model based on the set placement rule of each target object in the target area to form a three-dimensional virtual scene model;
and the first rendering submodule is used for rendering the three-dimensional virtual scene model to form a three-dimensional virtual scene.
In an embodiment, the first rendering sub-module may include (not shown in fig. 12):
the first texture determining submodule is used for determining rendering textures corresponding to the three-dimensional virtual object model aiming at the three-dimensional virtual object model corresponding to each shelf;
the second texture determining submodule is used for determining rendering textures corresponding to the three-dimensional virtual object model aiming at the three-dimensional virtual object model corresponding to each article;
the second rendering submodule is used for rendering the three-dimensional virtual object model by using the rendering texture corresponding to the three-dimensional virtual object model aiming at the three-dimensional virtual object model corresponding to each article in the three-dimensional virtual scene model; and aiming at the three-dimensional virtual object model corresponding to each shelf, rendering the areas of the three-dimensional virtual object model except the three-dimensional virtual object model corresponding to the articles according to the rendering textures corresponding to the three-dimensional virtual object model.
In an embodiment, the apparatus may further comprise (not shown in fig. 12):
the parameter acquisition module is used for acquiring the set camera parameters of the virtual camera in the three-dimensional virtual scene model;
the matrix determination module is used for determining, according to the camera parameters, a conversion matrix for converting between a world coordinate system and an image coordinate system, where the world coordinate system is the coordinate system corresponding to the three-dimensional virtual scene;
the information conversion module is used for converting the coordinate position information of each object in the three-dimensional virtual scene in a world coordinate system into target position information in an image coordinate system by using the conversion matrix;
and the mapping module is used for mapping each object in the three-dimensional virtual scene to a corresponding position on the two-dimensional image configuration surface according to the target position information of the object in the image coordinate system to obtain a target image.
In an embodiment, the apparatus may further comprise (not shown in fig. 12):
the marking module is used for marking the position information, the type information and/or the quantity information of the three-dimensional virtual object model corresponding to the object in the target image;
and the model training module is used for training the image recognition model according to at least one item of the position information, the type information and the quantity information of the marked articles.
In one embodiment, the tagging module may include (not shown in fig. 12):
the selection submodule is used for selecting target pixel points which are not included in any set in the target image as current pixel points, and the target pixel points are used for representing articles;
the judgment submodule is used for determining whether a target pixel point which has the same color value as the current pixel point and is not included in any pixel point set exists in all the pixel points adjacent to the current pixel point;
the set classification submodule is used for classifying the target pixel point and the current pixel point into the same pixel point set if the set classification submodule exists; taking the target pixel point as a current pixel point, and returning to the step of determining whether a target pixel point which has the same color value as the current pixel point and is not classified into any pixel point set exists in all pixel points adjacent to the current pixel point;
and the second processing submodule is used for marking out an external rectangular frame of the article corresponding to the target pixel point in the pixel point set in the target image aiming at the pixel point set.
With continued reference to fig. 13, the present application further provides an electronic device, which includes a processor 1301, a communication interface 1302, a memory 1303, and a communication bus 1304.
The processor 1301, the communication interface 1302 and the memory 1303 communicate with each other through a communication bus 1304;
a memory 1303 for storing a computer program;
the processor 1301 is configured to execute the computer program stored in the memory 1303, and when the processor 1301 executes the computer program, the steps of the method for generating a three-dimensional virtual scene provided in the embodiment of the present application are implemented.
The present application further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the method for generating a three-dimensional virtual scene provided in the embodiments of the present application.
The implementation of the functions and roles of each unit in the above apparatus is described in detail in the implementation of the corresponding steps of the above method, and is not repeated here.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) …" does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.
Claims (15)
1. A method for generating a three-dimensional virtual scene, the method comprising:
generating a corresponding three-dimensional virtual environment model for the target area;
generating a corresponding three-dimensional virtual object model for each target object to be placed in the target area;
and placing a three-dimensional virtual object model corresponding to each target object in the three-dimensional virtual environment model based on the set placement rule of each target object in the target area to form a three-dimensional virtual scene.
2. The method of claim 1, wherein the target object includes at least a shelf and an article;
the step of placing a three-dimensional virtual object model corresponding to each target object in the three-dimensional virtual environment model based on the set placement rule of each target object in the target area to form a three-dimensional virtual scene includes:
placing the three-dimensional virtual object model corresponding to each shelf in the three-dimensional virtual environment model according to the set shelf placement rule;
and placing the three-dimensional virtual object model corresponding to each article in the three-dimensional virtual object model corresponding to each shelf according to the set article placement rule.
3. The method of claim 2, wherein the placing the three-dimensional virtual object model corresponding to each article in the three-dimensional virtual object model corresponding to each shelf according to the set article placement rule comprises:
identifying an article placing area in the three-dimensional virtual object model corresponding to each shelf;
for each article placing area, determining a target article to be placed in the article placing area;
and placing the three-dimensional virtual object model corresponding to the target article in the article placing area according to the set article placement rule.
4. The method of claim 3, wherein the determining the target article to be placed in the article placing area comprises:
and matching the size information of the virtual accommodating space of the article placing area with the size information of the three-dimensional virtual object model corresponding to each article, and determining a target article to be placed in the article placing area according to a matching result.
5. The method of claim 3, wherein the article placement rule is set by:
generating at least one article display schematic diagram according to the size information of the virtual accommodating space of the article placing area and the size information of the three-dimensional virtual object model corresponding to the target article, and outputting the at least one article display schematic diagram;
and acquiring a target article display diagram selected by a user, and generating an article placement rule according to the target article display diagram.
6. The method of claim 2, wherein the placing the three-dimensional virtual object model corresponding to the target article in the article placing area comprises:
establishing a corresponding three-dimensional space coordinate system for the article placing area, wherein a coordinate origin of the three-dimensional space coordinate system corresponds to an end point of one end of the article placing area, an X-axis direction of the three-dimensional space coordinate system corresponds to a horizontal direction of the article placing area, a Y-axis direction corresponds to a vertical direction of the article placing area, and a Z-axis direction is perpendicular to the X-axis direction and the Y-axis direction;
and placing a three-dimensional virtual object model corresponding to the target article on the article placing area from the coordinate origin of the three-dimensional space coordinate system along the X-axis direction, the Y-axis direction and the Z-axis direction of the three-dimensional space coordinate system until a preset article placing condition is met.
7. The method according to claim 6, wherein the placing a three-dimensional virtual object model corresponding to the target article on the article placing area from the coordinate origin of the three-dimensional space coordinate system along the X-axis direction, the Y-axis direction and the Z-axis direction of the three-dimensional space coordinate system until a preset article placing condition is met comprises:
firstly, taking the coordinate origin as a first current point, placing a three-dimensional virtual object model corresponding to one target article at the first current point, shifting the first current point by a first distance value S1 along the X-axis direction after the placing succeeds, wherein S1 is the length of the three-dimensional virtual object model corresponding to the target article, and judging whether the distance between the first current point and a first designated point is greater than or equal to S1, wherein the coordinate information of the first designated point is (E1, 0, 0) and E1 is the length of the virtual accommodating space of the article placing area; if yes, returning to the step of placing a three-dimensional virtual object model corresponding to one target article at the first current point, and if not, ending the current flow;
then taking a second designated point as a second current point, wherein the coordinate information of the second designated point is (0, S2, 0) and S2 is the height of the three-dimensional virtual object model corresponding to the target article; placing a three-dimensional virtual object model corresponding to one target article at the second current point, shifting the second current point by S2 along the Y-axis direction after the placing succeeds, and judging whether the distance between the second current point and a third designated point is greater than or equal to S2, wherein the coordinate information of the third designated point is (0, E2, 0) and E2 is the height of the virtual accommodating space of the article placing area; if yes, returning to the step of placing a three-dimensional virtual object model corresponding to one target article at the second current point, and if not, ending the current flow;
and taking a fourth designated point as a third current point, wherein the coordinate information of the fourth designated point is (0, 0, S3) and S3 is the width of the three-dimensional virtual object model corresponding to the target article; placing a three-dimensional virtual object model corresponding to one target article at the third current point, shifting the third current point by S3 along the Z-axis direction after the placing succeeds, and judging whether the distance between the third current point and a fifth designated point is greater than or equal to S3, wherein the coordinate information of the fifth designated point is (0, 0, E3) and E3 is the width of the virtual accommodating space of the article placing area; if yes, returning to the step of placing a three-dimensional virtual object model corresponding to one target article at the third current point, and if not, ending the current flow.
8. The method according to claim 2, wherein the placing a three-dimensional virtual object model corresponding to each target object in the three-dimensional virtual environment model based on the set placement rule of each target object in the target area to form a three-dimensional virtual scene comprises:
based on the set placement rule of each target object in the target area, placing a three-dimensional virtual object model corresponding to each target object in the three-dimensional virtual environment model to form a three-dimensional virtual scene model;
rendering the three-dimensional virtual scene model to form a three-dimensional virtual scene.
9. The method of claim 8, wherein the rendering the three-dimensional virtual scene model comprises:
for the three-dimensional virtual object model corresponding to each shelf, determining a rendering texture corresponding to the three-dimensional virtual object model;
for the three-dimensional virtual object model corresponding to each article, determining a rendering texture corresponding to the three-dimensional virtual object model;
in the three-dimensional virtual scene model, for the three-dimensional virtual object model corresponding to each article, rendering the three-dimensional virtual object model with the rendering texture corresponding to it; and for the three-dimensional virtual object model corresponding to each shelf, rendering the regions of the three-dimensional virtual object model other than the three-dimensional virtual object models corresponding to the articles with the rendering texture corresponding to the shelf's three-dimensional virtual object model.
10. The method of claim 2, further comprising:
acquiring the set camera parameters of a virtual camera in the three-dimensional virtual scene model;
determining a conversion matrix for mutual conversion between a world coordinate system and an image coordinate system according to the camera parameters, wherein the world coordinate system is a coordinate system corresponding to the three-dimensional virtual scene;
converting coordinate position information of each object in the three-dimensional virtual scene in a world coordinate system into target position information in an image coordinate system by using the conversion matrix;
and for each object in the three-dimensional virtual scene, mapping the object to a corresponding position on a two-dimensional image configuration surface according to the target position information of the object in the image coordinate system to obtain a target image.
11. The method of claim 10, wherein after the mapping the object to the corresponding position on the two-dimensional image configuration surface according to the target position information of the object in the image coordinate system to obtain the target image, the method further comprises:
marking the position information, the type information and/or the quantity information of the object in the target image;
and training an image recognition model according to at least one item of the marked position information, type information and quantity information of each article.
12. The method of claim 11, wherein the marking the position information of the article in the target image comprises:
selecting, in the target image, a target pixel point which has not been classified into any pixel point set as a current pixel point, wherein target pixel points are used for representing articles;
determining whether a target pixel point which has the same color value as the current pixel point and has not been classified into any pixel point set exists among the pixel points adjacent to the current pixel point;
if yes, classifying the target pixel point and the current pixel point into the same pixel point set, taking the target pixel point as the current pixel point, and returning to the step of determining whether a target pixel point which has the same color value as the current pixel point and has not been classified into any pixel point set exists among the pixel points adjacent to the current pixel point; if not, ending the current flow;
and for each pixel point set, marking out, in the target image, the circumscribed rectangular frame of the article corresponding to the target pixel points in the pixel point set.
13. An apparatus for generating a three-dimensional virtual scene, the apparatus comprising:
the first generation module is used for generating a corresponding three-dimensional virtual environment model for the target area;
the second generation module is used for generating a corresponding three-dimensional virtual object model for each target object to be placed in the target area;
and the third generation module is used for placing the three-dimensional virtual object model corresponding to each target object in the three-dimensional virtual environment model based on the set placing rule of each target object in the target area to form a three-dimensional virtual scene.
14. The apparatus of claim 13, wherein the target object comprises at least a shelf and an article;
the third generating module comprises:
the first placement submodule is used for placing the three-dimensional virtual object model corresponding to each shelf in the three-dimensional virtual environment model according to the set shelf placement rule;
and the second placing sub-module is used for placing the three-dimensional virtual object model corresponding to each article in the three-dimensional virtual object model corresponding to each shelf according to the set article placing rule.
15. The apparatus of claim 14, wherein the second placement sub-module comprises:
the identification submodule is used for identifying an article placing area in the three-dimensional virtual object model corresponding to each shelf;
the article determining sub-module is used for determining a target article to be placed in each article placing area;
and the article placing sub-module is used for placing the three-dimensional virtual object model corresponding to the target article in the article placing area according to the set article placing rule.
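To make the per-axis placement of claims 6 and 7 concrete, the following is a minimal sketch, not this application's implementation: it fills one coordinate axis of an article placing area by repeatedly placing a model and shifting the current point by the model's size, which is the S/E comparison each of the three passes performs. All names and dimensions are illustrative:

```python
def fill_axis(space_extent, model_size):
    """Coordinates at which article models are placed along one axis,
    starting from the coordinate origin and shifting by the model size
    until the remaining space cannot hold another model."""
    positions, current = [], 0.0
    while space_extent - current >= model_size:
        positions.append(current)   # place one article model here
        current += model_size       # shift the current point
    return positions

# A shelf level 1.0 long holding articles 0.3 long fits three of them:
print(fill_axis(1.0, 0.3))  # [0.0, 0.3, 0.6]

# Full placement is three such passes over X (length), Y (height) and
# Z (width), mirroring S1/E1, S2/E2 and S3/E3 in claim 7:
placements = [(x, y, z)
              for x in fill_axis(1.0, 0.3)    # E1 = 1.0, S1 = 0.3
              for y in fill_axis(0.5, 0.25)   # E2 = 0.5, S2 = 0.25
              for z in fill_axis(0.4, 0.2)]   # E3 = 0.4, S3 = 0.2
print(len(placements))  # 12 article models fit in this area
```

The stop test `space_extent - current >= model_size` corresponds to the claim's check that the distance from the current point to the designated end point (E1, E2 or E3) is still at least the model's size (S1, S2 or S3).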
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910747186.XA CN112396688B (en) | 2019-08-14 | 2019-08-14 | Three-dimensional virtual scene generation method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112396688A true CN112396688A (en) | 2021-02-23 |
CN112396688B CN112396688B (en) | 2023-09-26 |
Family
ID=74602733
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910747186.XA Active CN112396688B (en) | 2019-08-14 | 2019-08-14 | Three-dimensional virtual scene generation method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112396688B (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102609934A (en) * | 2011-12-22 | 2012-07-25 | 中国科学院自动化研究所 | Multi-target segmenting and tracking method based on depth image |
CN103425825A (en) * | 2013-08-02 | 2013-12-04 | 苏州两江科技有限公司 | 3D supermarket displaying method based on CAD graphic design drawing |
CN106991548A (en) * | 2016-01-21 | 2017-07-28 | 阿里巴巴集团控股有限公司 | A kind of warehouse goods yard planing method, device and electronic installation |
CN108804061A (en) * | 2017-05-05 | 2018-11-13 | 上海盟云移软网络科技股份有限公司 | The virtual scene display method of virtual reality system |
CN107393017A (en) * | 2017-08-11 | 2017-11-24 | 北京铂石空间科技有限公司 | Image processing method, device, electronic equipment and storage medium |
CN108427820A (en) * | 2017-08-12 | 2018-08-21 | 中民筑友科技投资有限公司 | A kind of shelf simulation management method and system based on BIM |
CN109685905A (en) * | 2017-10-18 | 2019-04-26 | 深圳市掌网科技股份有限公司 | Cell planning method and system based on augmented reality |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114065334A (en) * | 2020-08-04 | 2022-02-18 | 广东博智林机器人有限公司 | Method and device for determining measurement position of virtual guiding rule and storage medium |
CN113763113A (en) * | 2021-03-04 | 2021-12-07 | 北京沃东天骏信息技术有限公司 | Article display method and device |
CN113763113B (en) * | 2021-03-04 | 2024-07-16 | 北京沃东天骏信息技术有限公司 | Article display method and device |
CN113760389A (en) * | 2021-04-19 | 2021-12-07 | 北京沃东天骏信息技术有限公司 | Shelf display method, equipment, storage medium and program product based on three dimensions |
WO2022227910A1 (en) * | 2021-04-28 | 2022-11-03 | 腾讯科技(深圳)有限公司 | Virtual scene generation method and apparatus, and computer device and storage medium |
CN115048001A (en) * | 2022-06-16 | 2022-09-13 | 亮风台(云南)人工智能有限公司 | Virtual object display method and device, electronic equipment and storage medium |
CN117495666A (en) * | 2023-12-29 | 2024-02-02 | 山东街景智能制造科技股份有限公司 | Processing method for generating 2D data based on 3D drawing |
CN117495666B (en) * | 2023-12-29 | 2024-03-19 | 山东街景智能制造科技股份有限公司 | Processing method for generating 2D data based on 3D drawing |
CN117876642A (en) * | 2024-03-08 | 2024-04-12 | 杭州海康威视系统技术有限公司 | Digital model construction method, computer program product and electronic equipment |
CN117876642B (en) * | 2024-03-08 | 2024-06-11 | 杭州海康威视系统技术有限公司 | Digital model construction method, computer program product and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN112396688B (en) | 2023-09-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112396688B (en) | Three-dimensional virtual scene generation method and device | |
CN108062784B (en) | Three-dimensional model texture mapping conversion method and device | |
JP6902122B2 (en) | Double viewing angle Image calibration and image processing methods, equipment, storage media and electronics | |
CN108401461A (en) | Three-dimensional mapping method, device and system, cloud platform, electronic equipment and computer program product | |
CN110411441A (en) | System and method for multi-modal mapping and positioning | |
CN110568447A (en) | Visual positioning method, device and computer readable medium | |
CN109795830A (en) | It is automatically positioned the method and device of logistics tray | |
Miller et al. | Interactive 3D model acquisition and tracking of building block structures | |
CN109816769A (en) | Scene map generation method, device and equipment based on depth camera | |
CN110210328A (en) | The method, apparatus and electronic equipment of object are marked in image sequence | |
JP2009080578A (en) | Multiview-data generating apparatus, method, and program | |
CN109711472B (en) | Training data generation method and device | |
US11209277B2 (en) | Systems and methods for electronic mapping and localization within a facility | |
CN110310315A (en) | Network model training method, device and object pose determine method, apparatus | |
US20220415030A1 (en) | AR-Assisted Synthetic Data Generation for Training Machine Learning Models | |
US11900552B2 (en) | System and method for generating virtual pseudo 3D outputs from images | |
CN109559349A (en) | A kind of method and apparatus for calibration | |
CN105096376B (en) | A kind of information processing method and electronic equipment | |
CN111161387A (en) | Method and system for synthesizing image in stacked scene, storage medium and terminal equipment | |
CN111161388B (en) | Method, system, device and storage medium for generating retail commodity shelf images | |
CN109389634A (en) | Virtual shopping system based on three-dimensional reconstruction and augmented reality | |
WO2021167586A1 (en) | Systems and methods for object detection including pose and size estimation | |
CN115641322A (en) | Robot grabbing method and system based on 6D pose estimation | |
CN112184793A (en) | Depth data processing method and device and readable storage medium | |
EP3825804A1 (en) | Map construction method, apparatus, storage medium and electronic device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |