CN109754463B - Three-dimensional modeling fusion method and device - Google Patents
Three-dimensional modeling fusion method and device
Info
- Publication number
- CN109754463B (application CN201910025548.4A)
- Authority
- CN
- China
- Prior art keywords
- three-dimensional model
- target object
- key region
- live-action
- Prior art date
- Legal status
- Active
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Processing Or Creating Images (AREA)
- Image Generation (AREA)
Abstract
The invention provides a three-dimensional modeling fusion method and device, relating to the technical field of three-dimensional geographic information. The three-dimensional modeling fusion method comprises: acquiring a plurality of sampled images, and establishing a live-action three-dimensional model of the target object in a first format according to the sampled images; acquiring pictures of a key region of the sampled images, and establishing a three-dimensional model of the key region in a second format according to the key region pictures, the second format being different from the first format; and fusing the live-action three-dimensional model of the target object with the three-dimensional model of the key region to obtain an optimized model of the target object. Only the pictures of the key region need to be finely modeled, and two models in different formats can be integrated, so the modeling cost is low and the efficiency is high.
Description
Technical Field
The invention relates to the technical field of three-dimensional geographic information, in particular to a three-dimensional modeling fusion method and device.
Background
Oblique photography is a new technology developed in recent years in the international surveying and mapping field. It breaks through the past limitation that conventional aerial photography could only shoot from a vertical angle: by carrying several sensors on the same flight platform, images are acquired from five different angles, one vertical and four oblique, so that rich textures of building tops and sides are acquired quickly, the surroundings of ground objects are reflected truthfully, and the user is presented with a realistic visual world that conforms to human vision.
Oblique photography in the prior art offers high efficiency, high precision, high realism and low cost. When oblique photography modeling is carried out, all of the oblique imagery is modeled, and the resulting scene is attractive and detailed.
However, some projects require fine three-dimensional modeling only of a central key area; if fine three-dimensional modeling is performed on the whole scene based on the oblique photography model, problems such as high modeling cost and low modeling efficiency arise.
Disclosure of Invention
The invention aims to provide a three-dimensional modeling fusion method and device to solve the problems of high modeling cost and low modeling efficiency in the prior art.
In order to achieve the above purpose, the technical scheme adopted by the embodiment of the invention is as follows:
In a first aspect, an embodiment of the present invention provides a three-dimensional modeling fusion method, including: acquiring a plurality of sampled images, and establishing a live-action three-dimensional model of a target object in a first format according to the plurality of sampled images;
acquiring pictures of a key region of the plurality of sampled images, and establishing a three-dimensional model of the key region in a second format according to the key region pictures, wherein the second format and the first format are different formats;
and fusing the live-action three-dimensional model of the target object and the three-dimensional model of the key region to obtain an optimized model of the target object.
Further, fusing the live-action three-dimensional model of the target object and the three-dimensional model of the key region to obtain an optimized model of the target object includes:
deleting the key region from the live-action three-dimensional model to obtain a remaining live-action three-dimensional model;
splicing the three-dimensional model of the key region onto the remaining live-action three-dimensional model, and fusing the edges of the three-dimensional model of the key region with those of the remaining live-action three-dimensional model to obtain the optimized model of the target object.
Further, the three-dimensional model of the key region and the remaining live-action three-dimensional model are triangle network models;
splicing the three-dimensional model of the key region onto the remaining live-action three-dimensional model and fusing their edges to obtain an optimized model of the target object includes:
splicing the three-dimensional model of the key region onto the remaining live-action three-dimensional model;
and fusing the edge triangle points of the three-dimensional model of the key region with the edge triangle points of the remaining live-action three-dimensional model to obtain the optimized model of the target object.
Further, acquiring a plurality of sampled images and establishing a live-action three-dimensional model of the target object in the first format according to the plurality of sampled images includes:
acquiring a plurality of sampled images, adding the sampled images into the coordinate system of the control points, and obtaining the exterior orientation elements of the sampled images through aerial triangulation (space-three densification), wherein the exterior orientation elements describe the image pose;
generating a white model of the target object according to the exterior orientation elements of the sampled images;
obtaining homonymous points (tie points) of the plurality of sampled images according to an image matching algorithm;
generating the target object corresponding to the plurality of sampled images according to the homonymous points of the sampled images;
and calculating texture information of the target object, and mapping the texture information onto the white model of the target object to obtain the live-action three-dimensional model of the target object in the first format.
Further, acquiring the pictures of the key region in the plurality of sampled images and establishing a three-dimensional model of the key region in the second format according to the pictures of the key region includes:
acquiring pictures of the key region in the plurality of sampled images, and generating contour lines of the key region in each sampled image;
generating a white model of the key region according to the contour lines;
and mapping the texture information of the key region onto the white model of the key region to generate the three-dimensional model of the key region in the second format.
In a second aspect, an embodiment of the present invention further provides a three-dimensional modeling fusion apparatus, including: a first acquisition module, configured to acquire a plurality of sampled images and establish a live-action three-dimensional model of a target object in a first format according to the plurality of sampled images;
a second acquisition module, configured to acquire pictures of a key region of the plurality of sampled images and establish a three-dimensional model of the key region in a second format according to the key region pictures, wherein the second format and the first format are different formats;
and a processing module, configured to fuse the live-action three-dimensional model of the target object and the three-dimensional model of the key region to obtain an optimized model of the target object.
Further, the processing module is specifically configured to delete the key region from the live-action three-dimensional model to obtain a remaining live-action three-dimensional model; splice the three-dimensional model of the key region onto the remaining live-action three-dimensional model, and fuse the edges of the three-dimensional model of the key region with those of the remaining live-action three-dimensional model to obtain the optimized model of the target object.
Further, the three-dimensional model of the key region and the remaining live-action three-dimensional model are triangle network models;
the processing module is specifically configured to splice the three-dimensional model of the key region onto the remaining live-action three-dimensional model, and fuse the edges of the three-dimensional model of the key region with those of the remaining live-action three-dimensional model to obtain an optimized model of the target object.
Further, the first acquisition module is specifically configured to acquire a plurality of sampled images, add the sampled images into the coordinate system of the control points, and obtain the exterior orientation elements of the sampled images through aerial triangulation, the exterior orientation elements describing the image pose; generate a white model of the target object according to the exterior orientation elements of the sampled images; obtain homonymous points of the plurality of sampled images according to an image matching algorithm; generate the target object corresponding to the plurality of sampled images according to the homonymous points; and calculate texture information of the target object and map the texture information onto the white model of the target object to obtain the live-action three-dimensional model of the target object in the first format.
Further, the second acquisition module is specifically configured to acquire pictures of the key region in the plurality of sampled images and generate contour lines of the key region in each sampled image;
generate a white model of the key region according to the contour lines; and map the texture information of the key region onto the white model of the key region to generate the three-dimensional model of the key region in the second format.
The beneficial effects of the invention are as follows:
the three-dimensional modeling fusion method provided by the invention comprises the following steps: and acquiring a plurality of sampling images, and establishing a real-scene three-dimensional model of the target object in the first format according to the plurality of sampling images. And acquiring a plurality of key region pictures of the sampled images, and establishing a three-dimensional model of the key region in a second format according to the key region pictures, wherein the second format and the first format are different. And fusing the live three-dimensional model of the target object and the three-dimensional model of the key region to obtain an optimized model of the target object. The three-dimensional modeling fusion method provided by the invention only needs to finely model pictures of key areas, and can integrate two models with different formats, so that the modeling cost is low and the rate is high.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a first schematic flow chart of a three-dimensional modeling fusion method provided by the present application;
FIG. 2 is a second schematic flow chart of the three-dimensional modeling fusion method provided by the present application;
FIG. 3 is a third schematic flow chart of the three-dimensional modeling fusion method provided by the present application;
FIG. 4 is a fourth schematic flow chart of the three-dimensional modeling fusion method provided by the present application;
FIG. 5 is a first schematic diagram of a three-dimensional modeling fusion device provided by the present application;
FIG. 6 is a second schematic diagram of the three-dimensional modeling fusion device provided by the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention.
FIG. 1 is a first schematic flow chart of the three-dimensional modeling fusion method provided in the present application. As shown in FIG. 1, the present invention provides a three-dimensional modeling fusion method, including:
S110: Acquiring a plurality of sampled images, and establishing a live-action three-dimensional model of the target object in a first format according to the plurality of sampled images.
The plurality of sampled images acquired in this embodiment may be obtained by oblique photography, a technology developed in recent years in the international surveying and mapping field: several imaging devices are mounted on the same aircraft, and images are acquired from different angles such as vertical and oblique angles, so that different acquired images are obtained. For example, when the aircraft flies horizontally, one camera is kept parallel to the ground and the other cameras are set at an angle to the ground, so that images are acquired from different viewpoints.
When the aircraft collects landform images, the acquired images are stored in a memory of the aircraft; after image acquisition is completed, the plurality of sampled images are exported from the memory to an application terminal for processing. The application terminal may be a desktop computer, a notebook computer, a tablet computer or a mobile phone; the specific terminal form is not limited here.
Further, the exported acquired images are processed by corresponding software to obtain a live-action three-dimensional model of the terrain and landform photographed by the aircraft, and the live-action three-dimensional model is exported in a first format. The corresponding software may be PhotoScan, PhotoMesh, ContextCapture Center, etc., and the first format may be, for example, xml, kml or osgb. The specific application software and export format are not limited in this embodiment, as long as the plurality of acquired images can be processed into a live-action three-dimensional model.
S120: Acquiring pictures of the key region of the plurality of sampled images, and establishing a three-dimensional model of the key region in a second format according to the key region pictures.
Because the live-action three-dimensional model obtained in S110 is not singulated (individual objects cannot be selected and edited), it is inconvenient for later application extension and attribute linking, so the information contained in the live-action three-dimensional model needs to be processed finely; this information includes buildings, terrain, green plants, street lamps and other objects. If all of the information contained in the live-action three-dimensional model were processed finely, problems such as a long construction period and wasted resources would arise. For some projects, precise three-dimensional modeling is needed only for the key region, which shortens the time and reduces the cost.
Therefore, in this embodiment only the key region is finely modeled. Before the fine modeling is performed, pictures of the key region are collected by manual photography in the field, the pictures of the key region are imported into corresponding software for processing to obtain a fine three-dimensional model of the key region, and the model is exported in the second format.
It should be noted that, in the three-dimensional modeling method provided in this embodiment, the first format is different from the second format. The software for fine three-dimensional modeling of the key region may be DP-Modeler, 3ds Max, etc., and the second format may be, for example, obj, dwg or iges. The specific application software for fine modeling and the specific second format are not limited in this embodiment, as long as a fine three-dimensional model of the key region can be built.
S130: Fusing the live-action three-dimensional model of the target object and the three-dimensional model of the key region to obtain an optimized model of the target object.
It should be noted that, when fusing the live-action three-dimensional model and the three-dimensional model of the key region in this embodiment, the software used to obtain the optimized model should be compatible with the first format and the second format at the same time; it may be, for example, SuperMap or Skyline software, and the specific compatible formats are not limited in this embodiment.
According to the three-dimensional modeling fusion method provided by this embodiment, a live-action three-dimensional model of the target object is first established, fine three-dimensional modeling is then performed on the pictures of the key region, and finally the live-action three-dimensional model and the three-dimensional model of the key region are fused. The method is efficient, realistic and precise; the resulting model is fine, attractive, singulated for editing, and easy to extend and link to attributes in later applications, all at low cost.
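As a purely illustrative sketch (not part of the patent text), the overall flow of S110 to S130 can be written in Python; the callables build_live_action, build_key_region and fuse are hypothetical placeholders for the modeling and fusion software discussed above.

```python
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class FusionInputs:
    sampled_images: List[Any]      # oblique photographs of the whole scene (S110)
    key_region_photos: List[Any]   # field photographs of the key region only (S120)

def build_optimized_model(inputs: FusionInputs,
                          build_live_action: Callable,
                          build_key_region: Callable,
                          fuse: Callable):
    """S110-S130: build the two models independently, in different formats,
    then fuse them into the optimized model of the target object."""
    live_action_model = build_live_action(inputs.sampled_images)   # first format, e.g. osgb
    key_region_model = build_key_region(inputs.key_region_photos)  # second format, e.g. obj
    return fuse(live_action_model, key_region_model)               # optimized model
```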
Referring to FIG. 2, FIG. 2 is a second schematic flow chart of the three-dimensional modeling fusion method provided in the present application.
Fusing the live-action three-dimensional model of the target object and the three-dimensional model of the key region to obtain an optimized model of the target object includes the following steps:
S210: Deleting the key region from the live-action three-dimensional model to obtain a remaining live-action three-dimensional model.
The obtained live-action three-dimensional model is divided into modules, and the region in which the fine three-dimensional model is to be established is deleted from the live-action three-dimensional model to obtain the remaining live-action three-dimensional model.
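The patent does not fix how the key region is cut out of the live-action model; one possible reading, assuming the model is a triangle mesh and the key region is given as a 2D footprint polygon, is sketched below (all names are hypothetical).

```python
import numpy as np
from matplotlib.path import Path

def delete_key_region(vertices, triangles, key_region_xy):
    """Drop every triangle whose XY centroid lies inside the key-region
    footprint, returning the remaining live-action model.
    vertices: (N, 3) float array; triangles: (M, 3) vertex-index array;
    key_region_xy: (K, 2) footprint polygon of the key region."""
    footprint = Path(key_region_xy)
    centroids = vertices[triangles].mean(axis=1)[:, :2]  # per-face XY centroid
    keep = ~footprint.contains_points(centroids)
    return vertices, triangles[keep]
```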
S220: Splicing the three-dimensional model of the key region onto the remaining live-action three-dimensional model, and fusing the edges of the three-dimensional model of the key region with those of the remaining live-action three-dimensional model to obtain the optimized model of the target object.
Further, the three-dimensional model of the key region and the remaining live-action three-dimensional model are triangle network models.
A triangular network, one form of control network in surveying, is formed by connecting many triangles and is used to represent the relief of the ground. By collecting discrete point data of the ground, a triangular network model is generated to simulate the terrain of the area photographed by the aircraft, so that a user can conveniently analyse features such as landform and topography from the model.
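For illustration, a triangular network of this kind can be generated from discrete ground points by Delaunay triangulation of their planimetric coordinates; this is one standard construction and not necessarily the one used by the modeling software.

```python
from scipy.spatial import Delaunay

def build_tin(ground_points):
    """Build a triangular network (TIN) from discrete ground points.
    ground_points: (N, 3) NumPy array of surveyed X, Y, Z values."""
    tin = Delaunay(ground_points[:, :2])   # triangulate in the XY plane
    return ground_points, tin.simplices    # vertices and (M, 3) triangle indices
```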
Splicing the three-dimensional model of the key region onto the remaining live-action three-dimensional model and fusing their edges to obtain an optimized model of the target object includes: splicing the three-dimensional model of the key region onto the remaining live-action three-dimensional model;
and fusing the edge triangle points of the three-dimensional model of the key region with the edge triangle points of the remaining live-action three-dimensional model to obtain the optimized model of the target object.
It should be noted that when the region in which the fine three-dimensional model is to be built is deleted from the live-action three-dimensional model, the edge of the remaining live-action three-dimensional model may be uneven: for example, a road in the image may become ragged and some buildings may be cut in half. The edge therefore needs to be stitched and the buildings on the edge levelled, so that the live-action three-dimensional model and the three-dimensional model of the key region are fused and the optimized model of the target object is obtained.
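A minimal sketch of the edge fusion idea, assuming both models are triangle networks and that fusing means welding each boundary vertex of the key-region model onto the nearest boundary vertex of the remaining live-action model within a snap distance (the threshold and strategy are assumptions, not stated in the patent).

```python
import numpy as np

def fuse_boundary_points(key_edge_pts, remaining_edge_pts, snap_dist=0.5):
    """Snap each edge triangle point of the key-region model to the nearest
    edge triangle point of the remaining live-action model so that the two
    triangle networks share a common seam.  Distances are in model units."""
    fused = key_edge_pts.copy()
    for i, p in enumerate(key_edge_pts):
        d = np.linalg.norm(remaining_edge_pts - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= snap_dist:
            fused[i] = remaining_edge_pts[j]   # weld the seam vertex
    return fused
```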
As shown in FIG. 3, FIG. 3 is a third schematic flow chart of the three-dimensional modeling fusion method provided in the present application.
Optionally, the process of acquiring a plurality of sampled images and establishing a live-action three-dimensional model of the target object in the first format from them may be performed in the ContextCapture Center modeling system, and the osgb format is taken here as an example of the first format. The specific process is as follows:
S310: Acquiring a plurality of sampled images, adding the sampled images into the coordinate system of the control points, and performing aerial triangulation (space-three densification) to obtain the exterior orientation elements of the sampled images, the exterior orientation elements describing the image pose.
In the aerial triangulation process, the plurality of sampled images and the control points are loaded into the ContextCapture Center modeling system, and sub-software (such as HANGF software) is called to perform bundle block adjustment: the bundle of rays formed by one image is taken as the adjustment unit, and the collinearity equations of central projection are taken as its basic equations. Through rotation and translation of each bundle in space, corresponding rays between models are made to intersect optimally, and the whole block is best fitted into the control point coordinate system, so that the spatial relationships between ground objects are restored and the exterior orientation elements of the plurality of sampled images are obtained.
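The basic unit of that adjustment is the collinearity condition of central projection; a simplified reprojection residual under given exterior orientation elements can be sketched as follows (the rotation convention and parameter names are assumptions for illustration, not the software's internals).

```python
import numpy as np

def collinearity_residual(obs_xy, ground_pt, R, S, f, x0=0.0, y0=0.0):
    """Residual of one observed image point against the collinearity equations.
    R: 3x3 camera-to-world rotation from the exterior orientation;
    S: projection centre; f: focal length; (x0, y0): principal point."""
    cam = R.T @ (ground_pt - S)       # ground point expressed in the image frame
    x = x0 - f * cam[0] / cam[2]
    y = y0 - f * cam[1] / cam[2]
    return np.array([obs_xy[0] - x, obs_xy[1] - y])
```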
S320: Generating a white model of the target object according to the exterior orientation elements of the sampled images.
S330: Obtaining homonymous points of the plurality of sampled images according to an image matching algorithm.
Homonymous points of the plurality of sampled images are matched automatically by the high-precision image matching algorithm in the ContextCapture Center modeling system. Homonymous points are the same features appearing in several sampled images; additional feature points are then extracted from the images to form a dense point cloud, so that the details of the ground objects are expressed more accurately. The more complex the ground features and the denser the buildings, the higher the point density; conversely, the sparser it is.
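The patent does not disclose the matcher itself; as an illustrative stand-in, homonymous points between two sampled images can be found with a standard SIFT detector and Lowe's ratio test using OpenCV.

```python
import cv2

def match_homonymous_points(img_a, img_b, ratio=0.75):
    """Return matched (tie) point pairs between two grayscale sampled images."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    pairs = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    good = [m for m, n in pairs if m.distance < ratio * n.distance]
    return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in good]
```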
S340: Generating the target object corresponding to the plurality of sampled images according to the homonymous points of the plurality of sampled images.
After the homonymous points of the plurality of sampled images have been matched, the sampled images can be integrated into a complete target object model.
S350: Calculating texture information of the target object, and mapping the texture information onto the white model of the target object to obtain the live-action three-dimensional model of the target object in the first format.
Further, the obtained live-action three-dimensional model is exported from the ContextCapture Center modeling system in the osgb format.
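For illustration only, the texture-mapping idea of S350 amounts to projecting each white-model vertex into a chosen photo with its exterior orientation to obtain texture coordinates; real texture selection and blending are more involved, and the names below are hypothetical.

```python
import numpy as np

def vertex_texture_uv(vertex_xyz, R, S, f_pix, img_w, img_h):
    """Project one white-model vertex into a sampled photo and return
    normalised texture coordinates (u, v) in [0, 1].
    R, S: exterior orientation (camera-to-world rotation, projection centre);
    f_pix: focal length in pixels."""
    cam = R.T @ (vertex_xyz - S)
    x = -f_pix * cam[0] / cam[2]          # image coordinates relative to the centre
    y = -f_pix * cam[1] / cam[2]
    return (x + img_w / 2.0) / img_w, (y + img_h / 2.0) / img_h
```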
As shown in FIG. 4, FIG. 4 is a fourth schematic flow chart of the three-dimensional modeling fusion method provided in the present application.
Optionally, the process of acquiring pictures of the key region in the plurality of sampled images and establishing a three-dimensional model of the key region in the second format from those pictures may be performed in DP-Modeler software, and the obj format is taken here as an example of the exported second format. The specific process is as follows:
S410: Acquiring pictures of the key region in the plurality of sampled images, and generating contour lines of the key region in each sampled image.
It should be noted that the pictures of the key region in the sampled images are collected in this embodiment by manual photography in the field. The number of pictures collected in the field is determined manually and subjectively: the part considered to be a key region that requires fine modeling is photographed on site, and the photographs are imported into the DP-Modeler software to generate the contour lines of the key region.
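In DP-Modeler the outlines are typically drawn interactively; purely as an automated stand-in, contour lines of a key-region object can also be traced from a binary mask of one sampled image, for example with OpenCV (an assumption for illustration, not the patented procedure).

```python
import cv2

def key_region_contours(mask):
    """Trace contour lines from a binary mask (0 = background, 255 = key region)."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [c.reshape(-1, 2) for c in contours]   # list of (K, 2) polylines
```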
S420: Generating a white model of the key region according to the contour lines.
The DP-Modeler software is operated to automatically generate the white model corresponding to the key region.
S430: Mapping the texture information of the key region onto the white model of the key region to generate the three-dimensional model of the key region in the second format.
The three-dimensional model of the key region is obtained through the above process and is exported from the DP-Modeler software in the obj format.
Further, the system used in this embodiment for integrating the live-action three-dimensional model and the three-dimensional model of the key region is SuperMap, which is compatible with both the osgb and obj formats, so the two different formats do not need to be converted into a unified format, which improves processing efficiency.
FIG. 5 is a first schematic diagram of the three-dimensional modeling fusion device provided in the present application. As shown in FIG. 5, the apparatus specifically includes a first acquisition module 501, a second acquisition module 502, and a processing module 503.
The first acquisition module 501 is configured to acquire a plurality of sampled images and establish a live-action three-dimensional model of the target object in the first format according to the plurality of sampled images.
The second acquisition module 502 is configured to acquire pictures of the key region of the sampled images and establish a three-dimensional model of the key region in the second format according to the key region pictures, the second format being different from the first format.
The processing module 503 is configured to fuse the live-action three-dimensional model of the target object and the three-dimensional model of the key region to obtain an optimized model of the target object.
Optionally, the processing module 503 is specifically configured to delete the key region from the live-action three-dimensional model to obtain a remaining live-action three-dimensional model, splice the three-dimensional model of the key region onto the remaining live-action three-dimensional model, and fuse the edges of the three-dimensional model of the key region with those of the remaining live-action three-dimensional model to obtain the optimized model of the target object.
Optionally, the three-dimensional model of the key region and the remaining live-action three-dimensional model are triangle network models. The processing module 503 is specifically further configured to splice the three-dimensional model of the key region onto the remaining live-action three-dimensional model and fuse the edges of the three-dimensional model of the key region with those of the remaining live-action three-dimensional model to obtain the optimized model of the target object.
Optionally, the first acquisition module 501 is specifically configured to acquire a plurality of sampled images, add the sampled images into the coordinate system of the control points, and obtain the exterior orientation elements of the sampled images through aerial triangulation, the exterior orientation elements describing the image pose; generate a white model of the target object according to the exterior orientation elements of the sampled images; obtain homonymous points of the plurality of sampled images according to an image matching algorithm; generate the target object corresponding to the plurality of sampled images according to the homonymous points; and calculate texture information of the target object and map the texture information onto the white model of the target object to obtain the live-action three-dimensional model of the target object in the first format.
Optionally, the second acquisition module 502 is specifically configured to acquire pictures of the key region in the plurality of sampled images and generate contour lines of the key region in each sampled image, generate a white model of the key region according to the contour lines, and map the texture information of the key region onto the white model of the key region to generate the three-dimensional model of the key region in the second format.
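A skeleton of the three-module apparatus, again illustrative only; the injected callables stand in for the module internals, which the patent leaves to the modeling and fusion software.

```python
class ThreeDModelingFusionDevice:
    """Maps the apparatus of FIG. 5 onto a simple class (hypothetical sketch)."""

    def __init__(self, build_live_action, build_key_region, fuse):
        self.first_acquisition = build_live_action    # first acquisition module 501
        self.second_acquisition = build_key_region    # second acquisition module 502
        self.process = fuse                           # processing module 503

    def run(self, sampled_images, key_region_photos):
        live_action = self.first_acquisition(sampled_images)      # first format
        key_region = self.second_acquisition(key_region_photos)   # second format
        return self.process(live_action, key_region)              # optimized model
```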
The foregoing apparatus is used for executing the method provided in the foregoing embodiment, and its implementation principle and technical effects are similar, and are not described herein again.
The above modules may be one or more integrated circuits configured to implement the above methods, for example: one or more application specific integrated circuits (Application Specific Integrated Circuit, ASIC), one or more digital signal processors (Digital Signal Processor, DSP), or one or more field programmable gate arrays (Field Programmable Gate Array, FPGA), etc. As another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a central processing unit (Central Processing Unit, CPU) or another processor that can invoke the program code. As a further example, the modules may be integrated together and implemented in the form of a system-on-a-chip (SoC).
Fig. 6 is a schematic diagram of a three-dimensional modeling fusion device according to an embodiment of the present application. The apparatus may be integrated in a terminal device or a chip of the terminal device, and the terminal may be a computing device having an image processing function.
The device comprises: memory 601, and processor 602.
The memory 601 is used for storing a program, and the processor 602 calls the program stored in the memory 601 to execute the above-described method embodiment. The specific implementation manner and the technical effect are similar, and are not repeated here.
Optionally, the present invention also provides a program product, such as a computer readable storage medium, comprising a program for performing the above-described method embodiments when being executed by a processor.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in hardware plus software functional units.
The integrated units implemented in the form of software functional units described above may be stored in a computer readable storage medium. The software functional units are stored in a storage medium and include several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to perform some of the steps of the methods according to the embodiments of the invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, etc.
Claims (8)
1. A three-dimensional modeling fusion method, comprising:
acquiring a plurality of sampled images, and establishing a live-action three-dimensional model of a target object in a first format according to the plurality of sampled images;
acquiring pictures of a key region of the plurality of sampled images, and establishing a three-dimensional model of the key region in a second format according to the key region pictures, wherein the second format and the first format are different formats;
fusing the live-action three-dimensional model of the target object and the three-dimensional model of the key region to obtain an optimized model of the target object;
wherein fusing the live-action three-dimensional model of the target object and the three-dimensional model of the key region to obtain an optimized model of the target object comprises:
deleting the key region from the live-action three-dimensional model to obtain a remaining live-action three-dimensional model;
splicing the three-dimensional model of the key region onto the remaining live-action three-dimensional model, and fusing the edges of the three-dimensional model of the key region with those of the remaining live-action three-dimensional model to obtain the optimized model of the target object.
2. The three-dimensional modeling fusion method of claim 1, wherein the three-dimensional model of the key region and the remaining live-action three-dimensional model are triangle network models;
splicing the three-dimensional model of the key region onto the remaining live-action three-dimensional model and fusing their edges to obtain an optimized model of the target object comprises:
splicing the three-dimensional model of the key region onto the remaining live-action three-dimensional model;
and fusing the edge triangle points of the three-dimensional model of the key region with the edge triangle points of the remaining live-action three-dimensional model to obtain the optimized model of the target object.
3. The method of any one of claims 1-2, wherein acquiring a plurality of sampled images and establishing a live-action three-dimensional model of the target object in the first format according to the plurality of sampled images comprises:
acquiring a plurality of sampled images, adding the sampled images into the coordinate system of the control points, and obtaining the exterior orientation elements of the sampled images through aerial triangulation (space-three densification), wherein the exterior orientation elements describe the image pose;
generating a white model of the target object according to the exterior orientation elements of the sampled images;
obtaining homonymous points of the plurality of sampled images according to an image matching algorithm;
generating the target object corresponding to the plurality of sampled images according to the homonymous points of the sampled images;
and calculating texture information of the target object, and mapping the texture information onto the white model of the target object to obtain the live-action three-dimensional model of the target object in the first format.
4. The method of claim 1, wherein acquiring the pictures of the key region in the plurality of sampled images and establishing the three-dimensional model of the key region in the second format according to the pictures of the key region comprises:
acquiring pictures of the key region in the plurality of sampled images, and generating contour lines of the key region in each sampled image;
generating a white model of the key region according to the contour lines;
and mapping the texture information of the key region onto the white model of the key region to generate the three-dimensional model of the key region in the second format.
5. A three-dimensional modeling fusion device, comprising:
a first acquisition module, configured to acquire a plurality of sampled images and establish a live-action three-dimensional model of a target object in a first format according to the plurality of sampled images;
a second acquisition module, configured to acquire pictures of a key region of the plurality of sampled images and establish a three-dimensional model of the key region in a second format according to the key region pictures, wherein the second format and the first format are different formats;
a processing module, configured to fuse the live-action three-dimensional model of the target object and the three-dimensional model of the key region to obtain an optimized model of the target object;
wherein the processing module is specifically configured to delete the key region from the live-action three-dimensional model to obtain a remaining live-action three-dimensional model, splice the three-dimensional model of the key region onto the remaining live-action three-dimensional model, and fuse the edges of the three-dimensional model of the key region with those of the remaining live-action three-dimensional model to obtain the optimized model of the target object.
6. The three-dimensional modeling fusion device of claim 5, wherein the three-dimensional model of the key region and the remaining live-action three-dimensional model are triangle network models;
the processing module is specifically configured to splice the three-dimensional model of the key region onto the remaining live-action three-dimensional model, and fuse the edges of the three-dimensional model of the key region with those of the remaining live-action three-dimensional model to obtain an optimized model of the target object.
7. The three-dimensional modeling fusion device according to any one of claims 5 to 6, wherein the first acquisition module is specifically configured to acquire a plurality of sampled images, add the sampled images into the coordinate system of the control points, and obtain the exterior orientation elements of the sampled images through aerial triangulation, the exterior orientation elements describing the image pose; generate a white model of the target object according to the exterior orientation elements of the sampled images; obtain homonymous points of the plurality of sampled images according to an image matching algorithm; generate the target object corresponding to the plurality of sampled images according to the homonymous points; and calculate texture information of the target object and map the texture information onto the white model of the target object to obtain the live-action three-dimensional model of the target object in the first format.
8. The three-dimensional modeling fusion device of claim 7, wherein the second acquisition module is specifically configured to acquire pictures of the key region in the plurality of sampled images and generate contour lines of the key region in each sampled image;
generate a white model of the key region according to the contour lines; and map the texture information of the key region onto the white model of the key region to generate the three-dimensional model of the key region in the second format.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910025548.4A CN109754463B (en) | 2019-01-11 | 2019-01-11 | Three-dimensional modeling fusion method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109754463A CN109754463A (en) | 2019-05-14 |
CN109754463B true CN109754463B (en) | 2023-05-23 |
Family
ID=66405459
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910025548.4A Active CN109754463B (en) | 2019-01-11 | 2019-01-11 | Three-dimensional modeling fusion method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109754463B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113496507B (en) * | 2020-03-20 | 2024-09-27 | 华为技术有限公司 | Human body three-dimensional model reconstruction method |
CN111681322B (en) * | 2020-06-12 | 2021-02-02 | 中国测绘科学研究院 | Fusion method of oblique photography model |
CN111915739A (en) * | 2020-08-13 | 2020-11-10 | 广东申义实业投资有限公司 | Real-time three-dimensional panoramic information interactive information system |
CN114170273A (en) * | 2021-12-08 | 2022-03-11 | 南方电网电力科技股份有限公司 | Target tracking method based on binocular camera and related device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9767566B1 (en) * | 2014-09-03 | 2017-09-19 | Sprint Communications Company L.P. | Mobile three-dimensional model creation platform and methods |
CN108665536A (en) * | 2018-05-14 | 2018-10-16 | 广州市城市规划勘测设计研究院 | Three-dimensional and live-action data method for visualizing, device and computer readable storage medium |
CN109118581A (en) * | 2018-08-22 | 2019-01-01 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment, computer readable storage medium |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106611441B (en) * | 2015-10-27 | 2019-01-08 | 腾讯科技(深圳)有限公司 | The treating method and apparatus of three-dimensional map |
CN108919944B (en) * | 2018-06-06 | 2022-04-15 | 成都中绳科技有限公司 | Virtual roaming method for realizing data lossless interaction at display terminal based on digital city model |
Also Published As
Publication number | Publication date |
---|---|
CN109754463A (en) | 2019-05-14 |
Similar Documents
Publication | Title | |
---|---|---|
CN109754463B (en) | Three-dimensional modeling fusion method and device | |
CN108665536B (en) | Three-dimensional and live-action data visualization method and device and computer readable storage medium | |
CN110532985B (en) | Target detection method, device and system | |
CN107862744B (en) | Three-dimensional modeling method for aerial image and related product | |
Verhoeven | Taking computer vision aloft–archaeological three‐dimensional reconstructions from aerial photographs with photoscan | |
CN109064542B (en) | Threedimensional model surface hole complementing method and device | |
WO2023280038A1 (en) | Method for constructing three-dimensional real-scene model, and related apparatus | |
CN112927362B (en) | Map reconstruction method and device, computer readable medium and electronic equipment | |
CN112560137B (en) | Multi-model fusion method and system based on smart city | |
JP2016537901A (en) | Light field processing method | |
CN109685893B (en) | Space integrated modeling method and device | |
CN112270736B (en) | Augmented reality processing method and device, storage medium and electronic equipment | |
CN113256781A (en) | Rendering device and rendering device of virtual scene, storage medium and electronic equipment | |
CN113436338A (en) | Three-dimensional reconstruction method and device for fire scene, server and readable storage medium | |
CN114332134A (en) | Building facade extraction method and device based on dense point cloud | |
CN116503566A (en) | Three-dimensional modeling method and device, electronic equipment and storage medium | |
CN112053440A (en) | Method for determining individualized model and communication device | |
WO2024222848A1 (en) | Data mining system, method and apparatus based on image-text information combination | |
JP2022518402A (en) | 3D reconstruction method and equipment | |
CN117635875B (en) | Three-dimensional reconstruction method, device and terminal | |
CN116051980B (en) | Building identification method, system, electronic equipment and medium based on oblique photography | |
CN113409473B (en) | Method, device, electronic equipment and storage medium for realizing virtual-real fusion | |
CN116912817A (en) | Three-dimensional scene model splitting method and device, electronic equipment and storage medium | |
CN109064555B (en) | Method, apparatus and storage medium for 3D modeling | |
CN115661364A (en) | Three-dimensional simulation model reconstruction method for cultural relic and ancient building group restoration |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |