CN110866531A - Building feature extraction method and system based on three-dimensional modeling and storage medium - Google Patents
- Publication number
- CN110866531A CN110866531A CN201910975930.1A CN201910975930A CN110866531A CN 110866531 A CN110866531 A CN 110866531A CN 201910975930 A CN201910975930 A CN 201910975930A CN 110866531 A CN110866531 A CN 110866531A
- Authority
- CN
- China
- Prior art keywords
- building
- image
- dimensional modeling
- dimensional
- feature extraction
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/29—Graphical models, e.g. Bayesian networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/155—Segmentation; Edge detection involving morphological operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/187—Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
Abstract
The invention relates to a building feature extraction method, system and storage medium based on three-dimensional modeling. The method comprises: acquiring an orthoimage and an oblique image, and preprocessing the images; and carrying out three-dimensional modeling on the acquired images. The invention also provides a building feature extraction system based on three-dimensional modeling, comprising an acquisition unit for acquiring the orthoimage and the oblique image and preprocessing the images, and a processing unit for carrying out three-dimensional modeling on the preprocessed images. By extracting, filling and identifying features after three-dimensional modeling of the images, the invention overcomes the prior-art problems that extracting a static target from a two-dimensional image is difficult and that image analysis processing methods for moving targets cannot be used to position and segment a static building. The invention thereby reduces the difficulty of extracting static target features and enables image analysis processing methods for moving targets to be applied to the positioning and segmentation of static buildings.
Description
Technical Field
The invention relates to high-precision real-time three-dimensional reconstruction for mobile terminals, and in particular to a building feature extraction method and system based on three-dimensional modeling, and a storage medium.
Background
In China's urban management work, construction supervision is an important function, responsible for the comprehensive supervision of urban buildings. To grasp the changes of a city and implement effective management, municipal departments, including planning, design and construction departments, need to dynamically grasp information such as the building area and land area of the city. With the rapid development of urbanization in China, a large number of illegal buildings exist among urban buildings for various reasons, and how to quickly and accurately acquire urban building information is a difficult problem for urban construction management and supervision departments, urban planning and design departments, and the like.
Unmanned aerial vehicles have developed rapidly and provide a better platform for the comprehensive supervision of urban buildings in China. Owing to their maneuverability, flexibility and convenience, they have received attention in this field and are currently used by urban construction supervision, urban planning and other departments in many cities.
In the prior art, urban building investigation and supervision based on unmanned aerial vehicles still mainly relies on photographs taken by the vehicle followed by manual comparative analysis: for example, ground building changes in the same parcel over different periods are compared against planning approval information to discover which buildings in that parcel are illegal. Supervision by aerial means such as satellite remote sensing, conventional aerial remote sensing, unmanned aerial vehicles and manual observation not only consumes large amounts of manpower and material resources, but also has a long data acquisition cycle and lacks flexibility, making it unsuitable for short-term, high-frequency urban dynamic monitoring. In addition, unlike moving target detection, a building and the ground surface are both static objects, so a method for processing moving targets cannot be used directly to position and segment a static building, and a new research scheme is needed to solve this problem.
Disclosure of Invention
The present invention is directed to solving, at least to some extent, one of the technical problems in the related art. An object of the present invention is therefore to provide a building feature extraction method based on three-dimensional modeling, which reduces the difficulty of extracting static target features and allows image analysis processing methods for moving targets to be used for positioning and segmenting static buildings.
To this end, a second object of the present invention is to provide a building feature extraction system based on three-dimensional modeling.
The technical scheme adopted by the invention is as follows:
In a first aspect, the invention provides a building feature extraction method based on three-dimensional modeling, comprising the following steps:
S10, acquiring an orthoimage and an oblique image, and preprocessing the images;
S20, carrying out three-dimensional modeling on the images acquired in S10;
Further, the method also comprises:
S30, building feature segmentation and extraction;
S40, building area filling;
and S50, identifying the feature points of the building.
Further, step S20 specifically comprises:
S21, inputting orthographic and oblique photography images from the unmanned aerial vehicle;
S22, geometric correction of the images;
S23, overall regional joint adjustment;
S24, densely matching the multi-view images;
S25, generating a digital surface model and/or a triangulated irregular network;
S26, orthorectification;
S27, generating a three-dimensional database.
Further, step S25 is followed by:
S28, modeling the triangulated irregular network;
S29, autonomous texture mapping;
S210, generating a three-dimensional scene.
Further, step S30 specifically comprises:
S31, classifying and extracting the building features using a K-Means clustering algorithm;
S32, performing K-Means clustering segmentation on the vertical (height) coordinate of the three-dimensional point-cloud coordinates according to a set search strategy;
S33, grouping the targets with similar vertical coordinates in step S32 into one class, which serves as the base for further building extraction.
Specifically, a K-Means clustering algorithm is used to classify and extract the building features: K-Means clustering segmentation is performed on the vertical (height) coordinate of the three-dimensional point-cloud coordinates according to a set search strategy, targets of similar height are grouped into one class, and this class serves as the basis for further building extraction.
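The patent does not give an implementation, but the height-based clustering described above can be sketched in plain numpy. The deterministic quantile initialization and the synthetic two-level point cloud are assumptions for illustration only:

```python
import numpy as np

def kmeans_1d(z, k, iters=20):
    """Minimal 1-D K-Means over the z (height) values of a point cloud."""
    centers = np.quantile(z, np.linspace(0, 1, k))   # deterministic init (an assumption)
    for _ in range(iters):
        # assign each sample to the nearest center (Euclidean distance in 1-D)
        labels = np.argmin(np.abs(z[:, None] - centers[None, :]), axis=1)
        # recompute each center as the mean of its assigned samples
        centers = np.array([z[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

# Synthetic cloud: ground points near z = 0 m, rooftop points near z = 15 m
rng = np.random.default_rng(1)
z = np.concatenate([rng.normal(0.0, 0.5, 200), rng.normal(15.0, 0.5, 100)])

labels, centers = kmeans_1d(z, k=2)
roof_class = int(np.argmax(centers))        # the higher cluster is the building class
building_pts = z[labels == roof_class]
```

With the two height levels well separated, the algorithm converges in a few iterations and the higher cluster isolates the rooftop returns.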
Further, step S40 specifically comprises:
S41, eroding and dilating the clustered and segmented image with mathematical morphology operations to remove noise;
S42, filling the holes in the connected regions of the image.
Specifically, in S40, mathematical morphology operations are used to erode and dilate the clustered and segmented image to remove noise, and holes in the connected regions of the image are filled to ensure the integrity of the segmented building.
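A minimal numpy sketch of this opening-then-fill step. The 3x3 structuring element, border convention and toy mask are assumptions; production code would typically call a library such as scipy.ndimage instead:

```python
import numpy as np

def dilate(m):
    """Binary 3x3 dilation (outside the image treated as background)."""
    p = np.pad(m, 1)
    out = np.zeros_like(m)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= p[dy:dy + m.shape[0], dx:dx + m.shape[1]]
    return out

def erode(m):
    """Binary 3x3 erosion."""
    p = np.pad(m, 1)
    out = np.ones_like(m)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= p[dy:dy + m.shape[0], dx:dx + m.shape[1]]
    return out

def fill_holes(m):
    """Flood-fill the background from the border; unreached background = holes."""
    bg = ~m
    reach = np.zeros_like(m)
    reach[0, :], reach[-1, :] = bg[0, :], bg[-1, :]
    reach[:, 0], reach[:, -1] = bg[:, 0], bg[:, -1]
    while True:
        nxt = dilate(reach) & bg
        if np.array_equal(nxt, reach):
            return m | (bg & ~reach)
        reach = nxt

# Toy segmentation: a 7x7 building block with an interior hole, plus speckle noise
mask = np.zeros((11, 11), dtype=bool)
mask[1:8, 1:8] = True
mask[4, 4] = False          # hole inside the connected region
mask[0, 10] = True          # isolated noise pixel

opened = dilate(erode(mask))    # erosion then dilation removes the speckle
filled = fill_holes(opened)     # hole filling restores the interior pixel
```

The opening removes the isolated pixel while the flood-fill pass restores the interior hole, giving a complete building region.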
Further, in S50, a Bayesian network classifier is used to extract the positions of the contour feature points, from which the contour of the building is obtained; the area of the building region can then be calculated.
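The Bayesian network classifier itself is not specified here, but once ordered contour feature points are available, the building area mentioned above can be computed with the shoelace formula. The L-shaped footprint below is a hypothetical example:

```python
def polygon_area(pts):
    """Shoelace formula: area of a simple polygon from ordered contour points."""
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]   # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Hypothetical L-shaped building footprint (coordinates in metres)
outline = [(0, 0), (20, 0), (20, 30), (10, 30), (10, 15), (0, 15)]
area = polygon_area(outline)   # 20*30 minus the 10x15 notch = 450 m^2
```

The result agrees with decomposing the footprint into rectangles, which is a quick sanity check on any extracted contour.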
In a second aspect, the present invention provides a building feature extraction system based on three-dimensional modeling, comprising:
the acquiring unit is used for acquiring an orthoimage and an oblique image and preprocessing the images;
and the processing unit is used for carrying out three-dimensional modeling on the preprocessed image.
Further, the processing unit is also used for building feature segmentation extraction, building area filling and building feature point identification.
In a third aspect, the present invention provides a computer-readable storage medium storing computer-executable instructions for causing a computer to perform the method of the first aspect.
The beneficial effects of the invention are as follows:
By extracting features after three-dimensional modeling of the images, the invention overcomes the prior-art problems that extracting a static target from a two-dimensional image is difficult and that image analysis processing methods for moving targets cannot be used to position and segment a static building. It thereby reduces the difficulty of static target feature extraction and enables image analysis processing methods for moving targets to be applied to the positioning and segmentation of static buildings.
Drawings
FIG. 1 is a basic flowchart of embodiment 1 of the building feature extraction method based on three-dimensional modeling;
FIG. 2 is an overall flowchart of embodiment 2 of the building feature extraction method based on three-dimensional modeling;
FIG. 3 is a three-dimensional modeling flowchart of embodiment 3 of the building feature extraction method based on three-dimensional modeling according to the present invention;
FIG. 4 is a K-means algorithm flowchart of an embodiment of the building feature extraction method based on three-dimensional modeling.
Detailed Description
DSM: a Digital Surface Model (DSM) is a ground elevation model that includes the heights of surface features such as buildings, bridges and trees.
TIN: three-dimensional Irregular triangulation networks (abbreviated TIN), also known as "surface data structures", divide an area into an equal Network of triangular faces according to a finite set of points of the area, the digital elevation consisting of successive triangular faces whose shape and size depend on the density and position of the irregularly distributed measuring points.
DOM: digital ortho-image (Digital ortho-Map) refers to image data generated by cutting a scanned and processed Digital aerial photo and a remote sensing image according to an image range through pixel-by-pixel correction by using a Digital elevation model, DOM information is relatively intuitive, and the DOM information has good interpretability and measurability, and natural geographic, social and economic information can be directly extracted from the DOM information. The data extraction can be used for displaying to a two-dimensional map, or vectorizing the extracted image data for extracting vector data.
TDOM: true ortho images (True Digital ortho maps) are images formed by correcting the geometric distortion of the original image using DSM using Digital differential correction techniques so that every point on the image is corrected to a vertical viewing angle.
DEM: digital Elevation Model (Digital Elevation Model): the digital simulation of the ground terrain (namely the digital expression of the terrain surface form) is realized by limited terrain elevation data, and the digital simulation is a solid ground model which expresses the ground elevation in a group of ordered numerical value array forms.
GCP: ground control point (groupcontrolpoint):
it should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict.
Example 1: this embodiment provides a building feature extraction method based on three-dimensional modeling. As shown in the basic flowchart of fig. 1, the method comprises the following steps:
S10, acquiring an orthoimage and an oblique image, and preprocessing the images;
Specifically, the image preprocessing includes checking and rejecting unqualified pictures, color smoothing, and the like. Its main purposes are to eliminate irrelevant information in the images, recover useful real information, enhance the detectability of relevant information and simplify the data as far as possible, thereby improving the reliability of feature extraction, image segmentation, matching and recognition.
S20, carrying out three-dimensional modeling on the images acquired in S10;
Specifically, a three-dimensional model is built on the basis of two-dimensional geographic information. Through program development, the natural and built elements of a city can be analyzed with the system, and the user obtains a realistic, intuitive virtual city environment through interactive operation. Compared with extracting a static target from a two-dimensional image, this reduces the difficulty of static target feature extraction; the aim is to obtain three-dimensional model data of the photographed urban area.
S30: building feature segmentation and extraction;
Specifically, this embodiment uses a K-means clustering algorithm to segment and extract the building features. The algorithm is based on the error sum-of-squares criterion and is a form of unsupervised learning; its similarity measure is the Euclidean distance, it performs hard clustering, and it uses the error sum of squares as the criterion function that drives the iterative correction. The error sum of squares accumulates the squared differences between the samples and their cluster centers: the larger the criterion function, the larger the error, which serves to judge classification accuracy, and the optimal classification is reached when the criterion function is minimal. This embodiment thus applies an image analysis processing method for moving targets to the positioning and segmentation of a static building.
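The criterion function described here can be illustrated directly: for a fixed set of height samples (toy data assumed), the error sum of squares J is smaller for the natural two-class split than for a misassigned one, so minimizing J selects the better classification:

```python
import numpy as np

def sse(z, labels, centers):
    """Error sum of squares: J = sum over clusters j of sum_{x in j} (x - mu_j)^2."""
    return float(sum(((z[labels == j] - centers[j]) ** 2).sum()
                     for j in range(len(centers))))

z = np.array([0.0, 1.0, 10.0, 11.0])

good = np.array([0, 0, 1, 1])    # split at the natural gap between 1 and 10
bad = np.array([0, 1, 1, 1])     # the sample at z=1 misassigned to the high cluster

J_good = sse(z, good, [z[good == 0].mean(), z[good == 1].mean()])
J_bad = sse(z, bad, [z[bad == 0].mean(), z[bad == 1].mean()])
```

Here J_good is 1.0 while the misassignment inflates the criterion, which is exactly the behaviour the minimization exploits.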
Example 2: this embodiment provides a building feature extraction method based on three-dimensional modeling. As shown in the overall flowchart of fig. 2, this embodiment builds on embodiment 1 and specifically further comprises the following steps:
S40, building area filling;
and S50, identifying the feature points of the building.
Example 3: this embodiment provides a specific method for three-dimensional modeling of the images, as shown in fig. 3, comprising the following steps:
S21, inputting orthographic and oblique unmanned aerial vehicle images;
Preferably, the data obtained by unmanned aerial vehicle surveying and mapping are first of all images. There are many ways to acquire them; at present unmanned aerial vehicles mainly use two: orthographic photography and oblique photography.

It can be understood that aerial photogrammetry suffers from occlusion when shooting from high altitude. Oblique photography is a multi-angle photography method: the camera attitudes of a camera-array combination are adjusted so that, in addition to the top of an object, its different sides are also covered, leaving no dead angle between a stereo pair and its corresponding same-name points.

It can be understood that oblique photography overcomes the limitation of conventional orthoimages, which can only be captured from a vertical angle. By carrying multiple sensors on the same flight platform and acquiring images from five different angles (one vertical and four oblique), it brings the user into a realistic visual world that conforms to human vision.

Unmanned aerial vehicle image acquisition has a short cycle and strong timeliness, which facilitates fast image capture and quick, timely and effective area statistics, while saving substantial manpower and time.
S22, geometric correction of the image;
it is understood that the process of removing distortion from the image with geometric distortion may also be referred to as a process of quantitatively determining the correspondence between the image coordinates and the geographic coordinates on the image, i.e., coordinate transformation, i.e., projecting the data onto a plane to conform to the projection system. Its basic links are two: firstly, the transformation of pixel coordinates and secondly, the resampling of pixel brightness, and the more specific processing process comprises the following steps:
S221, inputting the digital image;
it is understood that the digital image is also called a digital image, i.e. a digitized image. Essentially a two-dimensional matrix, each point being called a picture element. The pixel space coordinate and the gray value are discretized, and the gray value is different with the point coordinate. The digital image can be directly generated when the scanning sensor of the space flight or aviation remote sensing images and is recorded on the magnetic tape; the analog photos can also be digitized by an image digitizer and recorded on a digital magnetic tape. The number of pixels of the digital image and the number of quantization levels of the gray scale of the pixels are usually an integer power of 2. The digital image expression mode can be converted from a space domain form to a frequency domain form through Fourier transformation, and various digital image processing such as data compression, image enhancement, automatic classification and the like can be carried out.
It can be understood that a digital image is generally expressed as a spatial gray-scale function organized as a matrix-form array, an expression similar to the real image; its advantage is that matrix theory can be applied to analyze and process the image. However, when representing characteristics such as the energy and correlation of a digital image, a vector representation is more convenient than a matrix representation. If the pixels are arranged in row order, the first pixel of one row immediately follows the last pixel of the preceding row; the advantage is that the theory and methods of vector analysis can then be used directly when processing the image. A vector may be formed in row order or in column order; once an order is selected, subsequent processing must be consistent with it.
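A short numpy illustration of these representations: the same toy image viewed as a matrix, flattened to a row-order vector, and converted to the frequency domain (Parseval's relation for the DFT links the energy in both domains, up to the factor of the number of samples):

```python
import numpy as np

img = np.array([[10.0, 20.0],
                [30.0, 40.0]])   # a 2x2 digital image as a matrix

v = img.reshape(-1)              # row-order vector: each row followed by the next

F = np.fft.fft2(img)             # spatial-domain to frequency-domain conversion
```

The flattened vector preserves row order, and the total spectral energy equals the spatial energy scaled by the pixel count, which is the energy characteristic the text refers to.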
S222: establishing a correction change function;
s223: analyzing errors;
s224: transforming coordinates;
Specifically, this is called the direct conversion method, in which displaced pixels are moved to their corrected positions.
It can be understood that, mathematically, geometric correction selects a set of ground control points (GCPs) to establish a transformation between the original distorted image space and the corrected image space.
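The transformation is not written out in this text; one common choice is a six-parameter affine model fitted to the GCPs by least squares, sketched below with invented pixel and ground coordinates:

```python
import numpy as np

# Hypothetical GCPs: pixel (col, row) coordinates and their ground (E, N) coordinates
pix = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], dtype=float)
gnd = np.array([[1000, 2000], [1050, 2000], [1000, 1950], [1050, 1950]], dtype=float)

A = np.c_[pix, np.ones(len(pix))]                # design matrix rows [x, y, 1]
coef, *_ = np.linalg.lstsq(A, gnd, rcond=None)   # 6-parameter affine fit

def pix_to_ground(p):
    """Map a pixel coordinate into the ground coordinate system."""
    return np.append(np.asarray(p, dtype=float), 1.0) @ coef

e, n = pix_to_ground([50, 50])    # centre of the GCP square
```

With more GCPs than parameters the least-squares fit also exposes residuals, which is the error analysis mentioned in step S223.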
S225, resampling the image;
it can be understood that image resampling is a process of interpolating a new sampling point according to each adjacent original sampling point in digital photogrammetry and remote sensing processing. For example, image re-sampling is required in the process of image epipolar line queuing or image geometric correction, and in the process of digital correction and multiple image composition. The interpolation method includes a bilinear interpolation method, a bicubic convolution method, a nearest neighbor pixel method and the like. The first two geometric precisions are better; the latter method is the simplest and fast method, and does not destroy the gray information of the original image, but has low geometric precision.
S226: image output
S23, overall regional joint adjustment;
it can be understood that, because the images participating in the three-dimensional modeling include not only orthoimage data but also oblique photography data, the multi-view image joint adjustment needs to fully consider the geometric deformation and the shielding relation among the images, and a pyramid matching strategy from coarse to fine can be adopted to perform homonymy point automatic matching and free net beam method adjustment on each level of images, so that a better homonymy point matching result can be obtained.
Specifically, the joint adjustment in step S23 means adjusting the data of two different observation means together. It can transmit the control effect of existing image control points to the latest aerial images through same-name points among multi-period images. The planar accuracy of the jointly adjusted densification points is basically consistent with that of conventional bundle block adjustment, while the elevation accuracy depends on the number of same-name point observations among the multi-period images: the more observations, the higher the elevation accuracy of the densification points. Therefore, as long as there are enough same-name point observations among the multi-period images, the densification accuracy of the joint adjustment can fully meet the production requirements of photogrammetric topographic mapping. Joint adjustment also largely avoids the influence of coordinate conversion errors, effectively weakens the accumulation of systematic errors in the astro-geodetic network, and greatly improves the accuracy of its side lengths, point positions, scale and orientation, thereby improving the accuracy of the regional coordinate system, upgrading the national geodetic network as a whole and expanding the service range of the national geodetic coordinate system.
To make full use of existing ground control and aerial photography results, the number of field control points in new operations can be reduced or even eliminated, saving operation cost and shortening the operation cycle. As long as there are enough same-name point observations among the multi-period images, the densification accuracy of the joint adjustment fully meets the requirements of this embodiment.
S24, densely matching the multi-view images;
the key problem of multi-view image dense matching is how to fully consider redundant information in the matching process and quickly and accurately acquire the coordinates of the same-name points on the multi-view images.
Specifically, three multi-view images are selected for matching, denoted (v_a, v_b, v_c). Any two of the three form a stereo pair for dense matching: (v_a, v_b), (v_a, v_c) and (v_b, v_c). If (v_a, v_b) contains a matching point pair (f_a, f_b) and (v_b, v_c) contains a matching point pair (f_b, f_c), then (v_a, v_c) must contain the matching point pair (f_a, f_c). A stereo-pair matching result satisfying this condition, (f_a, f_b, f_c), is selected as a matching point of the multi-view images (v_a, v_b, v_c), which completes the matching of the three images.
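The transitivity condition above can be expressed as a simple consistency filter over pairwise match tables; the feature ids below are invented for illustration:

```python
# Transitivity check for dense matching across three views (v_a, v_b, v_c):
# pairwise matches are kept only when they close a consistent triple.
def consistent_triples(ab, bc, ac):
    """ab, bc, ac map feature ids between view pairs (dicts f_a->f_b, etc.)."""
    triples = []
    for fa, fb in ab.items():
        fc = bc.get(fb)
        if fc is not None and ac.get(fa) == fc:
            triples.append((fa, fb, fc))    # (f_a, f_b, f_c) closes the loop
    return triples

# Hypothetical match tables between the three stereo pairs
ab = {1: 11, 2: 12, 3: 13}
bc = {11: 21, 12: 22}
ac = {1: 21, 2: 99, 3: 23}       # feature 2 violates transitivity; 3 is unmatched in bc
matches = consistent_triples(ab, bc, ac)   # only (1, 11, 21) survives
```

Only triples confirmed by all three stereo pairs are accepted as multi-view matching points, which is how the redundancy is exploited to reject outliers.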
S25, generating a digital surface model and/or a three-dimensional irregular triangular net;
specifically, for the generation of the DSM, according to each image external orientation element calculated by automatic space-three solution, an appropriate image matching unit is analyzed and selected to perform feature matching and pixel-level intensive matching, and a parallel algorithm is introduced to improve the calculation efficiency.
S26: orthorectification;
specifically, the orthorectification relates to an object-side continuous DEM, a large number of ground object objects with greatly different discrete distribution particle sizes, and a large number of image-side multi-angle images, and has the characteristics of typical data density and computation density. Therefore, the orthographic correction of the multi-view image can be performed simultaneously by an object side and an image side.
Specifically, orthorectification is the process of correcting spatial and geometric distortion in the image to generate a planar orthoimage from the multi-center projections. It corrects geometric distortion caused by common systematic factors and eliminates the geometric distortion caused by terrain. A small number of ground control points are combined with a camera or satellite model to determine the relation among the camera or sensor, the image, and the ground, establish the correct correction formula, and generate an accurate orthoimage.
Specifically, orthorectification generally involves selecting ground control points in the photograph and, using the DEM data originally acquired over the photographed area, applying tilt correction and projection-difference correction to the image simultaneously, resampling it into an orthoimage. Multiple orthoimages are then stitched and mosaicked together, color-balanced, and clipped to the required extent to obtain the final orthoimage. The orthoimage combines the characteristics of a topographic map with those of an image; rich in information, it can serve as a data source for a geographic information system and enriches the system's forms of representation.
In particular, orthorectification is a geometric distortion correction process that handles the significant distortion caused by terrain, camera geometry, and sensor-related errors; the output is a planimetrically correct orthorectified image.
It will be appreciated that, from the perspective of photogrammetry, a satellite remote sensing image can be viewed as a central projection, which is necessarily affected by the observation angle of the sensor and by the ground elevation (the vertical height above the Yellow Sea datum taken as the reference surface); particularly at the edges of the image, the parallax caused by ground elevation is not negligible.
It will be appreciated that the purpose of the ortho-correction is to eliminate this parallax caused by the elevation of the ground, with the aid of the DEM, to obtain the correct position coordinates of a point.
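This elevation-induced parallax follows the classic relief-displacement relation of a vertical photograph, d = r·h/H (r: radial distance from the nadir point, h: ground elevation above the datum, H: flying height above the datum). A minimal sketch of the correction, not part of the source text:

```cpp
#include <cassert>
#include <cmath>

// Classic vertical-photo relief displacement: a point at elevation h (above
// the datum) imaged at radial distance r from the nadir point is displaced
// outward by d = r * h / H. Orthorectification removes this displacement
// using the elevation supplied by the DEM.
double reliefDisplacement(double r, double h, double H) {
    return r * h / H;
}

// Radial position the point would have if it lay on the datum surface.
double correctedRadialDistance(double r, double h, double H) {
    return r - reliefDisplacement(r, h, H);
}
```

The displacement grows linearly with r, which is why the text notes that parallax is worst at the image edges.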
S27: generating a three-dimensional database;
specifically, the key to three-dimensional modeling is to acquire an ultra-high-density point cloud from the imagery and then construct a TIN model of the ground objects from that point cloud, obtaining three-dimensional model data including the DSM (digital surface model), DOM (digital orthophoto map), TDOM (true digital orthophoto map), and vector data.
Wherein the following steps are also included after S25:
s28, TIN modeling;
in particular, the TIN approximates the terrain by using a series of non-intersecting, non-overlapping triangular faces that are joined together.
The advantage of the TIN model is that it can describe the terrain surface at different levels of resolution. At a given resolution it can represent complex surfaces more accurately with less storage and computation time. In particular, when the terrain contains many features such as break lines and structure lines, the TIN model accommodates them well, expressing the terrain morphology more accurately and reasonably. The triangulated irregular network model thus offers high precision, high speed, high efficiency, and easy handling of break lines and ground objects.
S29, autonomous texture mapping;
as can be appreciated, texture mapping is the process of mapping texels in texture space to pixels in screen space. In three-dimensional graphics it is the most widely used technique for rendering realistic objects. Texture mapping is an important part of producing photorealistic images: with it, convincing graphics can be produced without spending excessive effort modeling an object's surface detail.
Specifically, the implementation comprises the following steps:
s291: defining texture objects
const int TexNumber = 4;
GLuint m_Texture[TexNumber]; // define the texture object array
S292: generating texture object arrays
glGenTextures(TexNumber,m_Texture);
S293: the definition of the texture object is completed by selecting the texture object using the glBindTexture.
glBindTexture(GL_TEXTURE_2D, m_Texture[0]);
glTexImage2D(GL_TEXTURE_2D, 0, 3, m_Texmap1.GetWidth(), m_Texmap1.GetHeight(), 0,
GL_BGR_EXT, GL_UNSIGNED_BYTE, m_Texmap1.GetDibBitsPtr());
S294: the scene is loaded with corresponding textures through the glBindTexture before the scene is drawn.
glBindTexture(GL_TEXTURE_2D, m_Texture[0]);
S295: calling glDeleteTextures deletes texture objects before the program ends.
glDeleteTextures(TexNumber, m_Texture);
S210: a three-dimensional scene is generated.
Example 4: this embodiment provides a building feature segmentation and extraction method, as shown in fig. 4. A K-Means clustering algorithm is adopted to classify and extract building features: K-Means clustering segmentation is performed on the vertical-coordinate (z) data of the three-dimensional point-cloud coordinates of the various ground objects in the various regions of a city, following a set search strategy (for example, top-to-bottom or left-to-right); objects of similar height are grouped into one class, which serves as the basis for further extraction of buildings.
The specific process comprises the following steps:
Specifically, based on the three-dimensional model, the ground objects in the image are first preliminarily classified and extracted with a K-Means clustering algorithm using the height characteristics of buildings. Considering that the point cloud along the outline of a building's top floor has a certain height, outlines higher than 2 meters are preliminarily treated as candidates, serving as an initial screening scheme for target sites.
S31: inputting point cloud sample data;
specifically, a point cloud sample data set pi(x, y, z), i = 1, 2, ..., n, is input; let c = 1, where c denotes the cluster centers of different iteration rounds, and k initial cluster centers zj(c), j = 1, 2, ..., k, are selected.
S32: inputting the sample count n and the number of clusters k;
s33, initializing k clustering centers;
specifically, the distance D(pi, zj(c)), i = 1, 2, ..., n, between each point cloud sample and each cluster center is calculated, and the samples are classified accordingly.
S34: assigning each data object to the closest cluster;
Specifically, let c = c + 1, and compute the new cluster centers together with the value of the sum-of-squared-error criterion (objective function):
J(c) = Σ_{j=1..k} Σ_{pi ∈ cluster j} ‖pi − zj(c)‖²
s35: recalculating the centers of the clusters;
s36, judging whether the clustering center is converged;
specifically, judge whether |J(c+1) − J(c)| < θ (θ is the convergence discrimination threshold) or no object has changed class; if so, the algorithm ends; otherwise, let c = c + 1 and return to step S34.
And S37, outputting the clustering result.
Specifically, all point clouds pij, associating each point cloud sample i with the centroid of its class j, are recorded, yielding k clusters. The k cluster centroids sharing the same j are used for a preliminary building judgment. Considering the characteristic distribution of urban-building point clouds, after the k cluster objects are obtained, the k clustering results are re-screened using a vertical (z) coordinate greater than 2 meters as the preliminary criterion for building judgment, yielding d new clustered ground objects, which serve as the basis for further judging the buildings within the parcel.
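Steps S31-S37, together with the 2 m screening, can be sketched as a one-dimensional K-Means over the z coordinate (a simplified illustration with assumed initial centers and convergence threshold; the patent clusters full point-cloud samples):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Minimal 1-D K-Means over point-cloud z (height) values: assign each sample
// to the nearest center, recompute centers, and stop when the squared-error
// criterion J changes by less than theta (the convergence test of S36).
struct KMeans1D {
    std::vector<double> centers;
    std::vector<int> labels;
};

KMeans1D kmeans1d(const std::vector<double>& z, std::vector<double> centers,
                  double theta = 1e-6, int maxIter = 100) {
    std::vector<int> labels(z.size(), 0);
    double prevJ = 1e300;
    for (int it = 0; it < maxIter; ++it) {
        double J = 0.0;
        std::vector<double> sum(centers.size(), 0.0);
        std::vector<int> cnt(centers.size(), 0);
        for (size_t i = 0; i < z.size(); ++i) {
            int best = 0;
            double bestD = std::abs(z[i] - centers[0]);
            for (size_t j = 1; j < centers.size(); ++j) {
                double d = std::abs(z[i] - centers[j]);
                if (d < bestD) { bestD = d; best = static_cast<int>(j); }
            }
            labels[i] = best;
            J += bestD * bestD;
            sum[best] += z[i];
            ++cnt[best];
        }
        for (size_t j = 0; j < centers.size(); ++j)
            if (cnt[j] > 0) centers[j] = sum[j] / cnt[j];
        if (std::abs(prevJ - J) < theta) break;  // |J(c+1) - J(c)| < theta
        prevJ = J;
    }
    return {centers, labels};
}

// Preliminary building screen of S37: keep clusters whose center exceeds 2 m.
std::vector<int> buildingClusters(const std::vector<double>& centers) {
    std::vector<int> keep;
    for (size_t j = 0; j < centers.size(); ++j)
        if (centers[j] > 2.0) keep.push_back(static_cast<int>(j));
    return keep;
}
```

Run on heights {0.1, 0.2, 0.3, 10, 11, 12} with initial centers {0, 10}, the clusters settle at roughly 0.2 m and 11 m, and only the second passes the 2 m screen.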
Example 5: this embodiment provides a building region filling and feature point identification method. Specifically, mathematical morphology operations are adopted: the clustered, segmented image is eroded and dilated to remove noise, and holes in the connected regions of the image are filled to ensure the integrity of the segmented buildings.
On the basis of the discrimination result of embodiment 4, the building area is filled and non-closed regions are removed to obtain the initial outline of the building. Image morphology operations are applied to the d newly clustered ground objects, as follows:
S41: a convolution kernel B is defined; it can be of any shape and size, with a separately defined reference point (anchor point), and is typically a square or disk with the reference point at its center.
And S42, performing convolution on the kernel B and the clustered ground object d, and calculating the maximum value of the pixel points in the coverage area of the kernel B.
S43: if the point cloud corresponding to the maximum value cannot form a closed figure in the plane, the region is regarded as a non-building and removed, and the process returns to S42; if it can form a closed planar figure, the region is regarded as an urban building and the process proceeds to the next step.
S44: the maximum value is assigned to the pixel designated by the reference point, and the region in the image is grown step by step until the maximum region satisfying d is reached, yielding the filled clustered ground objects.
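The kernel-maximum operation of S42-S44 is ordinary grayscale dilation; a minimal sketch on a small integer image (square kernel of radius k with its anchor at the center, both assumptions not fixed by the source):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

using Image = std::vector<std::vector<int>>;

// Dilation: slide a (2k+1)x(2k+1) kernel over the image and write the
// maximum pixel value under the kernel to the anchor pixel. Erosion is
// the same loop with a minimum; erode-then-dilate removes small noise.
Image dilate(const Image& img, int k) {
    int rows = static_cast<int>(img.size());
    int cols = static_cast<int>(img[0].size());
    Image out(rows, std::vector<int>(cols, 0));
    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c) {
            int m = 0;
            for (int dr = -k; dr <= k; ++dr)
                for (int dc = -k; dc <= k; ++dc) {
                    int rr = r + dr, cc = c + dc;
                    if (rr >= 0 && rr < rows && cc >= 0 && cc < cols)
                        m = std::max(m, img[rr][cc]);
                }
            out[r][c] = m;  // assign the kernel maximum to the anchor
        }
    return out;
}
```

On a binary mask, repeated dilation grows each region outward, which is the region-growing behavior S44 relies on for hole filling.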
Specifically, the embodiment also provides a building contour forming and building area calculating method, specifically, a bayesian network classifier is adopted to extract the position of the contour feature point, finally, the contour of the building is obtained, and the area of the building area can be further calculated.
Specifically, building feature points in urban parcels are identified: a Bayesian network classifier is constructed for each clustered ground object obtained, building feature points are inferred with a Monte Carlo algorithm, and finally a contour sample most representative of the building is obtained, so that building parcels conforming to a certain probability distribution are extracted and automatic building extraction is achieved.
The method comprises the following specific steps:
S51: a Bayesian network classifier structure B(G, θ) is constructed for each clustering region, where the joint probability distribution over the attributes x1, x2, ..., xn of each sample point is defined, in the standard Bayesian-network factorization, as P(x1, x2, ..., xn) = Π_{i=1..n} P(xi | Pa(xi)), with Pa(xi) denoting the parents of xi in the graph G.
S52: for the sample points in the Bayesian network classifier structure B(G, θ), the EM (Expectation-Maximization) algorithm is adopted to obtain the distribution parameter θ that the samples obey.
It can be understood that if the distribution parameter θ that the samples obey is known, the expected value of the hidden variable z can be estimated from the observed training samples, as in step S521; and if the value of z is known, a new value of θ can be estimated by the maximum likelihood method, as in step S522. This process is repeated until the values of z and θ no longer change. The two steps are as follows:
S521 (E-step): infer the hidden-variable distribution P(z | x, θ(t)) under the current parameter θ(t), and compute the expectation of the log-likelihood function L(θ | x, z) with respect to z:
Q(θ | θ(t)) = E_{z | x, θ(t)} [ L(θ | x, z) ]
S522 (M-step): find the parameter that maximizes this expected likelihood, i.e.
θ(t+1) = argmax_θ Q(θ | θ(t))
S53: the point cloud containing the feature points corresponding to the distribution parameter θ that maximizes the expectation is taken as the building feature points, yielding the building classification.
Specifically, the feature-point cloud of the building region is obtained as a data set pi(x, y, z), i = 1, 2, ..., n, and the area of the building region is then calculated as follows:
S61: according to the clustering result, the feature-point cloud pi(x, y, z) of the i-th clustered building block is orthographically projected onto the f(x, y) plane, giving the data set of the i-th clustered building block at a certain height z: Di_z(x, y) = {(x1, y1), (x2, y2), ..., (xm, ym)};
S62: with j running from 1 to m over the projected outline points, the projection area of the i-th clustered building block (at height z) on the plane f(x, y) is obtained by accumulation according to the above formula; then let i = i + 1.
S63: step S62 is repeated; when i = n the calculation stops, and the projection areas of all clustered building blocks on the plane f(x, y) are obtained.
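The accumulation formula referenced in S62 did not survive extraction; a common choice for the area of an ordered planar outline (x1, y1) ... (xm, ym) is the shoelace formula, sketched here as an assumption rather than the patent's own formula:

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

// Shoelace formula: area of a simple polygon given its vertices in order,
// A = |sum_j (x_j * y_{j+1} - x_{j+1} * y_j)| / 2, indices wrapping mod m.
double shoelaceArea(const std::vector<std::pair<double, double>>& pts) {
    double s = 0.0;
    size_t m = pts.size();
    for (size_t j = 0; j < m; ++j) {
        const auto& [xj, yj] = pts[j];
        const auto& [xn, yn] = pts[(j + 1) % m];  // wrap to close the outline
        s += xj * yn - xn * yj;
    }
    return std::abs(s) / 2.0;
}
```

Accumulating one cross term per outline point and halving the absolute sum matches the "m accumulations per block" loop structure of S62.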
Example 6: the present embodiment provides a computer-readable storage medium storing computer-executable instructions for causing a computer to perform the methods involved in all the above embodiments.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (10)
1. A building feature extraction method based on three-dimensional modeling is characterized by comprising the following steps:
s10: acquiring an orthoimage and an oblique image, and preprocessing the images;
s20: performing three-dimensional modeling on the image acquired in the step S10;
s30: and (5) building feature segmentation and extraction.
2. The building feature extraction method based on three-dimensional modeling according to claim 1, further comprising:
s40, building area filling;
and S50, identifying the characteristic points of the building.
3. The building feature extraction method based on three-dimensional modeling according to claim 1, wherein the S20 specifically includes:
s21, inputting images of positive photography and oblique photography of the unmanned aerial vehicle;
s22, geometric correction of the image;
s23, carrying out overall combination adjustment on the regions;
s24, densely matching the multi-view images;
s25, generating a digital surface model and/or a three-dimensional irregular triangular net;
s26: orthorectification;
s27: and generating a three-dimensional database.
4. The building feature extraction method based on three-dimensional modeling according to claim 3, wherein said S25 is followed by further comprising:
s28, modeling a three-dimensional irregular triangular net;
s29, autonomous texture mapping;
s210: a three-dimensional scene is generated.
5. The building feature extraction method based on three-dimensional modeling according to claim 1, wherein step S30 specifically includes:
s31, classifying and extracting the building features by adopting a K-Means clustering algorithm;
s32: performing K-Means clustering segmentation on longitudinal coordinate data of the three-dimensional coordinates of the point cloud data according to a set search strategy;
S33: classifying the targets with similar vertical-coordinate data in step S32 into one class, as a basis for further extraction of the building.
6. The building feature extraction method based on three-dimensional modeling according to claim 2 or 5, wherein the step S40 specifically comprises:
s41: corroding and expanding the clustered and segmented image by adopting mathematical morphology operation to remove noise;
s42: and filling holes in the connected regions in the image.
7. The building feature extraction method based on three-dimensional modeling according to claim 2, wherein step S50 specifically includes:
s51: extracting the position of the feature point of the contour by adopting a Bayesian network classifier, and identifying the feature point of the building to obtain the contour of the building;
s52: the area of the building area is calculated based on step S51.
8. A building feature extraction system based on three-dimensional modeling, the system comprising:
the acquiring unit is used for acquiring an orthoimage and an oblique image and preprocessing the images;
and the processing unit is used for carrying out three-dimensional modeling and building feature segmentation and extraction on the preprocessed image.
9. The building feature extraction system based on three-dimensional modeling according to claim 8, wherein the processing unit is further used for building area filling and building feature point identification.
10. A computer-readable storage medium having stored thereon computer-executable instructions for causing a computer to perform the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910975930.1A CN110866531A (en) | 2019-10-15 | 2019-10-15 | Building feature extraction method and system based on three-dimensional modeling and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110866531A true CN110866531A (en) | 2020-03-06 |
Family
ID=69652569
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910975930.1A Pending CN110866531A (en) | 2019-10-15 | 2019-10-15 | Building feature extraction method and system based on three-dimensional modeling and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110866531A (en) |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050213808A1 (en) * | 2004-03-29 | 2005-09-29 | Fumio Ohtomo | Survey data processing system, storage medium for storing electronic map, and electronic map display device |
US20070010924A1 (en) * | 2005-07-11 | 2007-01-11 | Kabushiki Kaisha Topcon | Geographic data collecting system |
CN103886610A (en) * | 2014-04-05 | 2014-06-25 | 东北电力大学 | Image type defect detecting method for insulator |
CN104036239A (en) * | 2014-05-29 | 2014-09-10 | 西安电子科技大学 | Fast high-resolution SAR (synthetic aperture radar) image ship detection method based on feature fusion and clustering |
CN104200428A (en) * | 2014-08-18 | 2014-12-10 | 南京信息工程大学 | Microscopic image color convolution removal method and cutting method based on non-negative matrix factorization (NMF) |
CN106529431A (en) * | 2016-10-31 | 2017-03-22 | 武汉大学 | Road boundary point automatic extracting and vectorizing method based on on-vehicle laser scanning data |
CN107067394A (en) * | 2017-04-18 | 2017-08-18 | 中国电子科技集团公司电子科学研究院 | A kind of oblique photograph obtains the method and device of point cloud coordinate |
CN108171720A (en) * | 2018-01-08 | 2018-06-15 | 武汉理工大学 | A kind of oblique photograph model object frontier probe method based on geometrical statistic information |
CN108871285A (en) * | 2018-08-22 | 2018-11-23 | 上海华测导航技术股份有限公司 | Unmanned plane oblique photograph measuring system in planing final construction datum |
CN109827548A (en) * | 2019-02-28 | 2019-05-31 | 华南机械制造有限公司 | The processing method of aerial survey of unmanned aerial vehicle data |
CN110110727A (en) * | 2019-06-18 | 2019-08-09 | 南京景三医疗科技有限公司 | The image partition method post-processed based on condition random field and Bayes |
CN110232419A (en) * | 2019-06-20 | 2019-09-13 | 东北大学 | A kind of method of side slope rock category automatic identification |
CN110288700A (en) * | 2019-06-26 | 2019-09-27 | 东北大学 | A kind of slope structure face of rock quality is grouped automatically and displacement prediction method |
Non-Patent Citations (1)
Title |
---|
王洪峰等: "倾斜摄影实景三维单体化模型自适应聚类算法", 《应用科技》, vol. 44, no. 2, pages 35 - 39 * |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111563448A (en) * | 2020-04-30 | 2020-08-21 | 北京百度网讯科技有限公司 | Method and device for detecting illegal building, electronic equipment and storage medium |
CN111750849A (en) * | 2020-06-05 | 2020-10-09 | 武汉大学 | Target contour positioning and attitude-fixing adjustment method and system under multiple visual angles |
CN112085778A (en) * | 2020-08-04 | 2020-12-15 | 广东国地规划科技股份有限公司 | Oblique photography illegal building detection method and system based on superpixels and morphology |
CN112461209A (en) * | 2021-02-01 | 2021-03-09 | 国科天成科技股份有限公司 | Double-light fusion system of visible light and infrared light |
CN112862966B (en) * | 2021-02-20 | 2024-01-26 | 中煤航测遥感集团有限公司 | Method, device, equipment and storage medium for constructing surface three-dimensional model |
CN112862966A (en) * | 2021-02-20 | 2021-05-28 | 中煤航测遥感集团有限公司 | Method, device and equipment for constructing three-dimensional model of earth surface and storage medium |
CN113516777A (en) * | 2021-05-13 | 2021-10-19 | 天讯方舟(北京)信息科技有限公司 | Three-dimensional automatic modeling and visualization method for urban building |
CN113650783A (en) * | 2021-07-08 | 2021-11-16 | 江苏省地质测绘院 | Fixed wing oblique photography cadastral mapping method, system and equipment |
CN113674293A (en) * | 2021-08-20 | 2021-11-19 | 建信金融科技有限责任公司 | Picture processing method and device, electronic equipment and computer readable medium |
CN114120149A (en) * | 2021-11-09 | 2022-03-01 | 肇庆市城市规划设计院 | Oblique photogrammetry building feature point extraction method and device, electronic equipment and medium |
CN114120149B (en) * | 2021-11-09 | 2022-07-12 | 肇庆市城市规划设计院 | Oblique photogrammetry building feature point extraction method and device, electronic equipment and medium |
CN114413852B (en) * | 2022-01-13 | 2023-10-03 | 山东高速岩土工程有限公司 | Unmanned aerial vehicle auxiliary mapping method and system |
CN114413852A (en) * | 2022-01-13 | 2022-04-29 | 山东高速岩土工程有限公司 | Unmanned aerial vehicle auxiliary surveying and mapping method and system |
CN114417489B (en) * | 2022-03-30 | 2022-07-19 | 宝略科技(浙江)有限公司 | Building base contour refinement extraction method based on real-scene three-dimensional model |
CN114417489A (en) * | 2022-03-30 | 2022-04-29 | 宝略科技(浙江)有限公司 | Building base contour refinement extraction method based on real-scene three-dimensional model |
CN115439672A (en) * | 2022-11-04 | 2022-12-06 | 浙江大华技术股份有限公司 | Image matching method, illicit detection method, terminal device, and storage medium |
CN116468869A (en) * | 2023-06-20 | 2023-07-21 | 中色蓝图科技股份有限公司 | Live-action three-dimensional modeling method, equipment and medium based on remote sensing satellite image |
CN116863085A (en) * | 2023-09-04 | 2023-10-10 | 北京数慧时空信息技术有限公司 | Three-dimensional reconstruction system, three-dimensional reconstruction method, electronic equipment and storage medium |
CN116863085B (en) * | 2023-09-04 | 2024-01-09 | 北京数慧时空信息技术有限公司 | Three-dimensional reconstruction system, three-dimensional reconstruction method, electronic equipment and storage medium |
CN117132737A (en) * | 2023-10-26 | 2023-11-28 | 自然资源部第三地理信息制图院 | Three-dimensional building model construction method, system and equipment |
CN117132737B (en) * | 2023-10-26 | 2024-01-26 | 自然资源部第三地理信息制图院 | Three-dimensional building model construction method, system and equipment |
CN117437364A (en) * | 2023-12-20 | 2024-01-23 | 深圳大学 | Method and device for extracting three-dimensional structure of building based on residual defect cloud data |
CN117437364B (en) * | 2023-12-20 | 2024-04-26 | 深圳大学 | Method and device for extracting three-dimensional structure of building based on residual defect cloud data |
CN118640878A (en) * | 2024-08-16 | 2024-09-13 | 南昌航空大学 | Topography mapping method based on aviation mapping technology |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110866531A (en) | Building feature extraction method and system based on three-dimensional modeling and storage medium | |
US7983474B2 (en) | Geospatial modeling system and related method using multiple sources of geographic information | |
CN106327532B (en) | A kind of three-dimensional registration method of single image | |
Xu et al. | Reconstruction of scaffolds from a photogrammetric point cloud of construction sites using a novel 3D local feature descriptor | |
CN109598794B (en) | Construction method of three-dimensional GIS dynamic model | |
WO2018061010A1 (en) | Point cloud transforming in large-scale urban modelling | |
CN111323788B (en) | Building change monitoring method and device and computer equipment | |
CN113012063B (en) | Dynamic point cloud repairing method and device and computer equipment | |
JP4058293B2 (en) | Generation method of high-precision city model using laser scanner data and aerial photograph image, generation system of high-precision city model, and program for generation of high-precision city model | |
KR100904078B1 (en) | A system and a method for generating 3-dimensional spatial information using aerial photographs of image matching | |
CA2684893A1 (en) | Geospatial modeling system providing data thinning of geospatial data points and related methods | |
CN111458691B (en) | Building information extraction method and device and computer equipment | |
KR101079475B1 (en) | A system for generating 3-dimensional urban spatial information using point cloud filtering | |
KR101079531B1 (en) | A system for generating road layer using point cloud data | |
CN117367404A (en) | Visual positioning mapping method and system based on SLAM (sequential localization and mapping) in dynamic scene | |
CN118247429A (en) | Air-ground cooperative rapid three-dimensional modeling method and system | |
CN116051980B (en) | Building identification method, system, electronic equipment and medium based on oblique photography | |
CN114758087B (en) | Method and device for constructing urban information model | |
KR20130002244A (en) | A system for generating urban spatial information using building data selected according to level information of urban spatial information model | |
CN114943711A (en) | Building extraction method and system based on LiDAR point cloud and image | |
KR101114904B1 (en) | A system and method for generating urban spatial information using a draft map and an aerial laser measurement data | |
KR101079359B1 (en) | A system for generating digital map using an aerial photograph and aerial light detection of ranging data | |
KR101103491B1 (en) | A system and method for generating road layer using an aerial light detection and ranging data | |
KR101083902B1 (en) | A system for generating 3-dimensional spatial information using an aerial lidar surveying data | |
KR20120138606A (en) | A system for generating road layer using continuity analysis of ground data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
RJ01 | Rejection of invention patent application after publication | | |
Application publication date: 20200306 |