
CN114120149B - Oblique photogrammetry building feature point extraction method and device, electronic equipment and medium - Google Patents

Oblique photogrammetry building feature point extraction method and device, electronic equipment and medium

Info

Publication number
CN114120149B
Authority
CN
China
Prior art keywords
dimensional model
building
filled
region
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111322687.7A
Other languages
Chinese (zh)
Other versions
CN114120149A (en)
Inventor
李健斌
吴文海
陈国锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhaoqing Urban Planning And Design Institute
Original Assignee
Zhaoqing Urban Planning And Design Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhaoqing Urban Planning And Design Institute filed Critical Zhaoqing Urban Planning And Design Institute
Priority to CN202111322687.7A priority Critical patent/CN114120149B/en
Publication of CN114120149A publication Critical patent/CN114120149A/en
Application granted granted Critical
Publication of CN114120149B publication Critical patent/CN114120149B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F18/24155 Bayesian classification
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/047 Probabilistic or stochastic networks
    • G06N3/08 Learning methods
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present application relates to the field of surveying and mapping, and in particular to a method, an apparatus, an electronic device, and a medium for extracting feature points of a building through oblique photogrammetry. The method comprises the following steps: acquiring a three-dimensional model of the target building generated based on oblique photogrammetry; determining a region to be filled in the three-dimensional model; determining feature points in the region to be filled based on a Bayesian network classifier; and fusing the feature points of the region to be filled to the corresponding position of the three-dimensional model. The accuracy of building feature extraction is thereby improved.

Description

Oblique photogrammetry building feature point extraction method and device, electronic equipment and medium
Technical Field
The present application relates to the field of surveying and mapping, and in particular, to a method and an apparatus for extracting feature points of a building through oblique photogrammetry, an electronic device, and a medium.
Background
In recent years, with the rapid development of unmanned aerial vehicle (UAV) technology and oblique photography, performing oblique photography with a UAV equipped with multi-lens sensors to rapidly generate real-scene three-dimensional city models has become an important means of acquiring three-dimensional geographic information data. By continually improving the performance of the onboard cameras and optimizing UAV flight routes, practitioners obtain more detailed ground-feature information and generate increasingly refined three-dimensional building models.
However, owing to the characteristics of the triangulation network, the triangulation network fits surfaces well but has difficulty accurately fitting linear targets with non-smooth outlines and the salient corner points of buildings, which reduces the accuracy of building feature extraction.
Disclosure of Invention
In order to accurately extract the building features, the application provides a method and a device for extracting the feature points of the building through oblique photogrammetry, an electronic device and a medium.
In a first aspect, the present application provides a method for extracting feature points of a building through oblique photogrammetry, which adopts the following technical scheme:
a method for extracting feature points of a building through oblique photogrammetry comprises the following steps:
acquiring a three-dimensional model of the target building generated based on oblique photogrammetry;
determining a region to be filled in the three-dimensional model;
determining feature points in the region to be filled based on a Bayesian network classifier;
and fusing the feature points of the region to be filled to the corresponding position of the three-dimensional model.
By adopting the above technical scheme, the feature points that are poorly fitted in the three-dimensional model are extracted with the Bayesian classifier, and the extracted feature points are filled into the corresponding positions of the three-dimensional model, so that building feature points can be effectively identified where the fitting accuracy of the triangulation network is low, improving the accuracy of building feature extraction.
In another possible implementation manner, the method further includes:
determining an orthographic contour line of the building;
taking the feature points in the region to be filled as filling feature points;
judging whether the filling feature points located at the top of the three-dimensional model fall within the orthographic contour line of the building;
and generating accuracy information based on the judgment result.
By adopting the above technical scheme, the accuracy of the generated filling feature points can be assessed by judging whether the feature points located at the top of the three-dimensional model fall within the orthographic contour line of the building, improving the accuracy of building feature point extraction.
In another possible implementation manner, generating the accuracy information based on the judgment result includes:
determining a deviation vector between the filling feature points located at the top of the three-dimensional model and the orthographic contour line of the building;
and generating accuracy information based on the deviation vector.
By adopting the above technical scheme, whether a feature point at the top of the three-dimensional model falls within the contour line of the building is judged via the norm of the deviation vector, and the offset of the filling feature point can further be extracted.
In another possible implementation manner, the determining a region to be filled in the three-dimensional model includes:
and inputting the three-dimensional model into a trained neural network model, and acquiring a region to be filled generated by the trained neural network model.
By adopting the above technical scheme, the region to be filled of the three-dimensional model under test is extracted automatically by the trained neural network model, improving the accuracy of identifying the feature points that need to be supplemented.
In another possible implementation manner, training the neural network model includes:
acquiring a sample three-dimensional model and the manual annotation information corresponding to the sample three-dimensional model;
and training the neural network model with the sample three-dimensional model and the manual annotation information as training samples.
In another possible implementation manner, before the acquiring the three-dimensional model generated by the target building based on oblique photogrammetry, the method further includes: acquiring point cloud data of a target building, and denoising the point cloud data of the target building.
In a second aspect, the present application provides an oblique photogrammetry building feature point extraction device, which adopts the following technical scheme:
an oblique photogrammetry building feature point extraction device, comprising:
the construction module is used for acquiring a three-dimensional model of the target building generated based on oblique photogrammetry;
the region division module is used for determining a region to be filled in the three-dimensional model;
the characteristic re-identification module is used for determining characteristic points in the region to be filled based on a Bayesian network classifier;
and the fusion module is used for fusing the characteristic points of the area to be filled to the position corresponding to the three-dimensional model.
In another possible implementation manner, the extraction apparatus further includes a comparison module, where the comparison module is configured to:
determine an orthographic contour line of the building;
take the feature points in the region to be filled as filling feature points;
judge whether the filling feature points located at the top of the three-dimensional model fall within the orthographic contour line of the building;
and generate accuracy information based on the judgment result.
In another possible implementation manner, when the comparison module generates the accuracy information based on the judgment result, the comparison module is specifically configured to:
determine a deviation vector between the filling feature points located at the top of the three-dimensional model and the orthographic contour line of the building;
and generate accuracy information based on the deviation vector.
In another possible implementation manner, when determining the region to be filled in the three-dimensional model, the region dividing module is specifically configured to:
and inputting the three-dimensional model into a trained neural network model, and acquiring a region to be filled generated by the trained neural network model.
In another possible implementation manner, when training the neural network model, the region division module is specifically configured to:
acquiring a sample three-dimensional model and artificial marking information corresponding to the sample three-dimensional model;
and taking the sample three-dimensional model and the artificial labeling information as training samples to train the neural network model.
In another possible implementation manner, before acquiring the three-dimensional model of the target building generated based on oblique photogrammetry, the construction module is further configured to: acquire point cloud data of the target building, and denoise the point cloud data of the target building.
In a third aspect, the present application provides an electronic device, which adopts the following technical solutions:
an electronic device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to execute the above oblique photogrammetry building feature point extraction method.
In a fourth aspect, the present application provides a computer-readable storage medium, which adopts the following technical solutions:
a computer-readable storage medium storing a computer program that can be loaded by a processor to implement the above oblique photogrammetry building feature point extraction method.
In summary, the present application includes at least one of the following beneficial technical effects:
the feature points that are poorly fitted in the three-dimensional model are extracted with the Bayesian classifier, and the extracted feature points are filled into the corresponding positions of the three-dimensional model, so that building feature points can be effectively identified where the fitting accuracy of the triangulation network is low, improving the accuracy of building feature extraction.
Drawings
FIG. 1 is a schematic flow chart of a method for extracting feature points of a building through oblique photogrammetry according to an embodiment of the application;
FIG. 2 is a block diagram of an oblique photogrammetry building feature point extraction apparatus according to an embodiment of the present application;
fig. 3 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The present application is described in further detail below with reference to figures 1-3.
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In addition, the term "and/or" herein is only one kind of association relationship describing an associated object, and means that there may be three kinds of relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship, unless otherwise specified.
The embodiment of the application provides a method for extracting characteristic points of a building through oblique photogrammetry, which is executed by electronic equipment and comprises the following steps:
step S101, a three-dimensional model generated by the target building based on oblique photogrammetry is obtained.
Oblique photogrammetry is a high-and-new technology that has emerged and developed in the international surveying, mapping, and remote-sensing field. It integrates traditional aerial photography with close-range surveying and breaks through the past limitation that orthoimages could only be captured from a vertical angle: by carrying one or more lenses on the same flight platform, images are acquired from five different angles (vertical, forward-looking, left-looking, right-looking, and backward-looking). The vertical images can be processed with traditional aerial photogrammetry techniques to produce 4D products (DEM, DOM, DLG, and DRG), while the four oblique images (forward, left, right, and backward) are used to acquire rich texture information for the sides of ground features. In oblique photography, beyond multi-angle imaging of the rooftops, the camera attitudes within the camera-array combination are adjusted toward the different sides of the ground features, photographing the target building through 360 degrees.
For the embodiment of the application, the target building is a single building: single buildings are identified within a building group through feature clustering and segmentation, and any one of the single buildings is determined as the target building.
Before generating the three-dimensional model, the acquired images are first preprocessed: images that do not meet requirements are checked and removed, image color balancing is performed, irrelevant information in the images is eliminated, and so on.
the process of generating a three-dimensional model includes: and performing image geometric correction on the preprocessed image, wherein the image geometric correction comprises the following steps: processing such as image matrix conversion, determination of a correction transformation function, analysis of errors, left transformation, image resampling and the like; then, the corrected images are subjected to integral combined adjustment in the region, the geometric deformation and shielding relation among the images is eliminated, a pyramid matching strategy from coarse to fine is adopted, and homonymy point automatic matching and free net beam method adjustment are carried out on each level of images, so that a better homonymy point matching result is obtained; further, the multi-view images are densely matched, redundant information is fully considered in the matching process, and coordinates of the same-name points on the multi-view images are accurately acquired; then, generating a digital surface model and/or a three-dimensional irregular triangulation network, and correcting based on image space and geometric distortion to generate a multi-center projection plane orthoimage; and finally, generating a three-dimensional database, wherein the three-dimensional database comprises three-dimensional model data including DSM (digital surface model), DOM (document object model), TDOM (time difference of arrival) and vector data, and the three-dimensional model data including the DSM, the DOM, the TDOM and the vector data are three-dimensional model data generated after a TIN (triangulated irregular network) model of a ground feature is built based on the acquired ultrahigh-density point cloud of the image.
And S102, determining a region to be filled in the three-dimensional model.
Specifically, owing to the characteristics of the triangulation network, the triangulation network fits surfaces well, but accurate fitting is difficult for linear targets with non-smooth outlines and for the salient corner points of buildings. For the embodiment of the application, the preset regions containing the feature points that correspond to the poorly fitted linear targets and salient corner points of the triangulation network are used as the regions to be filled.
And S103, determining the characteristic points in the region to be filled based on the Bayesian network classifier.
A Bayesian network classifier is constructed for the target building, and the building feature points are obtained through inference based on a Monte Carlo algorithm. For each target building, a Bayesian network classifier structure B = (G, θ) is constructed, in which the joint probability distribution of the attributes of the sample points is defined as formula (1):
$P_B(x_1, x_2, \ldots, x_n) = \prod_{i=1}^{n} P\left(x_i \mid \pi(x_i), \theta\right)$, where $\pi(x_i)$ denotes the set of parent attributes of $x_i$ in the network structure $G$
formula (1)
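The product of conditional probabilities in formula (1) can be sketched for a small discrete network. The variable names and probability-table values below are hypothetical illustrations, not values from the patent:

```python
def joint_probability(assignment, parents, cpt):
    # P(x1..xn) = product over i of P(x_i | parents(x_i)), formula (1)
    p = 1.0
    for var, value in assignment.items():
        parent_values = tuple(assignment[pa] for pa in parents[var])
        p *= cpt[var][parent_values][value]
    return p

# toy two-node network: "smooth" has no parents, "corner" depends on "smooth"
parents = {"smooth": (), "corner": ("smooth",)}
cpt = {
    "smooth": {(): {0: 0.3, 1: 0.7}},
    "corner": {(0,): {0: 0.2, 1: 0.8}, (1,): {0: 0.9, 1: 0.1}},
}
```

For example, P(smooth = 0, corner = 1) is 0.3 × 0.8 = 0.24 under these hypothetical tables.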
For the sample points in the Bayesian network classifier structure B(G, θ), the distribution parameter θ obeyed by the samples is solved based on the EM (Expectation-Maximization) algorithm.
Specifically, if the distribution parameter θ obeyed by the samples is known, the expected value of the hidden variable z can be inferred from the observed training samples. Concretely: infer the hidden-variable distribution P(z | x, θ^(t)) with the current parameter θ^(t), and compute the expectation of the log-likelihood function L(θ | x, z) with respect to z:
$Q\left(\theta \mid \theta^{(t)}\right) = \mathbb{E}_{z \sim P(z \mid x, \theta^{(t)})}\left[\log L(\theta \mid x, z)\right] = \sum_{z} P\left(z \mid x, \theta^{(t)}\right) \log L(\theta \mid x, z)$
formula (2)
If the value of z is known in expectation, the sought parameter is the one that maximizes the expected log-likelihood, i.e.:
$\theta^{(t+1)} = \arg\max_{\theta} Q\left(\theta \mid \theta^{(t)}\right)$
formula (3)
The point cloud containing the feature points that correspond to the expectation-maximizing distribution parameter θ constitutes the feature points of the building.
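The E-step and M-step above can be sketched on a simple model. The sketch below fits a two-component one-dimensional Gaussian mixture; this is a stand-in for the patent's point-attribute distribution, which is not given in closed form here:

```python
import math

def em_gmm_1d(xs, iters=50):
    # init: two means from the data extremes, unit variances, equal weights
    mu = [min(xs), max(xs)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibilities P(z = k | x, theta^(t))
        resp = []
        for x in xs:
            p = [w[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 / math.sqrt(2 * math.pi * var[k]) for k in (0, 1)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: theta^(t+1) maximizes the expected log-likelihood Q
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk + 1e-6
    return mu, var, w
```

Each iteration alternates the two formulas: the E-step evaluates the hidden-variable distribution under the current parameter, and the M-step re-estimates the parameter.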
And S104, fusing the characteristic points of the region to be filled to the position corresponding to the three-dimensional model.
The feature points in the region to be filled are added to the three-dimensional model, supplementing the model with accurately fitted feature points and thereby achieving an accurate description of the complete building outline.
A possible implementation manner of the embodiment of the present application, before obtaining the three-dimensional model generated by the target building based on oblique photogrammetry, further includes: and acquiring point cloud data of the target building, and denoising the point cloud data of the target building.
The denoising of the point cloud data includes filtering out elevation outliers. Each image of the target building is associated with a shooting time and the corresponding shooting-height information, and filtering out elevation outliers reduces the interference of invalid information.
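A minimal sketch of elevation-outlier filtering, assuming points are (x, y, z) tuples and using a simple standard-deviation criterion; the patent does not specify the exact filtering rule:

```python
def filter_elevation_outliers(points, k=2.0):
    # drop points whose height deviates more than k standard deviations from the mean
    zs = [p[2] for p in points]
    mean = sum(zs) / len(zs)
    std = (sum((z - mean) ** 2 for z in zs) / len(zs)) ** 0.5
    if std == 0:
        return list(points)
    return [p for p in points if abs(p[2] - mean) <= k * std]
```

In practice k is tuned to the terrain; robust statistics (median and MAD) are a common alternative when outliers are frequent.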
In a possible implementation manner of the embodiment of the present application, the method further includes: determining an orthographic contour line of the building; taking the feature points in the region to be filled as filling feature points; judging whether the filling feature points located at the top of the three-dimensional model fall within the orthographic contour line of the building; and generating accuracy information based on the judgment result.
Wherein determining the building contour line includes: determining a coarse contour line from the mask of the target building; correcting the coarse building contour line based on line matching; and regularizing the building boundary by fusing the matching lines with the contour line of the coarse building mask. Specifically, the contour line of the mask of the target building is coarse and irregular; to construct a semantic building model, the coarse building contour line needs to be regularized. The purpose of regularizing the building contour line is to revise and re-describe the coarse contour line, and since the orthoimage may suffer from distortion, occlusion, and similar defects during generation, the regularization of the building contour line is completed based on straight-line-segment information obtained by oblique photogrammetry. The coarse building contour line may be extracted based on the DIM (dense image matching) point cloud, with the building facades obtained from the density of projected points (DoPP). Further, regularizing the building boundary by fusing the matching lines and the contour line of the coarse building mask specifically includes: adjusting the directions of the coarse contour-line segments extracted from the building mask to the main direction of the building or its perpendicular; merging parallel straight-line segments; and completing the gaps remaining between the merged straight-line segments while deleting invalid polygons, thereby generating the regularized contour line of the building.
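One of the regularization steps above, snapping contour segments to the building's main direction or its perpendicular, can be sketched as follows. This is a toy version: production pipelines estimate the main direction robustly and also merge and gap-fill segments afterwards:

```python
import math

def snap_to_main_direction(segments, main_angle_deg):
    # rotate each segment about its midpoint so it becomes parallel or
    # perpendicular to the building's main direction, whichever is closer;
    # segment length and midpoint are preserved
    out = []
    for (x1, y1), (x2, y2) in segments:
        ang = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180
        cands = [main_angle_deg % 180, (main_angle_deg + 90) % 180]
        # angular distance on the 180-degree circle of undirected lines
        target = min(cands, key=lambda a: min(abs(ang - a), 180 - abs(ang - a)))
        half = math.hypot(x2 - x1, y2 - y1) / 2
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        dx = half * math.cos(math.radians(target))
        dy = half * math.sin(math.radians(target))
        out.append(((cx - dx, cy - dy), (cx + dx, cy + dy)))
    return out
```

A nearly horizontal segment snapped against a main direction of 0 degrees comes out exactly horizontal with its length unchanged.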
In a possible implementation manner of the embodiment of the application, generating the accuracy information based on the judgment result includes: determining a deviation vector between a filling feature point located at the top of the three-dimensional model and the orthographic contour line of the building; and generating accuracy information based on the deviation vector.
Specifically, it is judged whether the filling feature point falls within the contour line at the corresponding position of the building: the vector difference (in both the horizontal and vertical directions) between the filling point and the corresponding point on the three-dimensional model is computed. If the norm of the deviation vector is within a preset error range, valid inference information is generated; if it does not fall within the preset error range, invalid inference information is generated. Valid inference information indicates that the generated filling feature points are highly accurate and can describe the building features precisely; invalid inference information indicates that the generated filling feature points are inaccurate and cannot describe the building features precisely.
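A minimal sketch of the deviation-vector test, assuming 3-D point tuples and a preset error range given as a scalar tolerance:

```python
def check_fill_point(fill_pt, model_pt, tolerance):
    # deviation vector between the filling point and the corresponding model point;
    # the point is valid when the vector's norm falls within the preset error range
    dev = tuple(f - m for f, m in zip(fill_pt, model_pt))
    norm = sum(d * d for d in dev) ** 0.5
    return ("valid" if norm <= tolerance else "invalid"), norm
```

Horizontal and vertical deviations could also be thresholded separately; the scalar norm shown here is the simplest variant.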
In a possible implementation manner of the embodiment of the present application, determining the region to be filled in the three-dimensional model includes: inputting the three-dimensional model into the trained neural network model, and acquiring the region to be filled generated by the trained neural network model. Training the neural network model includes: acquiring a sample three-dimensional model and the labeled-region information corresponding to the sample three-dimensional model; and training the neural network model with the sample three-dimensional model and the labeled-region information as training samples.
Specifically, the manual annotation information corresponding to the non-smooth linear targets and salient corners in the sample three-dimensional model consists of manually labeled or corrected feature points. By recognizing the labeled regions of non-smooth linear targets and salient corners, the non-smooth linear targets and salient corners in the three-dimensional model under test are identified, thereby achieving automatic identification of the region to be filled in the three-dimensional model under test.
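As a hedged stand-in for the trained neural network model (whose architecture the patent does not specify), the sketch below trains a simple perceptron on hypothetical per-region features to flag regions that need filling; the feature choice and labels are illustrative only:

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    # samples: per-region feature vectors (e.g. a local roughness score)
    # labels: 1 = region needs filling, 0 = region is fitted well enough
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # mistake-driven update
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
```

A real detector would be a deep network over geometric and image features, but the supervised loop (labeled samples in, decision boundary out) has the same shape.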
Referring to fig. 2, the above embodiments describe the oblique photogrammetry building feature point extraction method from the perspective of the method flow; the following embodiments describe an oblique photogrammetry building feature point extraction apparatus 100 from the perspective of virtual modules or virtual units, as detailed below.
An oblique photogrammetric building feature point extraction device 100 comprising:
a building module 1001 for obtaining a three-dimensional model of a target building generated based on oblique photogrammetry;
the region dividing module 1002 is configured to determine a region to be filled in the three-dimensional model;
the feature re-identification module 1003 is configured to determine feature points in the region to be filled based on the bayesian network classifier;
and a fusion module 1004, configured to fuse the feature points of the region to be filled to a position corresponding to the three-dimensional model.
In another possible implementation manner, the extraction apparatus 100 further includes a comparison module, and the comparison module is configured to:
determine an orthographic contour line of the building;
take the feature points in the region to be filled as filling feature points;
judge whether the filling feature points located at the top of the three-dimensional model fall within the orthographic contour line of the building;
and generate accuracy information based on the judgment result.
In another possible implementation manner, when the comparison module generates the accuracy information based on the judgment result, the comparison module is specifically configured to:
determine a deviation vector between the filling feature points located at the top of the three-dimensional model and the orthographic contour line of the building;
and generate accuracy information based on the deviation vector.
In another possible implementation manner, when determining the region to be filled in the three-dimensional model, the region dividing module 1002 is specifically configured to:
and inputting the three-dimensional model into the trained neural network model, and acquiring the region to be filled generated by the trained neural network model.
In another possible implementation, when training the neural network model, the region dividing module 1002 is specifically configured to:
acquire a sample three-dimensional model and manual annotation information corresponding to the sample three-dimensional model; and
train the neural network model with the sample three-dimensional model and the manual annotation information as training samples.
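The training step above is supervised learning on manually annotated samples. The patent leaves the network architecture unspecified, so a one-layer logistic model stands in for the neural network in this sketch; the per-vertex curvature feature and the annotator's labels are synthetic:

```python
import numpy as np

def train_fill_classifier(features, labels, lr=0.5, epochs=500):
    """Gradient descent on logistic loss: label each vertex fill-region (1) or not (0)."""
    w = np.zeros(features.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(features @ w + b)))  # sigmoid predictions
        grad = p - labels                               # logistic-loss gradient
        w -= lr * features.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

# Synthetic sample model: vertices with high local curvature (the single feature)
# were marked as fill regions by the annotator.
X = np.array([[0.1], [0.2], [0.8], [0.9]])
y = np.array([0, 0, 1, 1])
w, b = train_fill_classifier(X, y)
pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
```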
In another possible implementation, before acquiring the three-dimensional model of the target building generated based on oblique photogrammetry, the building module 1001 is further configured to: acquire point cloud data of the target building, and denoise the point cloud data of the target building.
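The patent does not specify the denoising algorithm; statistical outlier removal is a common choice for photogrammetric point clouds and can serve as a sketch. The neighbour count `k` and the `std_ratio` threshold are assumed parameters:

```python
import numpy as np

def denoise_point_cloud(points, k=8, std_ratio=2.0):
    """Statistical outlier removal: drop points whose mean distance to their k
    nearest neighbours exceeds (global mean + std_ratio * global std)."""
    # Full pairwise distance matrix; fine for small clouds, O(N^2) in general.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)
    mean_knn = d[:, 1:k + 1].mean(axis=1)  # column 0 is the zero self-distance
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= thresh]

# A tight cluster plus one far outlier; the outlier is removed.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0, 0.1, (30, 3)), [[50.0, 50.0, 50.0]]])
clean = denoise_point_cloud(cloud, k=5)
```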
The embodiment of the present application also provides an electronic device from the perspective of a physical apparatus. As shown in fig. 3, the electronic device 1100 includes a processor 1101 and a memory 1103. The processor 1101 is coupled to the memory 1103, for example via a bus 1102. Optionally, the electronic device 1100 may also include a transceiver 1104. It should be noted that in practical applications the number of transceivers 1104 is not limited to one, and the structure of the electronic device 1100 does not constitute a limitation on the embodiments of the present application.
The processor 1101 may be a CPU (Central Processing Unit), a general-purpose processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor 1101 may also be a combination that implements computing functions, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor.
The bus 1102 may include a path for transferring information between the above components. The bus 1102 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus 1102 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in fig. 3, but this does not mean that there is only one bus or one type of bus.
The memory 1103 may be a ROM (Read-Only Memory) or other type of static storage device that can store static information and instructions, a RAM (Random Access Memory) or other type of dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read-Only Memory), a CD-ROM (Compact Disc Read-Only Memory) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
The memory 1103 is used for storing application program code for executing the solutions of the present application, and execution is controlled by the processor 1101. The processor 1101 is configured to execute the application program code stored in the memory 1103 to implement the content shown in the foregoing method embodiments.
The electronic device includes, but is not limited to: mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), tablet computers (PADs), PMPs (portable multimedia players), and in-vehicle terminals (e.g., car navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. It may also be a server or the like. The electronic device shown in fig. 3 is only an example, and should not impose any limitation on the functions and scope of use of the embodiments of the present application.
It should be understood that although the steps in the flowcharts of the figures are displayed in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution order of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present application. It should be noted that those skilled in the art can make various improvements and modifications without departing from the principles of the present application, and such improvements and modifications shall also fall within the protection scope of the present application.

Claims (7)

1. A method for extracting building feature points in oblique photogrammetry, characterized by comprising:
acquiring a three-dimensional model of a target building generated based on oblique photogrammetry;
determining a region to be filled in the three-dimensional model, wherein the region to be filled comprises non-smooth linear targets and protruding corners in the three-dimensional model that are to be filled;
determining feature points in the region to be filled based on a Bayesian network classifier;
fusing the feature points of the region to be filled to the corresponding position in the three-dimensional model;
determining an orthographic contour line of the building;
taking the feature points in the region to be filled as filling feature points;
judging whether the filling feature points located at the top of the three-dimensional model fall within the orthographic contour line of the building; and
generating accuracy information based on the judgment result, which specifically comprises:
determining a deviation vector between the filling feature points located at the top of the three-dimensional model and the orthographic contour line of the building; and
generating the accuracy information based on the deviation vector.
2. The method of claim 1, wherein the determining a region to be filled in the three-dimensional model comprises:
inputting the three-dimensional model into a trained neural network model, and acquiring the region to be filled generated by the trained neural network model.
3. The method of claim 2, wherein training the neural network model comprises:
acquiring a sample three-dimensional model and manual annotation information corresponding to the sample three-dimensional model; and
training the neural network model with the sample three-dimensional model and the manual annotation information as training samples.
4. The method of claim 1, wherein before acquiring the three-dimensional model of the target building generated based on oblique photogrammetry, the method further comprises: acquiring point cloud data of the target building, and denoising the point cloud data of the target building.
5. An oblique photogrammetry building feature point extraction apparatus, characterized by comprising:
a building module, configured to obtain a three-dimensional model of a target building generated based on oblique photogrammetry;
a region dividing module, configured to determine a region to be filled in the three-dimensional model, wherein the region to be filled comprises non-smooth linear targets and protruding corners in the three-dimensional model that are to be filled;
a feature re-identification module, configured to determine feature points in the region to be filled based on a Bayesian network classifier;
a fusion module, configured to fuse the feature points of the region to be filled to the corresponding position in the three-dimensional model;
an orthographic contour line determining module, configured to determine an orthographic contour line of the building;
a filling feature point determining module, configured to take the feature points in the region to be filled as filling feature points;
a judging module, configured to judge whether the filling feature points located at the top of the three-dimensional model fall within the orthographic contour line of the building; and
an accuracy information generation module, configured to generate accuracy information based on the judgment result;
wherein the accuracy information generation module comprises:
a deviation vector determination submodule, configured to determine a deviation vector between the filling feature points located at the top of the three-dimensional model and the orthographic contour line of the building; and
an accuracy information generation submodule, configured to generate the accuracy information based on the deviation vector.
6. An electronic device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors to perform the method of any one of claims 1 to 4.
7. A computer-readable storage medium, storing a computer program that can be loaded by a processor to perform the method of any one of claims 1 to 4.
CN202111322687.7A 2021-11-09 2021-11-09 Oblique photogrammetry building feature point extraction method and device, electronic equipment and medium Active CN114120149B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111322687.7A CN114120149B (en) 2021-11-09 2021-11-09 Oblique photogrammetry building feature point extraction method and device, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN114120149A CN114120149A (en) 2022-03-01
CN114120149B true CN114120149B (en) 2022-07-12

Family

ID=80377759

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111322687.7A Active CN114120149B (en) 2021-11-09 2021-11-09 Oblique photogrammetry building feature point extraction method and device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN114120149B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114510772B (en) * 2022-04-08 2022-06-17 北京飞渡科技有限公司 Rapid generation method of building surface contour based on oblique scanning data
CN116051980B (en) * 2022-12-13 2024-02-09 北京乾图科技有限公司 Building identification method, system, electronic equipment and medium based on oblique photography
CN117036393B (en) * 2023-10-09 2023-12-19 中国电建集团昆明勘测设计研究院有限公司 Building contour extraction method based on inclined live-action three-dimensional model

Citations (4)

Publication number Priority date Publication date Assignee Title
CN109186551A (en) * 2018-08-08 2019-01-11 广州市城市规划勘测设计研究院 Oblique photograph measures building feature point extracting method, device and storage medium
CN110866531A (en) * 2019-10-15 2020-03-06 深圳新视达视讯工程有限公司 Building feature extraction method and system based on three-dimensional modeling and storage medium
CN111047698A (en) * 2019-12-17 2020-04-21 中煤航测遥感集团有限公司 Real projective image acquisition method
CN113379684A (en) * 2021-05-24 2021-09-10 武汉港迪智能技术有限公司 Container corner line positioning and automatic container landing method based on video

Family Cites Families (4)

Publication number Priority date Publication date Assignee Title
US9609307B1 (en) * 2015-09-17 2017-03-28 Legend3D, Inc. Method of converting 2D video to 3D video using machine learning
CN109993783B (en) * 2019-03-25 2020-10-27 北京航空航天大学 Roof and side surface optimization reconstruction method for complex three-dimensional building point cloud
CN110866973B (en) * 2019-08-23 2023-04-18 上海交通大学 Aerospace reservoir and shore integrated emergency survey data fusion and integrated display system and method
US11227446B2 (en) * 2019-09-27 2022-01-18 Apple Inc. Systems, methods, and graphical user interfaces for modeling, measuring, and drawing using augmented reality


Non-Patent Citations (2)

Title
Extracting 3D urban models from oblique aerial images; Norbert Haala et al.; 2015 Joint Urban Remote Sensing Event (JURSE); 20150611; full text *
Research on Key Technologies of Building Extraction from Oblique Aerial Images; Dong Youqiang; China Doctoral Dissertations Full-text Database, Basic Sciences; 20200915; full text *


Similar Documents

Publication Publication Date Title
CN114120149B (en) Oblique photogrammetry building feature point extraction method and device, electronic equipment and medium
CN112927370B (en) Three-dimensional building model construction method and device, electronic equipment and storage medium
US20200134866A1 (en) Position estimation system and position estimation method
CN112444242B (en) Pose optimization method and device
EP2662804B1 (en) Method and apparatus for detecting continuous road partition
CN114612665B (en) Pose estimation and dynamic vehicle detection method based on normal vector histogram features
CN115797454B (en) Multi-camera fusion sensing method and device under bird's eye view angle
CN112862890B (en) Road gradient prediction method, device and storage medium
US20240029448A1 (en) Parking space detection method, apparatus, device and storage medium
WO2024012211A1 (en) Autonomous-driving environmental perception method, medium and vehicle
CN114730472A (en) Calibration method for external parameters of vehicle-mounted camera and related device
CN113177974A (en) Point cloud registration method and device, electronic equipment and storage medium
US11373328B2 (en) Method, device and storage medium for positioning object
CN116844124A (en) Three-dimensional object detection frame labeling method, three-dimensional object detection frame labeling device, electronic equipment and storage medium
CN112836698A (en) Positioning method, positioning device, storage medium and electronic equipment
CN112219225A (en) Positioning method, system and movable platform
CN113673288B (en) Idle parking space detection method and device, computer equipment and storage medium
CN116246033A (en) Rapid semantic map construction method for unstructured road
CN111951337B (en) Image detection target space positioning method and system
CN115507815A (en) Target ranging method and device and vehicle
CN117011481A (en) Method and device for constructing three-dimensional map, electronic equipment and storage medium
CN112767477A (en) Positioning method, positioning device, storage medium and electronic equipment
CN115131273A (en) Information processing method, ranging method and device
CN114004957A (en) Augmented reality picture generation method, device, equipment and storage medium
CN118397588B (en) Camera scene analysis method, system, equipment and medium for intelligent driving automobile

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant