CN112598668A - Defect identification method and device based on three-dimensional image and electronic equipment - Google Patents
Defect identification method and device based on three-dimensional image and electronic equipment Download PDFInfo
- Publication number
- CN112598668A CN112598668A CN202110226697.4A CN202110226697A CN112598668A CN 112598668 A CN112598668 A CN 112598668A CN 202110226697 A CN202110226697 A CN 202110226697A CN 112598668 A CN112598668 A CN 112598668A
- Authority
- CN
- China
- Prior art keywords
- target object
- point cloud
- cloud data
- dimensional
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000000034 method Methods 0.000 title claims abstract description 49
- 230000007547 defect Effects 0.000 title claims abstract description 43
- 238000004364 calculation method Methods 0.000 claims abstract description 31
- 239000011159 matrix material Substances 0.000 claims abstract description 30
- 230000000007 visual effect Effects 0.000 claims description 32
- 238000004891 communication Methods 0.000 claims description 6
- 230000011218 segmentation Effects 0.000 claims description 6
- 238000012545 processing Methods 0.000 abstract description 9
- 238000004590 computer program Methods 0.000 description 13
- 230000004913 activation Effects 0.000 description 9
- 238000010586 diagram Methods 0.000 description 7
- 230000006870 function Effects 0.000 description 7
- 238000011156 evaluation Methods 0.000 description 5
- 238000001514 detection method Methods 0.000 description 4
- 230000003287 optical effect Effects 0.000 description 4
- 230000008569 process Effects 0.000 description 4
- 238000011161 development Methods 0.000 description 3
- 230000018109 developmental process Effects 0.000 description 3
- 238000003708 edge detection Methods 0.000 description 3
- 230000001276 controlling effect Effects 0.000 description 2
- 238000007781 pre-processing Methods 0.000 description 2
- 230000000644 propagated effect Effects 0.000 description 2
- 230000001133 acceleration Effects 0.000 description 1
- 239000003990 capacitor Substances 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000005516 engineering process Methods 0.000 description 1
- 230000005284 excitation Effects 0.000 description 1
- 230000000977 initiatory effect Effects 0.000 description 1
- 239000004973 liquid crystal related substance Substances 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 239000013307 optical fiber Substances 0.000 description 1
- 230000001105 regulatory effect Effects 0.000 description 1
- 239000004065 semiconductor Substances 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
Images
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/04—Indexing scheme for image data processing or generation, in general involving 3D image data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Quality & Reliability (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- Image Analysis (AREA)
Abstract
The embodiment of the disclosure provides a defect identification method, a device and electronic equipment based on a three-dimensional image, belonging to the technical field of data processing, wherein the method comprises the following steps: acquiring a three-dimensional point cloud data set shot by an unmanned aerial vehicle; determining a projection contour length sequence of n target objects in the horizontal direction based on the size of a three-dimensional target object of the target objects described by the three-dimensional point cloud data set to form a first feature vector; forming a second eigenvector based on eigenvalues of the n axial profile eigenvalue matrices; taking the first feature vector and the second feature vector as a horizontal vector and a vertical vector respectively to perform multiplication calculation to obtain a feature matrix of a target object in the three-dimensional point cloud data set; based on the results of the matching calculations, the type of target object and corresponding defects in the three-dimensional point cloud data set are determined. Through the treatment scheme of the present disclosure, the invaded foreign body can be effectively detected and monitored.
Description
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a defect identification method and apparatus based on a three-dimensional image, and an electronic device.
Background
With the rapid development of Chinese economy and the acceleration of urbanization process, railway transportation becomes an important transportation tool, and is closely related to daily travel of urban residents on commuting, tourism and business people.
The rail foreign matter detection means measures for checking foreign matters in a rail area to ensure the running safety of a train. According to the relevant regulations of train safe running in China, foreign matters which harm normal running of trains cannot be found in railway line safety protection areas and adjacent areas thereof, so that property loss and safety accidents caused by collision of foreign matters due to untimely braking and overlong braking distance can be avoided because drivers can see the foreign matters in a short distance and then brake.
In recent years, along with the continuous development of the national unmanned aerial vehicle industry, unmanned aerial vehicles are applied more and more, and rail foreign matter detection is carried out in real time through aerial images of the unmanned aerial vehicles, so that the development is long-standing.
How to get the quick discernment defect of image based on unmanned aerial vehicle aerial photograph, become the problem that needs to solve.
Disclosure of Invention
In view of this, the embodiments of the present disclosure provide a defect identification method and apparatus based on a three-dimensional image, and an unmanned aerial vehicle, so as to at least partially solve the problems in the prior art.
In a first aspect, an embodiment of the present disclosure provides a defect identification method based on a three-dimensional image, including:
acquiring a three-dimensional point cloud data set shot by an unmanned aerial vehicle;
determining a projection contour length sequence of n target objects in the horizontal direction based on the size of a three-dimensional target object of the target objects described by the three-dimensional point cloud data set to form a first feature vector;
sequentially and axially rotating for n times by taking a central point of the target object three-dimensional model as a center and 360/n degrees as a stepping angle in the vertical direction of the target object three-dimensional model to form n axial profile characteristic matrixes, and forming a second characteristic vector based on characteristic values of the n axial profile characteristic matrixes, wherein n is a preset segmentation numerical value;
taking the first feature vector and the second feature vector as a horizontal vector and a vertical vector respectively to perform multiplication calculation to obtain a feature matrix of a target object in the three-dimensional point cloud data set;
and performing matching calculation on the characteristic matrix and an existing model matrix in a model base, and determining the type and corresponding defects of the target object in the three-dimensional point cloud data set based on the result of the matching calculation.
According to a specific implementation manner of the embodiment of the present disclosure, the acquiring a three-dimensional point cloud data set shot by an unmanned aerial vehicle includes:
and acquiring a shot three-dimensional point cloud data set from the unmanned aerial vehicle end through a preset communication link channel.
According to a specific implementation manner of the embodiment of the present disclosure, the determining a length sequence of projection outlines of n target objects in a horizontal direction based on a size of a three-dimensional target object of the target objects described by the three-dimensional point cloud data set includes:
acquiring a central point of the projection profile in the horizontal direction and a longest connecting line of the projection profile on the central point;
dividing the longest connecting line into n +1 equal parts, and calculating the heights of n equal division points in the vertical direction of the longest connecting line;
taking the heights of the n bisector points in the vertical direction of the longest connecting line as n elements in the first feature vector.
According to a specific implementation manner of the embodiment of the present disclosure, the forming n axial profile feature matrices by sequentially axially rotating n times with a center point of the target object three-dimensional model as a center and 360/n degrees as a stepping angle includes:
forming n axial profile surfaces by taking the vertical surface where the longest connecting line is positioned as an initial rotating surface and taking 360/n degrees as a stepping angle;
and forming n axial contour characteristic matrixes by taking the relative position coordinates of the n axial contour surfaces in the horizontal direction and the vertical direction as elements.
According to a specific implementation manner of the embodiment of the present disclosure, the determining the type and the corresponding defect of the target object in the three-dimensional point cloud data set based on the result of the matching calculation includes:
judging whether the matching value of the feature matrix and an existing model matrix in a model base is larger than a preset value or not;
if not, determining that the target object is a foreign object;
and determining the defect type corresponding to the foreign object based on the length of the longest connecting line of the foreign object.
According to a specific implementation manner of the embodiment of the disclosure, before the acquiring of the three-dimensional point cloud data set shot by the unmanned aerial vehicle, the method further includes
Controlling an unmanned aerial vehicle to carry out shooting flight operation on a target object existing on a preset route according to the preset route;
acquiring the current position information of the unmanned aerial vehicle in real time by using positioning equipment arranged on the unmanned aerial vehicle;
acquiring a target object on a preset route under the current position based on image acquisition equipment arranged above the unmanned aerial vehicle to form a first image;
acquiring a second image matched with current position information, and performing differential matching on the first image and the second image to form a third image;
and identifying a target in the third image to form a target object set, collecting point cloud data formed by each target object in the target object set in real time to form a point cloud data sequence, removing the point cloud data with a spatial included angle larger than a preset included angle in the point cloud data sequence, and forming a three-dimensional point cloud data set based on current position information.
According to a specific implementation manner of the embodiment of the present disclosure, the acquiring a target object on a preset route at a current position to form a first image includes:
respectively acquiring a left visual frequency frame and a right visual frequency frame by utilizing a left eye camera and a right eye camera on the image acquisition equipment;
calculating the depth value of the target object acquired under the current visual field based on the left visual frequency frame and the right visual frequency frame;
when the depth value is smaller than a preset depth value, discarding the left eye video frame and the right eye video frame acquired at the current moment;
and when the depth value is larger than a preset depth value, generating the first image based on a left visual frequency frame and a right visual frequency frame.
According to a specific implementation manner of the embodiment of the present disclosure, the acquiring point cloud data formed by each target object in a target object set in real time includes:
acquiring the plane position coordinates of the identified target object in the third image;
determining a scanning angle of the laser radar based on the plane position coordinate and the current height value of the unmanned aerial vehicle;
performing radar data acquisition on the identified target object based on the scanning angle;
based on the collected radar data, point cloud data is formed relating to the identified target object.
In a second aspect, an embodiment of the present disclosure provides a defect identification apparatus based on a three-dimensional image, including:
the acquisition module acquires a three-dimensional point cloud data set shot by the unmanned aerial vehicle;
the determining module is used for determining a projection contour length sequence of n target objects in the horizontal direction based on the size of a three-dimensional target object of the target objects described by the three-dimensional point cloud data set to form a first feature vector;
the forming module is used for sequentially axially rotating for n times by taking a central point of the target object three-dimensional model as a center and 360/n degrees as a stepping angle in the vertical direction of the target object three-dimensional model to form n axial profile characteristic matrixes, and forming a second characteristic vector based on characteristic values of the n axial profile characteristic matrixes, wherein n is a preset segmentation numerical value;
the calculation module is used for performing multiplication calculation on the first characteristic vector and the second characteristic vector as a horizontal vector and a vertical vector respectively to obtain a characteristic matrix of a target object in the three-dimensional point cloud data set;
and the execution module is used for performing matching calculation on the characteristic matrix and an existing model matrix in a model base, and determining the type and corresponding defects of the target object in the three-dimensional point cloud data set based on the result of the matching calculation.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, where the electronic device includes:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method for identifying defects based on three-dimensional images of the first aspect or any implementation manner of the first aspect.
In a fourth aspect, the disclosed embodiments also provide a non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the method for defect identification based on three-dimensional images in the first aspect or any implementation manner of the first aspect.
In a fifth aspect, the present disclosure also provides a computer program product, which includes a computer program stored on a non-transitory computer-readable storage medium, where the computer program includes program instructions, and when the program instructions are executed by a computer, the computer executes the three-dimensional image-based defect identification method in the foregoing first aspect or any implementation manner of the first aspect.
The defect identification scheme based on the three-dimensional image in the embodiment of the disclosure comprises the steps of obtaining a three-dimensional point cloud data set shot by an unmanned aerial vehicle; determining a projection contour length sequence of n target objects in the horizontal direction based on the size of a three-dimensional target object of the target objects described by the three-dimensional point cloud data set to form a first feature vector; sequentially and axially rotating for n times by taking a central point of the target object three-dimensional model as a center and 360/n degrees as a stepping angle in the vertical direction of the target object three-dimensional model to form n axial profile characteristic matrixes, and forming a second characteristic vector based on characteristic values of the n axial profile characteristic matrixes; taking the first feature vector and the second feature vector as a horizontal vector and a vertical vector respectively to perform multiplication calculation to obtain a feature matrix of a target object in the three-dimensional point cloud data set; and performing matching calculation on the characteristic matrix and an existing model matrix in a model base, and determining the type and corresponding defects of the target object in the three-dimensional point cloud data set based on the result of the matching calculation. By the processing scheme, the efficiency of defect identification based on the three-dimensional image is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings needed to be used in the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present disclosure, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart of a defect identification method based on a three-dimensional image according to an embodiment of the present disclosure;
FIG. 2 is a flowchart of another defect identification method based on three-dimensional images according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of another defect identification method based on three-dimensional images according to an embodiment of the present disclosure;
FIG. 4 is a flowchart of another defect identification method based on three-dimensional images according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a defect identification apparatus based on a three-dimensional image according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
The embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
The embodiments of the present disclosure are described below with specific examples, and other advantages and effects of the present disclosure will be readily apparent to those skilled in the art from the disclosure in the specification. It is to be understood that the described embodiments are merely illustrative of some, and not restrictive, of the embodiments of the disclosure. The disclosure may be embodied or carried out in various other specific embodiments, and various modifications and changes may be made in the details within the description without departing from the spirit of the disclosure. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present disclosure, and the drawings only show the components related to the present disclosure rather than the number, shape and size of the components in actual implementation, and the type, amount and ratio of the components in actual implementation may be changed arbitrarily, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
The embodiment of the disclosure provides a defect identification method based on a three-dimensional image. The defect identification method based on three-dimensional images provided by the embodiment can be executed by a computing device, the computing device can be implemented as software, or implemented as a combination of software and hardware, and the computing device can be integrally arranged in a server, a client and the like.
Referring to fig. 1, a defect identification method based on a three-dimensional image in an embodiment of the present disclosure may include the following steps:
s101, acquiring a three-dimensional point cloud data set shot by an unmanned aerial vehicle.
The unmanned aerial vehicle can be set to shoot defects according to a preset route, a three-dimensional point cloud data set can be obtained through shot data, and all target objects contained on the preset route (for example, a track route) are contained in the three-dimensional point cloud data set.
The point cloud data set contains spatial position information of the target object in a three-dimensional space, and therefore the target object shot by the unmanned aerial vehicle can be calculated and analyzed based on the three-dimensional point cloud data set.
S102, determining a projection contour length sequence of n target objects in the horizontal direction based on the size of a three-dimensional target object of the target objects described by the three-dimensional point cloud data set, and forming a first feature vector.
Specifically, a central point of the projection profile in the horizontal direction and a longest connecting line of the projection profile on the central point may be obtained, the longest connecting line is equally divided by n +1, heights of n equally divided points in the vertical direction of the longest connecting line are calculated, and finally the heights of the n equally divided points in the vertical direction of the longest connecting line are used as n elements in the first eigenvector, so as to obtain the first eigenvector.
S103, in the vertical direction of the target object three-dimensional model, sequentially and axially rotating for n times by taking the central point of the target object three-dimensional model as the center and 360/n degrees as a stepping angle to form n axial profile feature matrixes, and forming a second feature vector based on feature values of the n axial profile feature matrixes, wherein n is a preset segmentation numerical value.
Specifically, n axial profile surfaces can be formed by taking a vertical surface where the longest connecting line is located as an initial rotating surface and taking 360/n degrees as a stepping angle; and forming n axial contour characteristic matrixes by taking the relative position coordinates of the n axial contour surfaces in the horizontal direction and the vertical direction as elements. And finally obtaining a second eigenvector by calculating eigenvalues of the n eigenvectors.
And S104, taking the first feature vector and the second feature vector as a horizontal vector and a vertical vector respectively to perform multiplication calculation to obtain a feature matrix of the target object in the three-dimensional point cloud data set.
And S105, performing matching calculation on the feature matrix and the existing model matrix in the model base, and determining the type and the corresponding defects of the target object in the three-dimensional point cloud data set based on the result of the matching calculation.
Specifically, whether the matching value of the feature matrix and an existing model matrix in a model base is greater than a preset value or not can be judged; if not, determining that the target object is a foreign object; and determining the defect type corresponding to the foreign object based on the length of the longest connecting line of the foreign object.
By the method, the foreign object can be judged and detected quickly.
According to a specific implementation manner of the embodiment of the present disclosure, the acquiring a three-dimensional point cloud data set shot by an unmanned aerial vehicle includes: and acquiring a shot three-dimensional point cloud data set from the unmanned aerial vehicle end through a preset communication link channel.
Referring to fig. 2, according to a specific implementation manner of the embodiment of the present disclosure, the determining a length sequence of projection outlines of n target objects in a horizontal direction based on a size of a three-dimensional target object of the target objects described by the three-dimensional point cloud data set includes:
s201, acquiring a central point of the projection profile in the horizontal direction and a longest connecting line of the projection profile on the central point;
s202, dividing the longest connecting line into n +1 equal parts, and calculating the heights of n equal division points in the vertical direction of the longest connecting line;
and S203, taking the heights of the n equally-divided points in the vertical direction of the longest connecting line as n elements in the first feature vector.
Referring to fig. 3, according to a specific implementation manner of the embodiment of the present disclosure, the forming n axial profile feature matrices by sequentially axially rotating n times with a central point of the target object three-dimensional model as a center and 360/n degrees as a stepping angle includes:
s301, forming n axial profile surfaces by taking a vertical surface where the longest connecting line is located as an initial rotating surface and taking 360/n degrees as a stepping angle;
s302, relative position coordinates in the horizontal direction and the vertical direction of the n axial contour surfaces are taken as elements to form n axial contour characteristic matrixes.
Referring to fig. 4, according to a specific implementation manner of the embodiment of the present disclosure, the determining the type of the target object and the corresponding defect in the three-dimensional point cloud data set based on the result of the matching calculation includes:
s401, judging whether the matching value of the feature matrix and an existing model matrix in a model base is larger than a preset value or not;
s402, if not, determining that the target object is a foreign object;
s403, determining the defect type corresponding to the foreign object based on the length of the longest connecting line of the foreign object.
According to a specific implementation manner of the embodiment of the present disclosure, before the acquiring the three-dimensional point cloud data set shot by the unmanned aerial vehicle, the method further includes: controlling an unmanned aerial vehicle to carry out shooting flight operation on a target object existing on a preset route according to the preset route; acquiring the current position information of the unmanned aerial vehicle in real time by using positioning equipment arranged on the unmanned aerial vehicle; acquiring a target object on a preset route under the current position based on image acquisition equipment arranged above the unmanned aerial vehicle to form a first image; acquiring a second image matched with current position information, and performing differential matching on the first image and the second image to form a third image; and identifying a target in the third image to form a target object set, collecting point cloud data formed by each target object in the target object set in real time to form a point cloud data sequence, removing the point cloud data with a spatial included angle larger than a preset included angle in the point cloud data sequence, and forming a three-dimensional point cloud data set based on current position information.
According to a specific implementation manner of the embodiment of the present disclosure, the acquiring a target object on a preset route at a current position to form a first image includes:
respectively acquiring a left visual frequency frame and a right visual frequency frame by utilizing a left eye camera and a right eye camera on the image acquisition equipment;
calculating the depth value of the target object acquired under the current visual field based on the left visual frequency frame and the right visual frequency frame;
when the depth value is smaller than a preset depth value, discarding the left eye video frame and the right eye video frame acquired at the current moment;
and when the depth value is larger than a preset depth value, generating the first image based on a left visual frequency frame and a right visual frequency frame.
According to a specific implementation manner of the embodiment of the present disclosure, the acquiring point cloud data formed by each target object in a target object set in real time includes:
acquiring the plane position coordinates of the identified target object in the third image;
determining a scanning angle of the laser radar based on the plane position coordinate and the current height value of the unmanned aerial vehicle;
performing radar data acquisition on the identified target object based on the scanning angle;
based on the collected radar data, point cloud data is formed relating to the identified target object.
As an optional mode, the unmanned aerial vehicle may be controlled to perform shooting and flying operations on the target object existing on the preset route according to the preset route.
Unmanned aerial vehicle can be many rotor crafts, also can be the aircraft of other types, and unmanned aerial vehicle communicates through wired or wireless mode and ground control terminal, through one or more instructions that ground control terminal set up, unmanned aerial vehicle shoots the operation according to predetermineeing the route to predetermineeing the target. The preset route can be a railway track traffic route or other routes needing to be patrolled.
As an optional mode, the positioning device arranged on the unmanned aerial vehicle can be utilized to acquire the current position information of the unmanned aerial vehicle in real time.
The positioning device is used for acquiring the current position information of the unmanned aerial vehicle in real time, and comprises a GPS module and an RTK module, the current position information of the unmanned aerial vehicle can be acquired in real time through the GPS module and the RTK module, and whether the unmanned aerial vehicle is patrolled and examined on a preset route or not can be judged through acquiring the current position information.
As an alternative, the target object on the preset route at the current position may be acquired based on an image acquisition device disposed above the unmanned aerial vehicle, so as to form the first image.
Image acquisition equipment is used for gathering the image on patrolling and examining the route, image acquisition equipment set up in on the unmanned aerial vehicle for gather under the current position and form first image.
As one case, the image capturing apparatus is a binocular camera, and the image capturing apparatus includes a left eye camera for capturing left eye video frames based on a left eye angle and a right eye camera for capturing right eye video frames based on a right eye angle. The left eye camera and the right eye camera are spaced apart by a preset distance, so that the depth image information of the photographed target object can be determined based on the left eye camera and the right eye camera.
For this purpose, the image capturing apparatus is further provided with a calculating unit, the calculating unit calculates a depth value of the target object captured in the current field of view based on the left visual frequency frame and the right visual frequency frame, and the calculation of the depth value may be performed in various ways, which is not limited herein.
Whether a target object in an image is an object to be inspected can be judged by judging the depth value formed by the collected left eye video frame and the collected right eye video frame, and at the moment, when the depth value is smaller than a preset depth value, the left eye video frame and the right eye video frame collected at the current moment are abandoned, so that the occupation of system resources can be reduced; and when the depth value is larger than a preset depth value, generating an activation signal after generating the first image based on the left visual frequency frame and the right visual frequency frame so as to start the laser radar based on the activation signal. Through restart laser radar after the activation signal, can restart laser radar when needing laser radar to shoot to further save laser radar to the consumption of system resource.
As an alternative, a second image matched with the current position information may be acquired, and a third image may be formed after the first image and the second image are subjected to differential matching.
In order to further improve the efficiency of image processing, a storage unit may be disposed in the image capturing device, where the storage unit is configured to store in advance a video image related to the preset line, where the video image includes the second image, and the video image may be a video image of the preset line that is shot specially under the condition of no foreign object, and the video image includes location information of a shooting place of the video image. In the process of differential matching between the first image and the second image, the target object existing in the second image can be deleted from the first image in a target identification mode, and by means of the mode, the detection amount of the target object can be further reduced, so that the calculation of a system is reduced, and the target detection efficiency is improved.
As an optional mode, target identification may be performed on the third image to form a target object set, point cloud data formed by each target object in the target object set is collected in real time to form a point cloud data sequence, and after point cloud data with a spatial included angle larger than a preset included angle in the point cloud data sequence is removed, a three-dimensional point cloud data set is formed based on current position information.
And the laser radar further acquires point cloud data of the target object after detecting the target object. Specifically, the laser radar performs target identification on the third image to form a target object set, and by analyzing the objects in the target object set, it can be further determined whether foreign objects exist in the target object set.
Therefore, the laser radar can be used for collecting point cloud data formed by each target object in the target object set in real time to form a point cloud data sequence, wherein the point cloud data sequence is radar emission data of the target objects arranged according to a time sequence. The point cloud data includes spatial coordinate information of the target object determined based on the laser radar coordinates.
Due to the complexity of the environment, the point cloud data in the laser radar has noise data, and therefore, the noise data in the radar data needs to be rapidly filtered, so that the accuracy of the data in the point cloud data sequence is ensured.
Specifically, the position coordinates of any three continuous point cloud data in the point cloud data sequence can be obtained; forming two first line segments and two second line segments formed by two continuous point cloud data based on the position coordinates; and calculating an included angle between the first line segment and the second line segment to judge whether the spatial included angle in the point cloud data sequence is larger than a preset included angle or not, so that a three-dimensional point cloud data set is formed based on current position information after point cloud data with the spatial included angle larger than the preset included angle in the point cloud data sequence are removed.
Through the scheme in the embodiment, the data processing capacity is improved.
According to a specific implementation manner of the embodiment of the present disclosure, after the first image is generated based on the left visual frequency frame and the right visual frequency frame when the depth value is greater than the preset depth value, the method further includes:
an activation signal is generated to facilitate initiation of the lidar based on the activation signal.
According to a specific implementation manner of the embodiment of the present disclosure, before the acquiring a target object on a preset route at a current position based on an image acquisition device disposed above the unmanned aerial vehicle and forming a first image, the method further includes:
and in the image acquisition equipment of the unmanned aerial vehicle, storing the video images related to the preset line in advance, wherein the video images comprise the second images.
According to a specific implementation manner of the embodiment of the present disclosure, the performing target identification on the third image to form a target object set includes:
performing edge detection on the third image to form an edge detection result;
and searching a target object forming a closed curve in the edge detection result to form a target object set.
According to a specific implementation manner of the embodiment of the present disclosure, after removing point cloud data with a spatial included angle greater than a preset included angle in a point cloud data sequence, a three-dimensional point cloud data set is formed based on current position information, which includes:
acquiring position coordinates of any three continuous point cloud data in the point cloud data sequence;
forming two first line segments and two second line segments formed by two continuous point cloud data based on the position coordinates;
and calculating an included angle between the first line segment and the second line segment to judge whether the spatial included angle in the point cloud data sequence is larger than a preset included angle.
According to a specific implementation manner of the embodiment of the present disclosure, after the three-dimensional point cloud data set is formed based on the current position information, the method further includes:
and carrying out three-dimensional modeling on the point cloud data set, and judging whether foreign matters exist on the preset route or not by detecting the target objects existing after the three-dimensional modeling.
Besides, the unmanned aerial vehicle can also comprise positioning equipment, image acquisition equipment and a laser radar.
The image capturing apparatus includes:
a left eye camera to acquire a left eye video frame based on a left eye angle;
a right eye camera to acquire a right eye video frame based on a right eye angle;
the calculation unit is used for calculating the depth value of the target object acquired under the current visual field based on the left visual frequency frame and the right visual frequency frame;
when the depth value is smaller than a preset depth value, discarding the left eye video frame and the right eye video frame acquired at the current moment;
and when the depth value is larger than a preset depth value, generating an activation signal after generating the first image based on the left visual frequency frame and the right visual frequency frame so as to start the laser radar based on the activation signal.
According to a specific implementation manner of the embodiment of the present disclosure, the image capturing apparatus further includes:
and the image preprocessing module is used for carrying out image preprocessing operation on the acquired left eye video frame and the acquired right eye video frame.
According to a specific implementation manner of the embodiment of the present disclosure, the image capturing apparatus further includes:
and the storage unit is used for storing the video images related to the preset line in advance, and the video images comprise the second images.
According to a specific implementation manner of the embodiment of the present disclosure, the image capturing apparatus further includes:
a power supply circuit including a first line and a second line; the first line generates a first output voltage of a fixed pre-stabilized voltage, starts a core function circuit and supplies power to the second line; the second line generates a pre-regulated voltage second output voltage required by an actual circuit;
the first circuit comprises a pre-reference end, a first differential amplifier, a first transistor, a second transistor, a third transistor, a first resistor and a second resistor;
the reference end generates a pre-reference end and inputs the pre-reference end to the negative input end of the first differential amplifier, and the output end of the reference end is connected with the grid end of the third transistor;
the source end of the third transistor is grounded, and the drain end of the third transistor is connected with the drain end of the first transistor;
the grid end of the first transistor is connected with the grid end of the second transistor, the drain end of the second transistor is connected with the first end of the second resistor, the second end of the second resistor is connected with the first end of the first resistor, and the second end of the first resistor is grounded; the positive input of the first differential amplifier is connected with the first end of the first resistor.
According to a specific implementation manner of the embodiment of the present disclosure, the laser radar includes:
the power supply input end is used for receiving a second output voltage output by the power supply circuit;
a comparator having a first input terminal coupled to the laser power supply, a second input terminal coupled to an input terminal of a transimpedance amplifier via a capacitor, and an output terminal;
the second input end is coupled with the activation signal and used for starting the laser radar under the excitation of the activation signal;
a switching device having a control terminal coupled to the output terminal of the comparator, a first terminal coupled to a second voltage source, and a second terminal coupled to the input or output terminal of the transimpedance amplifier.
According to a specific implementation manner of the embodiment of the present disclosure, the laser radar further includes:
and the target recognition module is used for carrying out target recognition in the third image to form a target object set.
According to a specific implementation manner of the embodiment of the present disclosure, the lidar is further configured to:
acquiring the plane position coordinates of the identified target object in the third image;
determining a scanning angle of the laser radar based on the plane position coordinates and the current height value of the unmanned aerial vehicle;
performing radar data acquisition on the identified target object based on the scanning angle;
based on the collected radar data, point cloud data is formed relating to the identified target object.
According to a specific implementation manner of the embodiment of the present disclosure, the lidar is further configured to:
acquiring position coordinates of any three continuous point cloud data in the point cloud data sequence;
forming two first line segments and two second line segments formed by two continuous point cloud data based on the position coordinates;
and calculating an included angle between the first line segment and the second line segment to judge whether the spatial included angle in the point cloud data sequence is larger than a preset included angle.
The disclosed embodiments also provide a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the three-dimensional image-based defect identification method in the foregoing method embodiments.
Corresponding to the above embodiment, referring to fig. 5, the present embodiment further discloses a defect identification apparatus 50 based on three-dimensional images, including:
the acquisition module 501 is used for acquiring a three-dimensional point cloud data set shot by an unmanned aerial vehicle;
a determining module 502, configured to determine a projection contour length sequence of n target objects in a horizontal direction based on a size of a three-dimensional target object of the target objects described by the three-dimensional point cloud data set, so as to form a first feature vector;
a forming module 503, configured to sequentially axially rotate n times in the vertical direction of the target object three-dimensional model by taking a central point of the target object three-dimensional model as a center and taking 360/n degrees as a stepping angle, so as to form n axial profile feature matrices, and form a second feature vector based on feature values of the n axial profile feature matrices, where n is a preset segmentation numerical value;
a calculating module 504, configured to perform multiplication calculation on the first feature vector and the second feature vector as a horizontal vector and a vertical vector, respectively, to obtain a feature matrix of a target object in the three-dimensional point cloud data set;
and the execution module 505 is configured to perform matching calculation on the feature matrix and a model matrix already existing in a model library, and determine the type and corresponding defects of the target object in the three-dimensional point cloud data set based on a result of the matching calculation.
For parts not described in detail in this embodiment, reference is made to the contents described in the above method embodiments, which are not described again here.
Referring to fig. 6, an embodiment of the present disclosure also provides an electronic device 60, including:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method for defect identification based on three-dimensional images of the method embodiments described above.
The disclosed embodiments also provide a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the method for defect identification based on three-dimensional images in the aforementioned method embodiments.
Referring now to FIG. 6, a schematic diagram of an electronic device 60 suitable for use in implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, the electronic device 60 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 60 are also stored. The processing device 601, the ROM602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 60 to communicate with other devices wirelessly or by wire to exchange data. While the figures illustrate an electronic device 60 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring at least two internet protocol addresses; sending a node evaluation request comprising the at least two internet protocol addresses to node evaluation equipment, wherein the node evaluation equipment selects the internet protocol addresses from the at least two internet protocol addresses and returns the internet protocol addresses; receiving an internet protocol address returned by the node evaluation equipment; wherein the obtained internet protocol address indicates an edge node in the content distribution network.
Alternatively, the computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receiving a node evaluation request comprising at least two internet protocol addresses; selecting an internet protocol address from the at least two internet protocol addresses; returning the selected internet protocol address; wherein the received internet protocol address indicates an edge node in the content distribution network.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. The name of a unit does not, in some cases, constitute a limitation of the unit itself; for example, the first acquiring unit may also be described as a "unit for acquiring at least two internet protocol addresses".
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof.
The above description is only for the specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present disclosure should be covered within the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.
Claims (10)
1. A defect identification method based on a three-dimensional image, characterized by comprising the following steps:
acquiring a three-dimensional point cloud data set captured by an unmanned aerial vehicle;
determining a sequence of n projection contour lengths of a target object in the horizontal direction, based on the three-dimensional size of the target object described by the three-dimensional point cloud data set, to form a first feature vector;
rotating axially n times in sequence, in the vertical direction of the three-dimensional model of the target object, about the central point of the model and with 360/n degrees as the step angle, to form n axial profile feature matrices, and forming a second feature vector based on the characteristic values of the n axial profile feature matrices, wherein n is a preset segmentation value;
multiplying the first feature vector and the second feature vector, taken as a horizontal vector and a vertical vector respectively, to obtain a feature matrix of the target object in the three-dimensional point cloud data set; and
matching the feature matrix against the existing model matrices in a model library, and determining the type of the target object and the corresponding defect in the three-dimensional point cloud data set based on the result of the matching calculation.
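By way of illustration only, the pipeline of claim 1 reduces to an outer product of the two feature vectors followed by a matrix match against a model library. The following is a minimal sketch under assumptions of ours: a normalised-correlation match score and a plain dictionary as the model library, neither of which is fixed by the claims.

```python
import numpy as np

def build_feature_matrix(first_vec: np.ndarray, second_vec: np.ndarray) -> np.ndarray:
    """Combine the horizontal (first) and vertical (second) feature vectors."""
    # Taking the second vector as a column and the first as a row, their
    # product is the outer product: an n x n feature matrix.
    return np.outer(second_vec, first_vec)

def match_against_library(feature: np.ndarray,
                          model_library: dict[str, np.ndarray]) -> tuple[str, float]:
    """Return the library model that best matches `feature` and its score."""
    best_name, best_score = "", -1.0
    for name, model in model_library.items():
        # Assumed matching metric: normalised correlation in [-1, 1].
        score = float(np.sum(feature * model) /
                      (np.linalg.norm(feature) * np.linalg.norm(model) + 1e-12))
        if score > best_score:
            best_name, best_score = name, score
    return best_name, best_score
```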
2. The method of claim 1, wherein the acquiring of the three-dimensional point cloud data set captured by the unmanned aerial vehicle comprises:
acquiring the captured three-dimensional point cloud data set from the unmanned aerial vehicle end through a preset communication link channel.
3. The method of claim 1, wherein determining the sequence of n projection contour lengths of the target object in the horizontal direction, based on the three-dimensional size of the target object described by the three-dimensional point cloud data set, comprises:
acquiring the central point of the projection contour in the horizontal direction and the longest connecting line of the projection contour through the central point;
dividing the longest connecting line into n+1 equal parts, and calculating the heights, in the vertical direction of the longest connecting line, at the n equal division points; and
taking the heights at the n equal division points in the vertical direction of the longest connecting line as the n elements of the first feature vector.
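A hedged sketch of this first-feature-vector construction follows. Approximating the longest connecting line as the chord from the projection centroid to its farthest contour point (mirrored through the centroid), and reading the local height as the maximum z among the k nearest projected neighbours, are assumptions of this sketch, not requirements of the claim.

```python
import numpy as np

def first_feature_vector(points: np.ndarray, n: int, k: int = 16) -> np.ndarray:
    """points: (m, 3) point cloud of the target object; returns n heights."""
    xy, z = points[:, :2], points[:, 2]
    centre = xy.mean(axis=0)                       # centre of the projection
    far = xy[np.argmax(np.linalg.norm(xy - centre, axis=1))]
    # Longest connecting line through the centre, approximated as the chord
    # from the farthest contour point through the centre to its mirror image.
    start, end = far, 2 * centre - far
    ts = np.arange(1, n + 1) / (n + 1)             # n interior division points
    samples = start[None, :] + ts[:, None] * (end - start)[None, :]
    heights = np.empty(n)
    for i, p in enumerate(samples):
        nearest = np.argsort(np.linalg.norm(xy - p, axis=1))[:k]
        heights[i] = z[nearest].max()              # local height at the point
    return heights
```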
4. The method of claim 3, wherein rotating axially n times in sequence, about the central point of the three-dimensional model of the target object and with 360/n degrees as the step angle, to form the n axial profile feature matrices comprises:
forming n axial profile surfaces by taking the vertical plane containing the longest connecting line as the initial rotating plane and 360/n degrees as the step angle; and
forming the n axial profile feature matrices by taking the relative position coordinates of the n axial profile surfaces in the horizontal and vertical directions as elements.
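One way to realise these axial profiles is sketched below. The slab tolerance `eps` used to select points near each rotated plane, and the choice of the largest singular value as the "characteristic value" of each profile matrix, are assumptions of ours; the claims leave both open.

```python
import numpy as np

def axial_profile_matrices(points: np.ndarray, n: int, eps: float = 0.01) -> list[np.ndarray]:
    """Slice the cloud with n vertical planes stepped 360/n degrees apart."""
    local = points - points.mean(axis=0)                     # centre the model
    matrices = []
    for k in range(n):
        theta = np.deg2rad(k * 360.0 / n)
        normal = np.array([-np.sin(theta), np.cos(theta)])   # in-plane XY normal
        near = np.abs(local[:, :2] @ normal) < eps           # points near plane
        direction = np.array([np.cos(theta), np.sin(theta)])
        horiz = local[near][:, :2] @ direction               # horizontal coordinate
        vert = local[near][:, 2]                             # vertical coordinate
        matrices.append(np.column_stack([horiz, vert]))
    return matrices

def second_feature_vector(matrices: list[np.ndarray]) -> np.ndarray:
    # Assumed characteristic value: the largest singular value of each
    # (generally non-square) profile matrix; empty profiles contribute 0.
    return np.array([np.linalg.svd(m, compute_uv=False)[0] if m.size else 0.0
                     for m in matrices])
```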
5. The method of claim 4, wherein determining the type of the target object and the corresponding defect in the three-dimensional point cloud data set based on the result of the matching calculation comprises:
judging whether the matching value between the feature matrix and an existing model matrix in the model library is greater than a preset value;
if not, determining that the target object is a foreign object; and
determining the defect type corresponding to the foreign object based on the length of the longest connecting line of the foreign object.
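This decision rule can be sketched as below. The preset matching value of 0.8 and the length bands that map a foreign object's longest connecting line to a defect type are placeholder assumptions; the claim specifies neither.

```python
def classify(match_value: float, longest_line_m: float, preset: float = 0.8) -> str:
    """Assumed thresholds: preset match value and metre-scale length bands."""
    if match_value > preset:
        return "known object type, no foreign-object defect"
    # Below the preset value the target is a foreign object; the defect
    # type is looked up from the length of its longest connecting line.
    if longest_line_m < 0.1:
        return "foreign object: small debris"
    if longest_line_m < 1.0:
        return "foreign object: medium obstruction"
    return "foreign object: large obstruction"
```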
6. The method of claim 1, wherein, prior to acquiring the three-dimensional point cloud data set captured by the unmanned aerial vehicle, the method further comprises:
controlling the unmanned aerial vehicle to perform a capture flight operation, according to a preset route, on target objects existing on the preset route;
acquiring the current position information of the unmanned aerial vehicle in real time by using positioning equipment arranged on the unmanned aerial vehicle;
acquiring a target object on the preset route at the current position, based on image acquisition equipment arranged on the unmanned aerial vehicle, to form a first image;
acquiring a second image matched with the current position information, and performing differential matching on the first image and the second image to form a third image; and
identifying targets in the third image to form a target object set, collecting in real time the point cloud data formed by each target object in the target object set to form a point cloud data sequence, removing from the point cloud data sequence the point cloud data whose spatial included angle is larger than a preset included angle, and forming a three-dimensional point cloud data set based on the current position information.
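The acquisition chain of this claim can be sketched as follows. Pixel-wise absolute differencing with a threshold stands in for the claim's "differential matching", and measuring the spatial included angle against the downward sensor axis is an assumption; the claim defines neither operation precisely.

```python
import cv2
import numpy as np

def third_image(first: np.ndarray, second: np.ndarray, thresh: int = 30) -> np.ndarray:
    """Differential matching of the live frame against the reference image."""
    diff = cv2.absdiff(first, second)                # pixel-wise difference
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    return mask                                      # changed regions only

def filter_by_included_angle(points: np.ndarray, max_angle_deg: float) -> np.ndarray:
    """Drop points whose included angle to the downward axis exceeds the preset."""
    down = np.array([0.0, 0.0, -1.0])
    cosang = np.clip(points @ down / (np.linalg.norm(points, axis=1) + 1e-12),
                     -1.0, 1.0)
    return points[np.degrees(np.arccos(cosang)) <= max_angle_deg]
```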
7. The method according to claim 6, wherein acquiring the target object on the preset route at the current position to form the first image comprises:
acquiring a left-eye video frame and a right-eye video frame respectively, using the left-eye camera and the right-eye camera of the image acquisition equipment;
calculating the depth value of the target object acquired in the current field of view, based on the left-eye video frame and the right-eye video frame;
when the depth value is smaller than a preset depth value, discarding the left-eye video frame and the right-eye video frame acquired at the current moment; and
when the depth value is larger than the preset depth value, generating the first image based on the left-eye video frame and the right-eye video frame.
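The depth gate of this claim is sketched below under the standard pinhole-stereo relation depth = focal length × baseline / disparity; the patent does not state how the depth value is computed, so both the relation and the side-by-side composition of the first image are assumptions of this sketch.

```python
import numpy as np

def stereo_depth(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Standard stereo depth; guards against a zero disparity."""
    return focal_px * baseline_m / max(disparity_px, 1e-6)

def gate_frame_pair(left: np.ndarray, right: np.ndarray, disparity_px: float,
                    focal_px: float, baseline_m: float, preset_depth_m: float):
    depth = stereo_depth(disparity_px, focal_px, baseline_m)
    if depth < preset_depth_m:
        return None                       # discard the current frame pair
    return np.hstack([left, right])       # assumed form of the first image
```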
8. The method of claim 7, wherein collecting in real time the point cloud data formed by each target object in the target object set comprises:
acquiring the plane position coordinates of the identified target object in the third image;
determining a scanning angle of the laser radar based on the plane position coordinates and the current height value of the unmanned aerial vehicle;
performing radar data acquisition on the identified target object based on the scanning angle; and
forming, based on the collected radar data, the point cloud data relating to the identified target object.
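The scan-angle step follows from right-triangle geometry once the target's ground-plane offset from the nadir point and the current flight height are known; the conversion from image-plane pixel coordinates to metres is assumed to happen upstream and is not specified by the claim.

```python
import math

def scan_angle_deg(x_m: float, y_m: float, height_m: float) -> tuple[float, float]:
    """Laser-radar pointing angles toward a ground target offset (x, y) at height h."""
    azimuth = math.degrees(math.atan2(y_m, x_m))               # heading to target
    ground = math.hypot(x_m, y_m)                              # horizontal offset
    depression = math.degrees(math.atan2(height_m, ground))    # angle below horizon
    return azimuth, depression
```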
9. A defect identification apparatus based on a three-dimensional image, characterized by comprising:
an acquisition module for acquiring a three-dimensional point cloud data set captured by an unmanned aerial vehicle;
a determining module for determining a sequence of n projection contour lengths of a target object in the horizontal direction, based on the three-dimensional size of the target object described by the three-dimensional point cloud data set, to form a first feature vector;
a forming module for rotating axially n times in sequence, in the vertical direction of the three-dimensional model of the target object, about the central point of the model and with 360/n degrees as the step angle, to form n axial profile feature matrices, and forming a second feature vector based on the characteristic values of the n axial profile feature matrices, wherein n is a preset segmentation value;
a calculation module for multiplying the first feature vector and the second feature vector, taken as a horizontal vector and a vertical vector respectively, to obtain a feature matrix of the target object in the three-dimensional point cloud data set; and
an execution module for matching the feature matrix against the existing model matrices in a model library, and determining the type of the target object and the corresponding defect in the three-dimensional point cloud data set based on the result of the matching calculation.
10. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110226697.4A CN112598668B (en) | 2021-03-02 | 2021-03-02 | Defect identification method and device based on three-dimensional image and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112598668A true CN112598668A (en) | 2021-04-02 |
CN112598668B CN112598668B (en) | 2021-06-29 |
Family
ID=75207708
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110226697.4A Active CN112598668B (en) | 2021-03-02 | 2021-03-02 | Defect identification method and device based on three-dimensional image and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112598668B (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102003938A (en) * | 2010-10-11 | 2011-04-06 | 中国人民解放军信息工程大学 | Thermal state on-site detection method for large high-temperature forging |
US20140037194A1 (en) * | 2011-04-13 | 2014-02-06 | Unisantis Electronics Singapore Pte. Ltd. | Three-dimensional point cloud position data processing device, three-dimensional point cloud position data processing system, and three-dimensional point cloud position data processing method and program |
CN106053475A (en) * | 2016-05-24 | 2016-10-26 | 浙江工业大学 | Tunnel disease full-section dynamic rapid detection device based on active panoramic vision |
CN110574071A (en) * | 2017-01-27 | 2019-12-13 | Ucl商业有限公司 | Device, method and system for aligning 3D data sets |
CN106872476A (en) * | 2017-03-31 | 2017-06-20 | 武汉理工大学 | A kind of casting class workpiece surface quality detection method and system based on line-structured light |
CN110599449A (en) * | 2019-07-31 | 2019-12-20 | 众宏(上海)自动化股份有限公司 | Gear scanning algorithm for template matching and point cloud comparison |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112799422A (en) * | 2021-04-06 | 2021-05-14 | 众芯汉创(北京)科技有限公司 | Unmanned aerial vehicle flight control method and device for power inspection |
CN112799422B (en) * | 2021-04-06 | 2021-07-13 | 国网江苏省电力有限公司泰州供电分公司 | Unmanned aerial vehicle flight control method and device for power inspection |
CN114882024A (en) * | 2022-07-07 | 2022-08-09 | 深圳市信润富联数字科技有限公司 | Target object defect detection method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN112598668B (en) | 2021-06-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3505869B1 (en) | Method, apparatus, and computer readable storage medium for updating electronic map | |
CN109284801B (en) | Traffic indicator lamp state identification method and device, electronic equipment and storage medium | |
CN112712023B (en) | Vehicle type recognition method and system and electronic equipment | |
WO2020039937A1 (en) | Position coordinates estimation device, position coordinates estimation method, and program | |
CN112598668B (en) | Defect identification method and device based on three-dimensional image and electronic equipment | |
EP3706096A1 (en) | People-gathering analysis device, movement destination prediction creation device, people-gathering analysis system, vehicle, and people-gathering analysis program | |
CN111319560B (en) | Information processing system, program, and information processing method | |
CN112859109B (en) | Unmanned aerial vehicle panoramic image processing method and device and electronic equipment | |
CN109300322B (en) | Guideline drawing method, apparatus, device, and medium | |
CN115240154A (en) | Method, device, equipment and medium for extracting point cloud features of parking lot | |
US11461944B2 (en) | Region clipping method and recording medium storing region clipping program | |
CN112639822B (en) | Data processing method and device | |
CN112857254B (en) | Parameter measurement method and device based on unmanned aerial vehicle data and electronic equipment | |
CN114820777B (en) | Unmanned aerial vehicle three-dimensional data front-end processing method and device and unmanned aerial vehicle | |
CN112005275B (en) | System and method for point cloud rendering using video memory pool | |
CN112069899A (en) | Road shoulder detection method and device and storage medium | |
CN111383337B (en) | Method and device for identifying objects | |
CN113984109B (en) | Track detection data correction method and device and electronic equipment | |
JP7232727B2 (en) | Map data management device and map data management method | |
JP2020193956A5 (en) | | |
CN109145908A (en) | Vehicle positioning method, system, device, test equipment and storage medium | |
CN113962107A (en) | Method and device for simulating driving road section, electronic equipment and storage medium | |
CN111709354B (en) | Method and device for identifying target area, electronic equipment and road side equipment | |
CN115588180A (en) | Map generation method, map generation device, electronic apparatus, map generation medium, and program product | |
CN114136327A (en) | Automatic inspection method and system for recall ratio of dotted line segment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |
CP01 | Change in the name or title of a patent holder | Address after: 102206 room 503, building 6, No.97, Changping Road, Changping District, Beijing; Patentee after: Beijing Dacheng Guoce Technology Co.,Ltd. Address before: 102206 room 503, building 6, No.97, Changping Road, Changping District, Beijing; Patentee before: BEIJING DACHENG GUOCE SCIENCE AND TECHNOLOGY CO.,LTD. |