CN115096268B - Bridge damage depth detection method based on unmanned aerial vehicle aerial photography and ultrasonic detection - Google Patents
- Publication number: CN115096268B (application CN202210691357.3A)
- Authority: CN (China)
- Legal status: Active
Classifications
- G01C11/025 — Picture taking arrangements for photogrammetry or photographic surveying by scanning the object
- G01C11/36 — Videogrammetry
- B64C39/028 — Micro-sized aircraft
- G01N29/04 — Analysing solids by ultrasonic, sonic or infrasonic waves
- G01N29/4427 — Processing the detected response signal by comparison with stored values, e.g. threshold values
- G01N29/4436 — Processing the detected response signal by comparison with a reference signal
- G06N20/00 — Machine learning
- G06T7/001 — Industrial image inspection using an image reference approach
- G06T7/73 — Determining position or orientation of objects or cameras using feature-based methods
- G06V10/44 — Local feature extraction by analysis of parts of the pattern
- G06V10/75 — Organisation of the matching processes; coarse-fine approaches
- G06V10/764 — Recognition or understanding using classification, e.g. of video objects
- G06V20/17 — Terrestrial scenes taken from planes or by drones
- G01N2291/0232 — Glass, ceramics, concrete or stone
- G01N2291/0289 — Internal structure, e.g. defects, grain size, texture
- G06T2207/10004 — Still image; photographic image
- G06T2207/10016 — Video; image sequence
- G06T2207/20081 — Training; learning
- G06T2207/30132 — Masonry; concrete
- G06T2207/30184 — Infrastructure
- Y02T10/40 — Engine management systems
Abstract
The invention relates to the technical field of video image processing, and in particular to a bridge damage depth detection method based on unmanned aerial vehicle aerial photography and ultrasonic detection. An unmanned aerial vehicle carries a camera that captures images; the images are processed to extract any bridge damage feature map they contain. Whether the feature map is genuine is first judged against the detection information produced by an ultrasonic detector; if it passes, the result is further compared with the bridge design diagram, and only if it passes that comparison as well is the bridge damage information output. Automatic detection of bridge damage is thereby achieved, and because the feature map is verified against both the ultrasonic detector and the bridge design diagram, misjudgments are avoided and detection accuracy is improved.
Description
Technical Field
The invention relates to the technical field of video image processing, in particular to a bridge damage depth detection method based on unmanned aerial vehicle aerial photography and ultrasonic detection.
Background
A bridge is an important component of highways and railways and an important piece of basic infrastructure for economic and social development. Over its long working life a bridge develops various forms of damage, which pose serious potential hazards to its safety. To discover and repair damage in time and to keep track of the safety state of bridge members, bridges must be maintained regularly. During maintenance, damage information is gathered through various bridge inspections and surveys, the bridge's condition is analysed, problems are identified promptly, and a maintenance scheme is formulated; this is of great significance for the safe operation of the bridge.
Existing bridge defect detection mainly relies on on-site manual inspection, or on equipment such as a bridge inspection vehicle that carries personnel close to the structure so that defects can be checked visually. The space below a bridge is often occupied by a waterway, railway, highway or other traffic route, and bridges crossing such routes are often main spans. Under long-term loading, the bridge soffit is highly prone to concrete cracking, steel corrosion, concrete spalling and other defects, and how to inspect the underside of a bridge conveniently has long been a difficult problem in bridge inspection.
Because the underside of a bridge member usually offers no natural working plane, traditional inspection mostly uses a large bridge inspection vehicle for manual photography, or a camera mounted on a spliced upright pole, to obtain images of the bridge soffit and assess the bridge's safety. These traditional methods, however, have high cost, low efficiency and a low degree of automation.
Disclosure of Invention
To solve the problems in the prior art, the invention provides a bridge damage depth detection method based on unmanned aerial vehicle aerial photography and ultrasonic detection, addressing the technical problems of high detection cost and low efficiency.
A bridge damage depth detection method based on unmanned aerial vehicle aerial photography and ultrasonic detection comprises the following steps:
step 1: determining the environment of the bridge to be detected; determining, based on that environment, the unmanned aerial vehicle's inspection path and the mounting positions of the camera and of the ultrasonic detector; and ensuring that the output-end axes of the camera and the ultrasonic detector are parallel;
step 2: shooting and detecting the bridge on the determined inspection path through a camera and an ultrasonic detector which are mounted on the unmanned aerial vehicle;
step 3: preprocessing video data returned by a camera to obtain a picture data set to be detected;
step 4: performing feature extraction on the picture to be detected in the picture data set by adopting the trained bridge damage detection model to obtain a bridge damage feature map;
step 5: acquiring a picture to be detected corresponding to the bridge damage characteristic diagram, and acquiring shooting time and unmanned aerial vehicle positioning information corresponding to the shooting time based on the picture to be detected;
step 6: acquiring the detection information of the corresponding ultrasonic detector based on the shooting time and unmanned aerial vehicle positioning information obtained in step 5; judging whether the bridge damage feature map is true based on the ultrasonic detector's detection information at the same shooting time and unmanned aerial vehicle position; if true, executing step 7, and if false, storing the information in a database;
step 7: constructing a bridge damage depth map based on the detection information of the ultrasonic detector; obtaining the corresponding bridge design diagram and comparing it with the bridge damage depth map to judge whether the detected damage is true; if true, outputting the bridge damage depth map, and if false, storing the information in a database.
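The seven steps above can be sketched as a single verification pipeline. Everything below is an illustrative reconstruction, not code from the patent: the `Frame` record, the score and depth thresholds, and the dictionary-based stand-ins for the detection model, the ultrasonic log and the design diagram are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    timestamp: float        # shooting time (step 5)
    uav_position: tuple     # UAV positioning at that time (step 5)
    damage_score: float     # stand-in for the detection model's output (step 4)

def inspect_bridge(frames, ultrasonic_depth, design_features,
                   score_threshold=0.5, depth_threshold=5.0):
    """Steps 4-7: image detection, ultrasonic cross-check, design comparison.

    ultrasonic_depth: dict mapping uav_position -> measured depth (mm), step 6
    design_features:  set of positions where the design itself has a recess, step 7
    """
    confirmed, rejected = [], []
    for f in frames:
        if f.damage_score < score_threshold:
            continue                                      # no damage feature map
        depth = ultrasonic_depth.get(f.uav_position, 0.0)  # step 6 cross-check
        if depth <= depth_threshold:
            rejected.append((f.timestamp, "image false alarm"))
            continue
        if f.uav_position in design_features:              # step 7 design check
            rejected.append((f.timestamp, "designed feature"))
            continue
        confirmed.append((f.timestamp, f.uav_position, depth))
    return confirmed, rejected
```

The two-stage rejection mirrors the text: an image-only hit is discarded unless the ultrasonic depth confirms it, and an ultrasonic hit is discarded if the design diagram already calls for that geometry.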
According to the invention, an unmanned aerial vehicle carries a camera that records on-site video of the bridge; the video is converted into pictures, and features are extracted by the bridge damage detection model to obtain a bridge damage feature map. Because each bridge damage feature map corresponds to a picture taken from the on-site video, the exact shooting time is known; from the shooting time the unmanned aerial vehicle's position at that moment can be retrieved, and from that position the location on the real bridge shown in the picture containing the damage feature map can be determined. Automatic detection of bridge damage is thereby achieved.
Because image recognition alone is prone to misjudgment, the method adds detection by an ultrasonic detector: to judge whether a bridge damage feature map is true, it suffices to look up the ultrasonic detector's detection information at the corresponding moment.
However, even with ultrasonic detection, a feature of the bridge's own design may produce the same response as damage, so misjudgment is still possible; the original bridge design diagram is therefore added for comparison, further reducing the possibility of misjudgment.
In addition, by deploying several unmanned aerial vehicles and having each of them execute the above steps in turn, the invention can judge whether the final detection result is correct based on the steps executed by each unmanned aerial vehicle, further ensuring the accuracy of the detection result.
Preferably, the step 1 includes the steps of:
step 1.1: determining the environment of a bridge to be detected, and establishing a space bounding box of a three-dimensional model based on the determined environment;
step 1.2: determining unmanned aerial vehicle measuring points based on the established space bounding box of the three-dimensional model;
step 1.3: planning a measurement path of the unmanned aerial vehicle based on the determined unmanned aerial vehicle measurement points;
step 1.4: adjusting the unmanned aerial vehicle's inspection path and the mounting positions of the camera and of the ultrasonic detector based on the planned measurement path, and ensuring that the output-end axes of the camera and the ultrasonic detector are parallel.
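Steps 1.1-1.3 amount to sampling measurement points from the space bounding box and ordering them into a path. A minimal sketch, assuming the underside of the bounding box is the surface of interest and a simple boustrophedon ("lawn-mower") ordering; the function name and the spacing parameter are illustrative, not from the patent:

```python
def plan_measurement_points(bbox_min, bbox_max, spacing):
    """Generate measurement points on the underside of the bounding box
    (z = bbox_min[2]) on a regular grid, ordered row by row with alternating
    direction so the UAV never backtracks along a row."""
    x0, y0, z0 = bbox_min
    x1, y1, _ = bbox_max
    xs = [x0 + i * spacing for i in range(int((x1 - x0) / spacing) + 1)]
    ys = [y0 + j * spacing for j in range(int((y1 - y0) / spacing) + 1)]
    path = []
    for j, y in enumerate(ys):
        row = [(x, y, z0) for x in xs]
        if j % 2 == 1:
            row.reverse()          # alternate direction every other row
        path.extend(row)
    return path
```

A real planner would also respect obstacles and the detector's stand-off distance; the grid-plus-serpentine ordering is only the simplest path that visits every measurement point once.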
Preferably, the step 3 includes the steps of:
step 3.1: converting video data returned by the unmanned aerial vehicle into initial pictures at intervals of 0.1 second to obtain an initial picture data set;
step 3.2: preprocessing the initial pictures in the initial picture data set to obtain the picture data set to be detected.
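Extracting one picture every 0.1 second (step 3.1) reduces to choosing frame indices from the video's frame rate; a decoder such as OpenCV's `VideoCapture` would then seek to those indices. The index arithmetic can be sketched independently of any video library (function name illustrative):

```python
def frame_indices(fps, duration_s, interval_s=0.1):
    """Indices of the video frames to extract so that consecutive extracted
    frames are interval_s apart (0.1 s in step 3.1).

    fps: frames per second of the returned video
    duration_s: total video length in seconds
    """
    step = max(1, round(fps * interval_s))   # decoded frames per extraction
    total = int(fps * duration_s)            # total decoded frames
    return list(range(0, total, step))
```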
Preferably, the bridge damage detection model is trained based on a deep learning algorithm; difference analysis is performed between the picture to be detected and a standard picture, and a bridge damage feature map is extracted for any difference contour found in the picture to be detected;
the training steps of the bridge damage detection model are as follows:
building a bridge damage model based on the existing bridge damage map and a standard bridge map;
dividing the existing bridge damage map into a training set and a testing set;
training the bridge damage model on the training set to obtain a judgment threshold; whenever the judgment threshold is exceeded, the corresponding bridge damage feature map is extracted;
testing the bridge damage model based on the test set and the judgment threshold; once the test is passed, the establishment of the bridge damage model is complete.
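The training steps above boil down to scoring the difference between each picture and the standard picture, then choosing a judgment threshold that separates damaged from intact examples. A toy sketch under that reading; the mean-absolute-difference score and the midpoint threshold rule are assumptions, since the patent does not specify either:

```python
def difference_score(image, standard):
    """Mean absolute per-pixel difference between a picture to be detected
    and the standard picture (both given as flat lists of gray values)."""
    return sum(abs(a - b) for a, b in zip(image, standard)) / len(image)

def fit_threshold(damaged_scores, intact_scores):
    """Pick the judgment threshold halfway between the highest intact score
    and the lowest damaged score (assumes the two groups are separable on
    the training set; a real model would learn this boundary)."""
    return (max(intact_scores) + min(damaged_scores)) / 2
```

At test time a picture whose score exceeds the fitted threshold would have its difference contour extracted as a bridge damage feature map, matching the extraction rule in the text.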
Preferably, the step 6 includes the steps of:
step 6.1: acquiring detection information of a corresponding ultrasonic detector based on the shooting time and the unmanned aerial vehicle positioning information obtained in the step 5;
step 6.2: constructing a three-dimensional model based on the obtained ultrasonic detection information;
step 6.3: setting a depth threshold of ultrasonic detection information, determining a depth value of a detection surface based on the constructed three-dimensional model, comparing the depth value with the depth threshold, and outputting abnormal coordinates when the depth value exceeds the depth threshold;
step 6.4: judging whether the abnormal coordinates determined in step 6.3 are consistent with the coordinates of the bridge damage feature map; if they are consistent, the bridge damage feature map is judged true and step 7 is executed; if not, it is judged false and the information is stored in the database.
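Steps 6.3-6.4 can be sketched as two small functions: one that extracts abnormal coordinates from the ultrasonic depth map, and one that checks them against the feature map's coordinates. The grid representation and the one-cell tolerance are illustrative assumptions, not details given in the patent:

```python
def abnormal_coordinates(depth_map, depth_threshold):
    """Step 6.3: coordinates whose measured depth exceeds the threshold.
    depth_map: dict {(x, y): depth} built from the ultrasonic echoes."""
    return {xy for xy, d in depth_map.items() if d > depth_threshold}

def feature_map_is_true(feature_coords, depth_map, depth_threshold, tol=1):
    """Step 6.4: the damage feature map is judged true if there is at least
    one abnormal coordinate and every abnormal coordinate lies within tol
    grid cells of some feature-map coordinate."""
    abnormal = abnormal_coordinates(depth_map, depth_threshold)
    if not abnormal:
        return False
    return all(
        any(abs(ax - fx) <= tol and abs(ay - fy) <= tol
            for fx, fy in feature_coords)
        for ax, ay in abnormal
    )
```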
Further, the invention also comprises the following steps:
step 8: establishing a bridge damage degree identification model based on the original bridge data, the bridge damage data and the deep learning;
step 9: inputting the data of the bridge to be detected and the bridge damage depth into the trained bridge damage degree identification model for calculation, so as to obtain the damage degree of the bridge.
By establishing the bridge damage degree identification model, the method evaluates the damage degree of the bridge to be detected, so that the extent of the damage is ascertained and decision guidance is provided for subsequent maintenance work.
Preferably, each piece of original bridge data is combined with the corresponding bridge damage data into a data packet, and the data packets are divided into a training set and a testing set, which are used respectively to train and test the bridge damage degree identification model.
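The packet construction and split might look like the following sketch; the 80/20 ratio, the dictionary layout and the fixed shuffle seed are assumptions, as the text fixes none of them:

```python
import random

def make_packets(original_data, damage_data):
    """Pair each bridge's original data with its damage data into a packet."""
    return [{"original": o, "damage": d}
            for o, d in zip(original_data, damage_data)]

def split_packets(packets, train_ratio=0.8, seed=42):
    """Shuffle the packets deterministically, then divide them into a
    training set and a testing set."""
    shuffled = packets[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]
```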
The beneficial effects of the invention include:
according to the invention, the unmanned aerial vehicle is used for carrying the camera to shoot the field video of the bridge, converting the video into the picture, and extracting the characteristics of the bridge through the bridge damage detection model to obtain the bridge damage characteristic diagram, wherein the shot picture exists in the field video because the bridge damage characteristic diagram corresponds to the shot picture, so that the specific shooting time can be known, the current position of the unmanned aerial vehicle can be obtained according to the shooting time, and the position of the real bridge corresponding to the picture with the bridge damage characteristic diagram can be obtained based on the position information; thereby realizing automatic detection of bridge damage;
Because image recognition alone is prone to misjudgment, the method adds detection by an ultrasonic detector: to judge whether a bridge damage feature map is true, it suffices to look up the ultrasonic detector's detection information at the corresponding moment.
However, even with ultrasonic detection, a feature of the bridge's own design may produce the same response as damage, so misjudgment is still possible; the original bridge design diagram is therefore added for comparison, further reducing the possibility of misjudgment.
By establishing the bridge damage degree identification model, the method evaluates the damage degree of the bridge to be detected, so that the extent of the damage is ascertained and decision guidance is provided for subsequent maintenance work.
Drawings
FIG. 1 is a schematic flow chart of the present invention.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, are intended to be within the scope of the present application.
Embodiments of the present invention are described in further detail below with reference to FIG. 1:
a bridge damage depth detection method based on unmanned aerial vehicle aerial photography and ultrasonic detection comprises the following steps:
step 1: determining the environment of the bridge to be detected; determining, based on that environment, the unmanned aerial vehicle's inspection path and the mounting positions of the camera and of the ultrasonic detector; and ensuring that the output-end axes of the camera and the ultrasonic detector are parallel;
step 1.1: determining the environment of a bridge to be detected, and establishing a space bounding box of a three-dimensional model based on the determined environment;
step 1.2: determining unmanned aerial vehicle measuring points based on the established space bounding box of the three-dimensional model;
step 1.3: planning a measurement path of the unmanned aerial vehicle based on the determined unmanned aerial vehicle measurement points;
step 1.4: adjusting the unmanned aerial vehicle's inspection path and the mounting positions of the camera and of the ultrasonic detector based on the planned measurement path, and ensuring that the output-end axes of the camera and the ultrasonic detector are parallel.
Step 2: shooting and detecting the bridge on the determined inspection path through a camera and an ultrasonic detector which are mounted on the unmanned aerial vehicle;
step 3: preprocessing video data returned by a camera to obtain a picture data set to be detected;
step 3.1: converting video data returned by the unmanned aerial vehicle into initial pictures at intervals of 0.1 second to obtain an initial picture data set;
step 3.2: preprocessing the initial pictures in the initial picture data set to obtain the picture data set to be detected.
Step 4: performing feature extraction on the picture to be detected in the picture data set by adopting the trained bridge damage detection model to obtain a bridge damage feature map;
The bridge damage detection model is trained based on a deep learning algorithm; difference analysis is performed between the picture to be detected and a standard picture, and a bridge damage feature map is extracted for any difference contour found in the picture to be detected.
the training steps of the bridge damage detection model are as follows:
building a bridge damage model based on the existing bridge damage map and a standard bridge map;
dividing the existing bridge damage map into a training set and a testing set;
training the bridge damage model on the training set to obtain a judgment threshold; whenever the judgment threshold is exceeded, the corresponding bridge damage feature map is extracted;
testing the bridge damage model based on the test set and the judgment threshold; once the test is passed, the establishment of the bridge damage model is complete.
Step 5: acquiring a picture to be detected corresponding to the bridge damage characteristic diagram, and acquiring shooting time and unmanned aerial vehicle positioning information corresponding to the shooting time based on the picture to be detected;
step 6: acquiring the detection information of the corresponding ultrasonic detector based on the shooting time and unmanned aerial vehicle positioning information obtained in step 5; judging whether the bridge damage feature map is true based on the ultrasonic detector's detection information at the same shooting time and unmanned aerial vehicle position; if true, executing step 7, and if false, storing the information in a database;
step 6.1: acquiring detection information of a corresponding ultrasonic detector based on the shooting time and the unmanned aerial vehicle positioning information obtained in the step 5;
step 6.2: constructing a three-dimensional model based on the obtained ultrasonic detection information;
step 6.3: setting a depth threshold of ultrasonic detection information, determining a depth value of a detection surface based on the constructed three-dimensional model, comparing the depth value with the depth threshold, and outputting abnormal coordinates when the depth value exceeds the depth threshold;
step 6.4: judging whether the abnormal coordinates determined in step 6.3 are consistent with the coordinates of the bridge damage feature map; if they are consistent, the bridge damage feature map is judged true and step 7 is executed; if not, it is judged false and the information is stored in the database.
Step 7: constructing a bridge damage depth map based on the detection information of the ultrasonic detector; obtaining the corresponding bridge design diagram and comparing it with the bridge damage depth map to judge whether the detected damage is true; if true, outputting the bridge damage depth map, and if false, storing the information in a database.
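Step 7's comparison with the bridge design diagram can be read as: a measured depth counts as damage only where it exceeds whatever depth the design itself specifies at that coordinate. A sketch under that reading; the dictionary representation and the millimetre tolerance are illustrative assumptions:

```python
def confirm_against_design(damage_depth_map, design_depth_map, tolerance=2.0):
    """Keep only the coordinates where the measured depth exceeds the depth
    the bridge design specifies at the same coordinate by more than the
    tolerance (mm); everything else is treated as a designed feature.

    damage_depth_map: dict {(x, y): measured depth} from the ultrasonic data
    design_depth_map: dict {(x, y): designed depth} from the design diagram
    """
    confirmed = {}
    for xy, measured in damage_depth_map.items():
        designed = design_depth_map.get(xy, 0.0)   # flat surface by default
        if measured - designed > tolerance:
            confirmed[xy] = measured
    return confirmed
```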
Step 8: establishing a bridge damage degree identification model based on the original bridge data, the bridge damage data and the deep learning;
the original bridge data is the data of the existing bridge, wherein the original bridge data comprises necessary data for building the bridge; for example: bridge length, bridge width, bridge bearing capacity, bridge height, number of spandrel girders, etc.
Step 9: and inputting the data of the bridge to be detected and the damage depth of the bridge into a trained bridge damage degree identification model for calculation, so as to obtain the damage degree of the bridge.
By establishing the bridge damage degree identification model, the method evaluates the damage degree of the bridge to be detected, so that the extent of the damage is ascertained and decision guidance is provided for subsequent maintenance work.
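Before the identification model of steps 8-9 can run, the original bridge data fields listed above have to be encoded numerically. A trivial sketch; the field names, units and their ordering are assumptions based only on the examples given in the text:

```python
def to_feature_vector(bridge):
    """Turn the example original-bridge-data fields into a numeric feature
    vector for the damage degree identification model. Field names and their
    order are hypothetical; the patent only lists the quantities."""
    keys = ("length_m", "width_m", "bearing_capacity_t",
            "height_m", "spandrel_girder_count")
    return [float(bridge[k]) for k in keys]
```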
Each piece of original bridge data is combined with the corresponding bridge damage data into a data packet, and the data packets are divided into a training set and a testing set, which are used respectively to train and test the bridge damage degree identification model.
According to the invention, an unmanned aerial vehicle carries a camera that records on-site video of the bridge; the video is converted into pictures, and features are extracted by the bridge damage detection model to obtain a bridge damage feature map. Because each bridge damage feature map corresponds to a picture taken from the on-site video, the exact shooting time is known; from the shooting time the unmanned aerial vehicle's position at that moment can be retrieved, and from that position the location on the real bridge shown in the picture containing the damage feature map can be determined. Automatic detection of bridge damage is thereby achieved.
Because image recognition alone is prone to misjudgment, the method adds detection by an ultrasonic detector: to judge whether a bridge damage feature map is true, it suffices to look up the ultrasonic detector's detection information at the corresponding moment.
However, even with ultrasonic detection, a depth anomaly may be part of the design of the bridge itself, so misjudgment is still possible; the original bridge design diagram is therefore added for comparison, further reducing the possibility of misjudgment.
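The design-diagram comparison can be reduced to subtracting the expected (designed) surface depth from the ultrasonically measured depth at each sampled coordinate, so that intentional recesses are not misjudged as damage. A minimal sketch with illustrative grid coordinates, millimetre units, and a hypothetical tolerance:

```python
def flag_damage(measured, design, tolerance_mm=2.0):
    """measured and design map (row, col) grid coordinates to surface depth
    in mm; a point is flagged as damage only when the measured depth exceeds
    the designed depth by more than tolerance_mm, so features present in the
    bridge design diagram are not reported as damage."""
    flagged = {}
    for coord, depth in measured.items():
        deviation = depth - design.get(coord, 0.0)
        if deviation > tolerance_mm:
            flagged[coord] = deviation
    return flagged
```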
When the method is used, a path is first planned. After planning is completed, the unmanned aerial vehicle is controlled to fly to a preset position; the distance between the detection head of the ultrasonic detector and the detection surface is then adjusted so as to reach the optimal detection distance. Once the distance is determined, the unmanned aerial vehicle is controlled to carry out flight detection along the planned route.
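The standoff adjustment before flight detection amounts to a simple correction toward the probe's optimal detection distance; the 50 mm optimum and 2 mm deadband below are placeholders that depend on the ultrasonic probe actually used, not values from the patent:

```python
def standoff_correction(measured_mm, optimal_mm=50.0, deadband_mm=2.0):
    """Return the signed distance correction (positive = move the probe
    closer to the detection surface) needed to reach the optimal detection
    distance; within the deadband no correction is issued, so the UAV does
    not oscillate around the setpoint."""
    error = measured_mm - optimal_mm
    return 0.0 if abs(error) <= deadband_mm else error
```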
The foregoing examples merely represent specific embodiments of the present application, which are described in detail but are not to be construed as limiting the scope of the present application. It should be noted that those skilled in the art can make several variations and modifications without departing from the technical solution of the present application, all of which fall within its protection scope.
Claims (7)
1. A bridge damage depth detection method based on unmanned aerial vehicle aerial photography and ultrasonic detection, characterized by comprising the following steps:
step 1: determining the environment of the bridge to be detected; based on the determined environment, determining the inspection path of the unmanned aerial vehicle, the shooting position of the camera and the detection position of the ultrasonic detector, and ensuring that the axes of the output ends of the camera and the ultrasonic detector are parallel;
step 2: shooting and detecting the bridge on the determined inspection path through a camera and an ultrasonic detector which are mounted on the unmanned aerial vehicle;
step 3: preprocessing video data returned by a camera to obtain a picture data set to be detected;
step 4: performing feature extraction on the picture to be detected in the picture data set by adopting the trained bridge damage detection model to obtain a bridge damage feature map;
step 5: acquiring a picture to be detected corresponding to the bridge damage characteristic diagram, and acquiring shooting time and unmanned aerial vehicle positioning information corresponding to the shooting time based on the picture to be detected;
step 6: acquiring the detection information of the corresponding ultrasonic detector based on the shooting time and the unmanned aerial vehicle positioning information obtained in step 5; judging whether the bridge damage feature map is true based on the detection information of the ultrasonic detector at the same shooting time and the same unmanned aerial vehicle positioning information; if true, executing step 7, and if false, storing the information into a database;
step 7: constructing a bridge damage depth map based on the detection information of the ultrasonic detector; obtaining the corresponding bridge design diagram and comparing it with the bridge damage depth map to judge whether the damage is true; if true, outputting the bridge damage depth map, and if false, storing the information into a database.
2. The bridge damage depth detection method based on unmanned aerial vehicle aerial photography and ultrasonic detection according to claim 1, wherein step 1 comprises the following steps:
step 1.1: determining the environment of a bridge to be detected, and establishing a space bounding box of a three-dimensional model based on the determined environment;
step 1.2: determining unmanned aerial vehicle measuring points based on the established space bounding box of the three-dimensional model;
step 1.3: planning a measurement path of the unmanned aerial vehicle based on the determined unmanned aerial vehicle measurement points;
step 1.4: adjusting the inspection path of the unmanned aerial vehicle, the shooting position of the camera and the detection position of the ultrasonic detector based on the planned measurement path of the unmanned aerial vehicle, and ensuring that the axes of the output ends of the camera and the ultrasonic detector are parallel.
3. The bridge damage depth detection method based on unmanned aerial vehicle aerial photography and ultrasonic detection according to claim 1, wherein step 3 comprises the following steps:
step 3.1: converting video data returned by the unmanned aerial vehicle into initial pictures at intervals of 0.1 second to obtain an initial picture data set;
step 3.2: and preprocessing the initial picture in the initial picture data set to obtain a picture data set to be detected.
4. The bridge damage depth detection method based on unmanned aerial vehicle aerial photography and ultrasonic detection according to claim 1, wherein the bridge damage detection model is trained based on a deep learning algorithm; the picture to be detected and a standard picture are subjected to difference analysis, and a bridge damage feature map with difference contours in the picture to be detected is extracted;
the training steps of the bridge damage detection model are as follows:
building a bridge damage model based on the existing bridge damage map and a standard bridge map;
dividing the existing bridge damage map into a training set and a testing set;
training the bridge damage model based on the training set to obtain a judgment threshold; when the judgment threshold is exceeded, extracting the corresponding bridge damage feature map;
and testing the bridge damage model based on the test set and the judgment threshold value, and completing the establishment of the bridge damage model after the test is completed.
5. The bridge damage depth detection method based on unmanned aerial vehicle aerial photography and ultrasonic detection according to claim 1, wherein step 6 comprises the following steps:
step 6.1: acquiring detection information of a corresponding ultrasonic detector based on the shooting time and the unmanned aerial vehicle positioning information obtained in the step 5;
step 6.2: constructing a three-dimensional model based on the obtained ultrasonic detection information;
step 6.3: setting a depth threshold of ultrasonic detection information, determining a depth value of a detection surface based on the constructed three-dimensional model, comparing the depth value with the depth threshold, and outputting abnormal coordinates when the depth value exceeds the depth threshold;
step 6.4: judging whether the abnormal coordinates determined in step 6.3 are consistent with the coordinates of the bridge damage feature map; if consistent, the bridge damage feature map is judged to be true and step 7 is executed; if false, the information is stored into a database.
6. The bridge damage depth detection method based on unmanned aerial vehicle aerial photography and ultrasonic detection according to claim 1, characterized by further comprising the following steps:
step 8: establishing a bridge damage degree identification model based on the original bridge data, the bridge damage data and the deep learning;
step 9: inputting the data of the bridge to be detected and the bridge damage depth into the trained bridge damage degree identification model for calculation, so as to obtain the damage degree of the bridge.
7. The bridge damage depth detection method based on unmanned aerial vehicle aerial photography and ultrasonic detection according to claim 6, wherein each piece of original bridge data and the corresponding bridge damage data are constructed into a data packet, and the data packets are divided into a training set and a testing set; the training set and the testing set are used to train and test the bridge damage degree identification model.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210691357.3A CN115096268B (en) | 2022-06-17 | 2022-06-17 | Bridge damage depth detection method based on unmanned aerial vehicle aerial photography and ultrasonic detection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115096268A CN115096268A (en) | 2022-09-23 |
CN115096268B true CN115096268B (en) | 2023-06-30 |
Family
ID=83290032
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210691357.3A Active CN115096268B (en) | 2022-06-17 | 2022-06-17 | Bridge damage depth detection method based on unmanned aerial vehicle aerial photography and ultrasonic detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115096268B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117036997A (en) * | 2023-08-01 | 2023-11-10 | 中交第三公路工程局有限公司 | Steel box girder damage identification method |
CN118134929B (en) * | 2024-05-08 | 2024-06-28 | 深圳市深水水务咨询有限公司 | Bridge engineering anomaly detection method based on unmanned aerial vehicle image acquisition |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113928563A (en) * | 2021-11-19 | 2022-01-14 | 武汉珈鹰智能科技有限公司 | A unmanned aerial vehicle for detecting bridge bottom |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106774384A (en) * | 2016-12-05 | 2017-05-31 | 王源浩 | A kind of bridge machinery intelligent barrier avoiding robot |
CN107066995A (en) * | 2017-05-25 | 2017-08-18 | 中国矿业大学 | A kind of remote sensing images Bridges Detection based on convolutional neural networks |
CN111340772A (en) * | 2020-02-24 | 2020-06-26 | 湖北工业大学 | Reinforced concrete bridge damage detection system and method based on mobile terminal |
CN111638220A (en) * | 2020-06-30 | 2020-09-08 | 南京林业大学 | Bridge crack detection system of interactive intelligent unmanned aerial vehicle group based on 5G |
CN112098326B (en) * | 2020-08-20 | 2022-09-30 | 东南大学 | Automatic detection method and system for bridge diseases |
CN112819766A (en) * | 2021-01-25 | 2021-05-18 | 武汉理工大学 | Bridge defect overhauling method, device, system and storage medium |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113928563A (en) * | 2021-11-19 | 2022-01-14 | 武汉珈鹰智能科技有限公司 | A unmanned aerial vehicle for detecting bridge bottom |
Non-Patent Citations (1)
Title |
---|
Design and Implementation of UAVs in Road and Bridge Defect Detection; Chen Xianlong; Chen Xiaolong; Zhao Cheng; He Zhigang; Bulletin of Surveying and Mapping (No. 04) *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||