
CN110634138A - Bridge deformation monitoring method, device and equipment based on visual perception

Bridge deformation monitoring method, device and equipment based on visual perception

Info

Publication number
CN110634138A
CN110634138A
Authority
CN
China
Prior art keywords
dimensional code
code pattern
target
image
bridge
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910918209.9A
Other languages
Chinese (zh)
Inventor
赵文一
陈宇轩
何显银
宋杰
董梅
胡辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Ruhr Technology Co Ltd
Original Assignee
Hangzhou Ruhr Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Ruhr Technology Co Ltd filed Critical Hangzhou Ruhr Technology Co Ltd
Priority to CN201910918209.9A priority Critical patent/CN110634138A/en
Publication of CN110634138A publication Critical patent/CN110634138A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/0002: Inspection of images, e.g. flaw detection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras
    • G06T7/73: Determining position or orientation of objects or cameras using feature-based methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a method, a device and equipment for monitoring bridge displacement based on visual perception. The method comprises the following steps: acquiring a target image, wherein the target image comprises a rectangular two-dimensional code pattern; identifying characteristic points of the two-dimensional code pattern; and determining the displacement of the bridge according to the coordinates of the characteristic points. According to the technical scheme of the embodiment of the invention, the displacement of the bridge is determined by identifying the characteristic points of the two-dimensional code pattern on the target, so that real-time monitoring of the bridge displacement is realized; the monitoring precision is high, the noise resistance is strong, and the provided monitoring method is widely adaptable and not easily limited by the environment.

Description

Bridge deformation monitoring method, device and equipment based on visual perception
Technical Field
The embodiment of the invention relates to the technical field of image processing, in particular to a method, a device and equipment for monitoring bridge deformation based on visual perception.
Background
With the increasing development of traffic, bridges have become important components of traffic infrastructure, and health monitoring of bridges has become increasingly important. The bridge deflection or bridge displacement can directly reflect the health condition of the bridge and is also an important parameter for representing the rigidity and the load capacity of the bridge structure. Thus, bridge displacement is an important parameter for bridge health monitoring.
Existing bridge displacement algorithms mainly monitor the displacement of the bridge at close range. Because the actual bridge site environment is very complex and bridge structures are large, close-range installation and measurement of monitoring equipment are difficult to guarantee, and long-distance, long-term monitoring means are often needed. In addition, conventional bridge displacement monitoring systems adopt an artificial target, mainly a single characteristic point or characteristic line, which is easily limited by the scene, has low recognition precision, and is easily affected by noise.
Disclosure of Invention
The invention provides a bridge displacement monitoring method, device and equipment based on visual perception, which realize real-time monitoring of bridge displacement, are suitable for complex scenes, and achieve high identification precision.
In a first aspect, an embodiment of the present invention provides a bridge displacement monitoring method based on visual perception, where the method includes:
acquiring a target image, wherein the target image comprises a rectangular two-dimensional code pattern;
identifying characteristic points of the two-dimensional code pattern;
and determining the displacement of the bridge according to the coordinates of the characteristic points.
In a second aspect, an embodiment of the present invention further provides a device for monitoring bridge displacement based on visual perception, where the device includes:
the image acquisition module is used for acquiring a target image, wherein the target image comprises a rectangular two-dimensional code pattern;
the characteristic identification module is used for identifying characteristic points of the two-dimensional code pattern;
and the displacement determining module is used for determining the displacement of the bridge according to the coordinates of the characteristic points.
In a third aspect, an embodiment of the present invention further provides a system for monitoring bridge displacement based on visual perception, where the system includes:
the two-dimensional code target is arranged on a set plane of the bridge and is positioned on the same side of the bridge as the image acquisition module, wherein the two-dimensional code target comprises a two-dimensional code pattern;
the image acquisition module is used for acquiring a target image of the two-dimensional code target and sending the target image to the image processing module;
and the image processing module is used for receiving the target image, identifying the characteristic points of the target image and determining the displacement of the bridge according to the coordinates of the characteristic points.
In a fourth aspect, an embodiment of the present invention further provides an apparatus, where the apparatus includes:
one or more processors;
a memory for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors implement the method for monitoring bridge displacement provided by any embodiment of the invention.
According to the technical scheme of the embodiment of the invention, the target image containing the two-dimension code pattern is obtained, the characteristic point of the two-dimension code group is identified, and the displacement of the bridge is determined according to the coordinate information of the characteristic point, so that the real-time monitoring of the bridge displacement is realized, the monitoring method is not easily influenced by the environment, the anti-noise capability is strong, the displacement is monitored through the characteristic point of the two-dimension code pattern, the monitoring precision is high, and the application range is wide.
Drawings
Fig. 1 is a flowchart of a method for monitoring bridge displacement based on visual perception according to a first embodiment of the present invention;
FIG. 2 is a flowchart of a method for monitoring bridge displacement based on visual perception according to a second embodiment of the present invention;
FIG. 3 is a flowchart of a method for monitoring bridge displacement based on visual perception according to a third embodiment of the present invention;
FIG. 4 is a schematic view of a bridge displacement monitoring device based on visual perception according to a fourth embodiment of the present invention;
FIG. 5 is a schematic view of a monitoring system for bridge displacement based on visual perception according to a fifth embodiment of the present invention;
fig. 6 is a schematic diagram of an apparatus in a sixth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of a method for monitoring bridge displacement based on visual perception according to an embodiment of the present invention. This embodiment is applicable to bridge displacement monitoring scenarios, and the method may be executed by a bridge displacement monitoring device or system, where the device may be implemented in software and/or hardware. The method specifically includes the following steps:
step 110, obtaining a target image, wherein the target image comprises a rectangular two-dimensional code pattern.
The target image is an image of a target of the bridge, and may be an image captured by the target image capturing device in real time. The target image may be composed of two colors, black and white, or may be any set color, and the number of the colors is preferably as small as possible, and preferably two colors, so as to reduce the size of the space occupied by the image and increase the speed of image processing. The two-dimensional code pattern refers to a two-dimensional code mark drawn on the target.
Optionally, the two-dimensional code pattern may be two-dimensional code marks generated by using a set algorithm, each mark corresponds to a code, and the mark can be uniquely identified by the code.
Optionally, the two-dimensional code pattern is an ArUco mark.
The ArUco mark is a two-dimensional code mark; each mark corresponds to a code, and the code uniquely identifies the mark. Typically, an ArUco mark is surrounded by a black border to speed up detection of the mark in the image.
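For illustration only (not part of the patent disclosure), the following sketch assumes OpenCV's aruco contrib module is available; the dictionary choice (DICT_4X4_50) and input file name are placeholders, and the parameter-creation call differs between OpenCV versions:

```python
import cv2

# A minimal sketch, assuming an OpenCV build with the aruco contrib module.
image = cv2.imread("target_frame.png")          # assumed input frame from the camera
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters_create()  # cv2.aruco.DetectorParameters() on OpenCV >= 4.7

# corners: one 4x2 array of corner pixels per detected marker;
# ids: the code that uniquely identifies each marker, as described above
corners, ids, _rejected = cv2.aruco.detectMarkers(gray, aruco_dict, parameters=params)
```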
And 120, identifying characteristic points of the two-dimensional code pattern.
The feature points may be points set by a user, or default feature points, and the number of feature points may be 2, 3, 4, or more. Specifically, the feature points may be the four corner points of the two-dimensional code pattern, and may further include the center point of the two-dimensional code pattern, or any point that can be quickly identified in the two-dimensional code pattern.
Optionally, identifying the feature points of the two-dimensional code pattern includes:
determining whether the two-dimension code pattern is a target two-dimension code pattern;
and after the two-dimension code pattern is determined to be the target two-dimension code pattern, identifying four corner points of the two-dimension code pattern as characteristic points.
Further, determining whether the two-dimensional code pattern is a target two-dimensional code pattern comprises:
and determining whether the two-dimensional code pattern is a target two-dimensional code pattern according to the coding information of the two-dimensional code pattern.
Specifically, four corners of the code pattern may be identified by a Harris corner detection algorithm.
For example, whether the coding information of the current two-dimensional code pattern is consistent with a preset code or not can be judged, and if so, the two-dimensional code pattern is determined to be a target two-dimensional code pattern; or the current two-dimensional code pattern can be rotated by 90 degrees, 180 degrees and 270 degrees clockwise, four pieces of coding information corresponding to each image after rotation and before rotation are obtained, whether coding information consistent with a preset code exists in the four pieces of coding information or not is judged, and if the coding information exists, the two-dimensional code pattern is determined to be the target two-dimensional code pattern.
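As a non-authoritative sketch of the Harris corner detection mentioned above, using OpenCV: the block size, aperture size, k value and response threshold are assumed tuning values, and the four marker corners would still have to be selected from the returned candidates.

```python
import cv2
import numpy as np

def harris_corner_candidates(marker_gray: np.ndarray) -> np.ndarray:
    """Return (x, y) pixel coordinates whose Harris corner response is strong."""
    response = cv2.cornerHarris(np.float32(marker_gray), blockSize=2, ksize=3, k=0.04)
    ys, xs = np.where(response > 0.01 * response.max())   # assumed response threshold
    return np.stack([xs, ys], axis=1)
```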
And step 130, determining the displacement of the bridge according to the coordinates of the characteristic points.
The displacement of the bridge comprises displacement perpendicular to the direction of the bridge body, and also can comprise displacement of the bridge in the horizontal, vertical and longitudinal directions. Typically, bridge displacement may also be referred to as bridge deflection.
Specifically, the displacement of the bridge may be determined according to a change between the coordinates of the feature points and preset coordinates.
For example, the feature point may be a center point, a corner point, or another point having an identifying feature of the two-dimensional code pattern, an initial position of the feature point may be known, after the feature point is identified according to the current time, coordinates of the feature point in world coordinates are determined according to coordinate transformation, and displacement of the bridge is determined according to a variation between the coordinates of the feature point and the initial position at the current time.
According to the technical scheme of the embodiment of the invention, the target image containing the two-dimension code pattern is obtained, the characteristic point of the two-dimension code group is identified, and the displacement of the bridge is determined according to the coordinate information of the characteristic point, so that the real-time monitoring of the bridge displacement is realized, the monitoring method is not easily influenced by the environment, the anti-noise capability is strong, the displacement is monitored through the characteristic point of the two-dimension code pattern, the monitoring precision is high, and the application range is wide.
Example two
Fig. 2 is a flowchart of a method for monitoring bridge displacement based on visual perception according to a second embodiment of the present invention, which is a further supplement to and refinement of the previous embodiment, and the method further includes: performing image segmentation on the target image according to an adaptive threshold algorithm; and extracting a rectangular image containing a rectangular outline in the target image after image segmentation, and determining that the rectangular image is the two-dimensional code pattern.
As shown in fig. 2, the method for monitoring bridge displacement provided by this embodiment includes:
step 200, obtaining a target image, wherein the target image comprises a rectangular two-dimensional code pattern.
And step 210, carrying out image segmentation on the target image according to an adaptive threshold algorithm.
Because the device that captures the target is usually some distance away from the target, the field of view of the capture device often contains other objects, such as trees, the bridge body, or targets installed at other locations. The acquired target image therefore needs to be segmented to remove the interference of these objects with two-dimensional code pattern recognition; at the same time, segmentation reduces the memory footprint of the image, which speeds up image processing and improves processing efficiency.
The image segmentation refers to a technology of extracting a two-dimensional code pattern region in a target image, the self-adaptive threshold algorithm is an iterative algorithm which utilizes a threshold of local features of the image to replace a global threshold, and the self-adaptive threshold algorithm has strong robustness to different illumination conditions, is slightly influenced by the environment and has high segmentation precision.
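A minimal sketch of this step with OpenCV; the neighbourhood block size and the constant C below are assumed values, not specified by the patent:

```python
import cv2

target_image = cv2.imread("target_frame.png")            # assumed input frame
gray = cv2.cvtColor(target_image, cv2.COLOR_BGR2GRAY)

# Each pixel is compared with the mean of its local neighbourhood minus C,
# instead of a single global threshold, which is robust to uneven lighting.
binary = cv2.adaptiveThreshold(
    gray, 255,
    cv2.ADAPTIVE_THRESH_MEAN_C,
    cv2.THRESH_BINARY_INV,
    31,   # blockSize (assumed)
    7,    # C (assumed)
)
```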
And step 220, extracting a rectangular image containing a rectangular outline in the target image after image segmentation, and determining that the rectangular image is the two-dimensional code pattern.
Since the two-dimensional code pattern in the target image is rectangular (typically square), the two-dimensional code pattern can be further extracted by extracting the rectangles in the image.
Optionally, the extracting a rectangular image containing a rectangular contour in the target image after image segmentation includes:
and extracting a rectangular image containing a rectangular outline in the target image after the image segmentation by performing polygon approximation on the target image after the image segmentation.
Specifically, the target image after image segmentation can be subjected to polygon approximation according to a Douglas-Peucker algorithm.
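A rough sketch of the rectangle extraction, assuming the segmented binary image from the adaptive-threshold step above; the approximation tolerance and minimum area are assumed values:

```python
import cv2

# `binary` is the segmented image produced by the adaptive-threshold step above.
contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

marker_candidates = []
for contour in contours:
    perimeter = cv2.arcLength(contour, True)
    approx = cv2.approxPolyDP(contour, 0.02 * perimeter, True)   # Douglas-Peucker
    # Keep only convex quadrilaterals of non-trivial size as candidate markers.
    if len(approx) == 4 and cv2.isContourConvex(approx) and cv2.contourArea(approx) > 100:
        marker_candidates.append(approx.reshape(4, 2))
```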
And 230, converting the two-dimensional code pattern into a two-dimensional code pattern with a front view by calculating a homography matrix.
Because the capture device is often tilted at a certain angle when acquiring the target image, the acquired image is not a front view, which affects the accuracy of bridge displacement monitoring. The two-dimensional code pattern therefore needs to be converted into a front view first.
The homography matrix is a perspective transformation matrix that describes the mapping relation between points of the same plane imaged from different angles, and can be used for angle correction of images.
The two-dimensional code pattern is converted into the front view through the homography matrix, so that the information of the two-dimensional code can be quickly identified, and the speed and the accuracy of extracting the pattern information of the two-dimensional code are accelerated.
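A minimal sketch of the view correction: assuming the four detected corners are ordered top-left, top-right, bottom-right, bottom-left, the homography to a square front view can be computed and applied as follows (the output size is an assumed value):

```python
import cv2
import numpy as np

def rectify_marker(gray, quad, size=200):
    """Warp the detected quadrilateral `quad` (4x2 corner array) to a
    size x size front view of the two-dimensional code pattern."""
    src = np.float32(quad)
    dst = np.float32([[0, 0], [size - 1, 0], [size - 1, size - 1], [0, size - 1]])
    homography = cv2.getPerspectiveTransform(src, dst)   # exact homography from 4 point pairs
    return cv2.warpPerspective(gray, homography, (size, size))
```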
And 240, performing binarization processing on the two-dimensional code pattern with the front view as the view direction.
The two-dimensional code pattern is generally a pattern including two or more colors, and preferably, may be a pattern including two colors, such as black and white. The binarization processing is to set the gray value of a pixel point in the two-dimensional code pattern to be 0 or 255, specifically, by setting a threshold, the gray value of the pixel point larger than the threshold is set to be 255, and the gray value of the pixel point smaller than the threshold is set to be 0. The data volume in the image is greatly reduced through binarization processing, and the information extraction speed is accelerated.
And 250, extracting the coding information of the two-dimensional code pattern after the binarization processing, and determining whether the two-dimensional code pattern is a target two-dimensional code pattern according to the coding information.
Specifically, the two-dimensional code pattern after binarization processing may be subjected to gridding according to a preset size or a template, or subjected to gridding by detecting a gray value of each pixel point. The coding of the black grid is set to 0 and the coding of the white grid is set to 1. The codes of all grids can be combined into coded information of a two-dimensional code pattern.
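A sketch of the gridding and code extraction, under the assumption of a square cell layout; the grid size (6 x 6) is an assumption that depends on the marker dictionary actually used:

```python
import numpy as np

def extract_bits(binary_marker: np.ndarray, cells: int = 6) -> str:
    """Split the rectified, binarized marker into a cells x cells grid and
    read each cell as '0' (black) or '1' (white) from its mean intensity."""
    h, w = binary_marker.shape
    bits = []
    for row in range(cells):
        for col in range(cells):
            cell = binary_marker[row * h // cells:(row + 1) * h // cells,
                                 col * w // cells:(col + 1) * w // cells]
            bits.append("1" if cell.mean() > 127 else "0")
    return "".join(bits)
```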
And step 260, after the two-dimensional code pattern is determined to be the target two-dimensional code pattern, identifying four corner points of the two-dimensional code pattern as feature points.
And 270, determining the pose of the device for shooting the target image according to the projection relation between the coordinates of the target image of the four corner points and the coordinates of the device for shooting the target image in the coordinate system.
The four corner points refer to an upper left corner vertex, an upper right corner vertex, a lower left corner vertex and a lower right corner vertex of the two-dimensional code pattern. The image coordinates are two-dimensional coordinates at which the target image is located.
And step 280, determining the central coordinate of the central point of the two-dimensional code pattern under a world coordinate system according to the pose.
For example, if the device that captures the target image is a camera, the pose of the camera is determined from the projection relationship between the target-image coordinates of the four corner points and their coordinates in the camera coordinate system; that is, the camera pose is calculated from the feature points. Specifically, the pose is obtained from the four corner points and their corresponding imaging points in the camera's three-dimensional coordinate system via the projection relationship.
Optionally, determining a central coordinate of the central point of the two-dimensional code pattern in a world coordinate system according to the pose includes:
acquiring a conversion relation between a coordinate system of the device for shooting the target image and the world coordinate system;
and determining the central coordinate of the central point of the two-dimensional code pattern under a world coordinate system according to the conversion relation and the pose.
Specifically, after the pose of the device that captures the target image is obtained, the conversion relationship between the camera coordinates and the image coordinates can be determined; the conversion relationship between the camera coordinates and the world coordinates can be determined from the installation information of the capture device. From these two relationships, the conversion between image coordinates and world coordinates is obtained, and the coordinates of any point of the two-dimensional code pattern in the world coordinate system can be determined accordingly. This point is usually chosen as the center point; its displacement is obtained from its coordinate in the world coordinate system and its original coordinate, and the bridge displacement is then determined from the displacement of the center point.
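A hedged sketch of the pose step using OpenCV's solvePnP: the marker side length, camera intrinsics and distortion coefficients below are placeholder calibration data, and the final conversion to world coordinates would additionally use the camera installation (extrinsic) information described above:

```python
import cv2
import numpy as np

SIDE = 0.20  # marker side length in metres (assumed)
# Corner coordinates in the marker's own frame (TL, TR, BR, BL), centre at the origin.
OBJECT_POINTS = np.float32([[-SIDE / 2,  SIDE / 2, 0], [ SIDE / 2,  SIDE / 2, 0],
                            [ SIDE / 2, -SIDE / 2, 0], [-SIDE / 2, -SIDE / 2, 0]])
CAMERA_MATRIX = np.array([[2000., 0., 960.],
                          [0., 2000., 540.],
                          [0.,    0.,   1.]])   # placeholder intrinsics
DIST_COEFFS = np.zeros(5)                       # placeholder distortion coefficients

def marker_centre_in_camera_frame(image_corners):
    """Solve the camera pose from the four corner projections and return the
    marker centre expressed in the camera coordinate system."""
    ok, rvec, tvec = cv2.solvePnP(OBJECT_POINTS, np.float32(image_corners),
                                  CAMERA_MATRIX, DIST_COEFFS)
    # The centre is the origin of the marker frame, so in camera coordinates
    # it coincides with the translation vector.
    return tvec.reshape(3)
```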
And 290, filtering the central coordinate according to a Kalman filtering algorithm to update the central coordinate.
Because of the special working conditions of bridge monitoring, the capture device (such as a camera) is usually installed far away from the monitoring point, and for long-span bridges the working radius can exceed 500 meters. When a camera images over such long distances, image jitter easily occurs between adjacent frames, and high temperatures at noon can also distort the image, so the coordinates of the center point need to be corrected by filtering.
For example, the motion of the central point on the bridge can be regarded as a straight reciprocating motion in the vertical direction, and a discrete linear system is used for modeling, and the system state equation is as follows:
X(k+1)=AX(k)+GW(k)
Z(k+1)=HX(k+1)+V(k+1)
wherein k is the discrete time, X(k) represents the system state vector at time k, A is the state transition matrix, Z(k) represents the observed value of the system state at time k, H is the observation matrix, G is the noise coefficient matrix, W(k) is white process noise, and V(k) is observation noise.
The means of W(k) and V(k) are defined as 0, and their covariance matrices are Q and R, respectively. Based on the state equation, the system state at the next moment is predicted from the state at the previous moment; that is, given the state at time k, the state at time k+1 can be predicted from the system model.
The Kalman filter is expressed as follows:
(1) State one-step prediction:
X̂(k+1|k) = A X̂(k|k)
where X̂(k|k) is the estimate of the system state vector X(k), and the initial estimate X̂(0|0) is 0.
(2) Prediction covariance matrix:
P(k+1|k) = A P(k|k) A^T + G Q G^T
(3) Filter gain matrix:
K(k+1) = P(k+1|k) H^T [H P(k+1|k) H^T + R]^(-1)
(4) State update:
X̂(k+1|k+1) = X̂(k+1|k) + K(k+1) [Z(k+1) - H X̂(k+1|k)]
(5) Covariance matrix update:
P(k+1|k+1) = [I_n - K(k+1) H] P(k+1|k)
The kinematic equations of the linear reciprocating motion are established as:
x(k+1) = x(k) + ẋ(k) ΔT
ẋ(k+1) = ẋ(k)
where ΔT represents the sampling interval.
The state vector of the expanded system is thus:
X(k) = [x(k), ẋ(k)]^T
The observed value is:
Z(k) = [x(k)]
The state transition matrix is:
A = [1  ΔT]
    [0   1]
The observation matrix is:
H = [1  0]
therefore, a Kalman filtering model is established to filter the measured coordinate information.
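A minimal sketch of the filter above applied to the measured vertical center coordinate, assuming the constant-velocity model described; the process and measurement noise variances q and r are tuning values, and the noise coefficient matrix G is folded into Q for simplicity:

```python
import numpy as np

def kalman_filter_vertical(measurements, dt, q=1e-4, r=1e-2):
    """Filter a sequence of measured vertical coordinates with the
    constant-velocity Kalman model described above."""
    A = np.array([[1.0, dt], [0.0, 1.0]])   # state transition matrix
    H = np.array([[1.0, 0.0]])              # observation matrix
    Q = q * np.eye(2)                       # process noise covariance (assumed tuning)
    R = np.array([[r]])                     # measurement noise covariance (assumed tuning)
    x = np.zeros((2, 1))                    # initial state estimate is 0
    P = np.eye(2)
    filtered = []
    for z in measurements:
        # (1)-(2) one-step prediction of state and covariance
        x = A @ x
        P = A @ P @ A.T + Q
        # (3) filter gain
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        # (4)-(5) state and covariance update with the new measurement
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        filtered.append(float(x[0, 0]))
    return filtered
```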
And 300, calculating the displacement of the bridge according to the center coordinates.
According to the technical scheme of the embodiment of the invention, after the target image is obtained, a series of processing including image segmentation, rectangular contour extraction, binarization and code extraction is carried out, so that the size of the image is reduced, the image processing speed is improved, the code information of the two-dimensional code pattern is accurately obtained through binarization and code extraction, the identification speed and the identification precision of the two-dimensional code pattern are improved, and a good basis is laid for subsequent displacement calculation; the displacement of the bridge is determined by detecting the angular points and determining the coordinates of the characteristic points, and the coordinates of the characteristic points are corrected by Kalman filtering, so that the precision of displacement calculation is improved; the automatic monitoring of bridge displacement is realized, the monitoring precision is high, the anti-interference capability is strong, the bridge displacement monitoring device is not easily interfered by the environment, and the application range is wide.
EXAMPLE III
Fig. 3 is a flowchart of a method for monitoring bridge displacement based on visual perception according to a third embodiment of the present invention, which is a further refinement of the first embodiment, and as shown in fig. 3, the method for monitoring bridge displacement according to the present embodiment includes:
step 310, a target image is obtained, wherein the target image comprises a rectangular two-dimensional code pattern.
And 320, converting the two-dimensional code pattern into a two-dimensional code pattern with a front view by calculating a homography matrix.
Step 330, dividing the pixels of the two-dimensional code pattern with the front view direction into a high threshold class larger than or equal to a set threshold value and a low threshold class smaller than the set threshold value according to the Otsu method.
The Otsu method, also called the maximum between-class variance method or OTSU, divides the data in the image into two classes by a threshold: the gray levels of the pixels in one class are all smaller than the threshold, and the gray levels of the pixels in the other class are all greater than or equal to the threshold. The segmentation threshold is chosen so that the variance between the two classes is the largest.
Specifically, the gray value of each pixel of the two-dimensional code pattern can be extracted, and the threshold with the largest inter-class variance is obtained by adopting a traversal method according to the Otsu method. And then the pixels of the image are divided into two classes according to the threshold value, namely a high threshold value class which is larger than or equal to the threshold value and a low threshold value class which is smaller than the threshold value. The image segmentation is carried out by adopting the Otsu method, and the segmentation accuracy is high.
And 340, setting the gray value of the pixel of the high threshold class to be 255, and setting the gray value of the pixel of the low threshold class to be 0, so as to obtain the two-dimensional code pattern after the binarization processing.
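A sketch of the traversal-based Otsu binarization of steps 330-340; equivalent results can also be obtained with cv2.threshold and the THRESH_OTSU flag, which is an alternative rather than the patent's stated procedure:

```python
import numpy as np

def otsu_binarize(gray: np.ndarray) -> np.ndarray:
    """Find the threshold maximising the between-class variance by traversal,
    then set low-class pixels to 0 and high-class pixels to 255."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    prob = hist / hist.sum()
    best_threshold, best_variance = 0, -1.0
    for t in range(1, 256):
        w0, w1 = prob[:t].sum(), prob[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (np.arange(t) * prob[:t]).sum() / w0
        mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
        variance = w0 * w1 * (mu0 - mu1) ** 2   # between-class variance
        if variance > best_variance:
            best_variance, best_threshold = variance, t
    return np.where(gray >= best_threshold, 255, 0).astype(np.uint8)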
And 350, clockwise rotating the two-dimensional code pattern subjected to binarization processing by 90 degrees, 180 degrees and 270 degrees to obtain two-dimensional code images in four directions.
Because the two-dimensional code target is generally square and may be rotated when it is pasted or placed, after binarizing the two-dimensional code pattern, the pattern needs to be rotated clockwise by 90, 180 and 270 degrees, so that the two-dimensional codes in four directions are obtained. The advantage of this arrangement is that the orientation in which the two-dimensional code target is pasted or placed does not need to be restricted, which relaxes the installation requirements and improves the applicability of the two-dimensional code target.
And 360, recognizing the data information of the two-dimensional code patterns in the four directions, and sequentially connecting the data of the two-dimensional code patterns in each direction into a binary code, thereby obtaining the binary codes in the four directions.
Specifically, identifying the data information of the two-dimensional code pattern in the four directions includes: gridding the two-dimensional code pattern in the four directions, and setting the data information of the grid with the pixel value of 0 as 0 and setting the data information of the grid with the pixel value of 1 as 1.
The gridding refers to gridding according to a preset size or a template, or gridding by detecting gray values of all pixel points. Namely, the two-dimensional code pattern is divided into regular black and white lattices.
After the data information of the two-dimensional code patterns in the four directions is identified, a code is extracted from the two-dimensional code pattern in each direction; specifically, the data information of the two-dimensional code image is sequentially connected into a binary code in a fixed order. The order may be left-to-right, top-to-bottom, or another designated order.
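As an illustration, reusing the hypothetical extract_bits helper from the sketch in the second embodiment, the four orientations and their binary codes can be generated as follows (np.rot90 with a negative k rotates clockwise):

```python
import numpy as np

def codes_in_four_directions(binary_marker: np.ndarray, cells: int = 6) -> list:
    """Read the marker bits in the original orientation and after clockwise
    rotations of 90, 180 and 270 degrees, returning one binary code string
    per orientation."""
    return [extract_bits(np.rot90(binary_marker, k=-k), cells) for k in range(4)]
```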
And step 370, when one of the binary codes in the four directions is matched with a preset binary code, determining that the two-dimensional code pattern is a target two-dimensional code pattern.
It should be understood that each two-dimensional code target generates a two-dimensional code pattern according to a certain algorithm when generating, and obtains a preset binary code of the two-dimensional code pattern.
Because the device that captures the target is far away from the target, and multiple numbered two-dimensional code targets may be placed on the bridge to monitor the displacement at different positions, several two-dimensional code targets with different numbers may appear in one target image at the same time. By matching the binary codes extracted from the target image, it is possible to determine whether the current two-dimensional code pattern is the desired one, or to determine its code or position.
Specifically, the correspondence between each two-dimensional code target number, the two-dimensional code pattern size, and the corresponding preset binary code may be established in advance, and whether the two-dimensional code pattern is the target two-dimensional code pattern is determined according to this correspondence and the binary codes in the four directions. When one of the binary codes in the four directions matches a preset binary code in the correspondence table, the number of the two-dimensional code target and the size information of the two-dimensional code pattern are obtained, and the two-dimensional code pattern is thereby determined to be the target two-dimensional code pattern. Multiple two-dimensional code patterns, together with their corresponding numbers and size information, can also be identified simultaneously.
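A hypothetical sketch of such a correspondence table and the matching step; the codes, numbers and sizes below are placeholders, not values from the patent:

```python
# Mapping from preset binary code to (target number, pattern size in metres); placeholder entries.
TARGET_TABLE = {
    "110010011010": (1, 0.20),
    "001101100101": (2, 0.20),
}

def match_target(codes_four_directions):
    """Return (number, size) if any of the four orientation codes is known, else None."""
    for code in codes_four_directions:
        if code in TARGET_TABLE:
            return TARGET_TABLE[code]
    return None
```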
And 380, after the two-dimension code pattern is determined to be the target two-dimension code pattern, identifying four corner points of the two-dimension code pattern as characteristic points.
And 390, determining the displacement of the bridge according to the coordinates of the characteristic points.
According to the technical scheme of the embodiment of the invention, the images are classified by the Otsu method, the image segmentation is carried out by searching the threshold with the largest error among the classes, and the segmentation method is accurate; the classified images are binarized, so that the data information of the images is reduced, and the extraction speed of the image information is accelerated; the two-dimensional code image is rotated in three directions, and the coding information of the two-dimensional code patterns in four directions is extracted, so that the problem of rotation when the two-dimensional code target is fixed is solved, and the efficiency and the accuracy of two-dimensional code pattern recognition are improved; the displacement of the bridge is determined by detecting the angular points and determining the coordinates of the characteristic points, so that the automatic monitoring of the bridge displacement is realized, the monitoring precision is high, the anti-interference capability is strong, the bridge is not easily interfered by the environment, and the application range is wide.
Example four
Fig. 4 is a schematic structural diagram of a monitoring device for bridge displacement based on visual perception according to a fourth embodiment of the present invention, as shown in fig. 4, the monitoring device includes: an image acquisition module 410, a feature recognition module 420, and a displacement determination module 430.
The image acquisition module 410 is configured to acquire a target image, where the target image includes a rectangular two-dimensional code pattern; a feature recognition module 420, configured to recognize feature points of the two-dimensional code pattern; and a displacement determining module 430, configured to determine the displacement of the bridge according to the coordinates of the feature points.
According to the technical scheme of the embodiment of the invention, the target image containing the two-dimension code pattern is obtained, the characteristic point of the two-dimension code group is identified, and the displacement of the bridge is determined according to the coordinate information of the characteristic point, so that the real-time monitoring of the bridge displacement is realized, the monitoring method is not easily influenced by the environment, the anti-noise capability is strong, the displacement is monitored through the characteristic point of the two-dimension code pattern, the monitoring precision is high, and the application range is wide.
Optionally, the feature recognition module 420 includes:
a target pattern determination unit for determining whether the two-dimensional code pattern is a target two-dimensional code pattern;
and the characteristic point identification unit is used for identifying four corner points of the two-dimensional code pattern as characteristic points after the two-dimensional code pattern is determined to be the target two-dimensional code pattern.
Optionally, the monitoring device for bridge displacement further includes:
the image segmentation module is used for carrying out image segmentation on the target image according to an adaptive threshold algorithm;
and the rectangular outline extraction module is used for extracting a rectangular image containing a rectangular outline in the target image after image segmentation and determining that the rectangular image is the two-dimensional code pattern.
Optionally, the target pattern determining unit includes:
the view direction transformation subunit is used for transforming the two-dimensional code pattern into a two-dimensional code pattern with a front view in the view direction by calculating a homography matrix;
a binarization processing subunit, configured to perform binarization processing on the two-dimensional code pattern with the front view as a viewing direction;
and the target pattern determining subunit is used for extracting the coding information of the two-dimensional code pattern after the binarization processing, and determining whether the two-dimensional code pattern is the target two-dimensional code pattern according to the coding information.
Optionally, the target pattern determining unit further includes:
and the image dividing subunit is used for dividing the pixels of the two-dimensional code pattern with the front view as the view direction into a high threshold class which is larger than or equal to a set threshold value and a low threshold class which is smaller than the set threshold value according to the Otsu method.
Correspondingly, the binarization processing subunit is specifically configured to: and setting the gray value of the pixel of the high threshold class as 255 and the gray value of the pixel of the low threshold class as 0 to obtain the two-dimensional code pattern after the binarization processing.
Optionally, the target pattern determination subunit is specifically configured to:
clockwise rotating the two-dimensional code pattern subjected to binarization processing by 90 degrees, 180 degrees and 270 degrees to obtain two-dimensional code patterns in four directions; recognizing data information of the two-dimensional code patterns in the four directions, and sequentially connecting the data of the two-dimensional code patterns in each direction into a binary code, so as to obtain the binary codes in the four directions; and when one of the binary codes in the four directions is matched with a preset binary code, determining that the two-dimensional code pattern is a target two-dimensional code pattern.
Optionally, the rectangular contour extraction module is specifically configured to:
and extracting a rectangular image containing a rectangular outline in the target image after the image segmentation by performing polygon approximation on the target image after the image segmentation.
Optionally, the displacement determining module 430 includes:
the pose determining unit is used for determining the pose of the device for shooting the target image according to the projection relation between the coordinates of the target image of the four corner points and the coordinates under the coordinate system of the device for shooting the target image;
and the displacement calculation unit is used for determining the coordinate of the central point of the two-dimensional code pattern in a world coordinate system according to the pose and calculating the displacement of the bridge according to the coordinate of the central point in the world coordinate system.
Optionally, the displacement calculating unit is specifically configured to:
acquiring a conversion relation between a coordinate system of the device for shooting the target image and the world coordinate system; determining the coordinate of the central point of the two-dimensional code pattern under a world coordinate system according to the conversion relation and the pose; and calculating the displacement of the bridge according to the coordinates of the central point under the world coordinate system.
Optionally, the monitoring device for bridge displacement further includes:
and the image filtering module is used for filtering the target image according to a Kalman filtering algorithm before identifying the characteristic points of the two-dimensional code pattern.
The monitoring device for bridge displacement based on visual perception provided by the embodiment of the invention can execute the monitoring method for bridge displacement based on visual perception provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
EXAMPLE five
Fig. 5 is a schematic structural diagram of a monitoring system for bridge displacement based on visual perception according to a fifth embodiment of the present invention, as shown in fig. 5, the system includes: a two-dimensional code target 510, an image acquisition module 520, and an image processing module 530.
The two-dimensional code target 510 is arranged on a set plane of a bridge and is positioned on the same side of the bridge as the image acquisition module, wherein the two-dimensional code target comprises a two-dimensional code pattern; the image acquisition module 520 is used for acquiring a target image of the two-dimensional code target and sending the target image to the image processing module; the image processing module 530 is configured to receive the target image, identify a feature point of the target image, and determine a displacement of the bridge according to a coordinate of the feature point.
Optionally, the image acquisition module includes a monocular lens and an industrial camera.
Optionally, the image capturing module further comprises a tripod for fixing the industrial camera and the monocular lens.
Optionally, the image processing module 530 is further configured to execute the method for monitoring the bridge displacement according to any embodiment of the present invention.
EXAMPLE six
Fig. 6 is a schematic structural diagram of an apparatus according to a sixth embodiment of the present invention, as shown in fig. 6, the apparatus includes a processor 610, a memory 620, an input device 630, and an output device 640; the number of the device processors 610 may be one or more, and one processor 610 is taken as an example in fig. 6; the processor 610, the memory 620, the input device 630 and the output device 640 in the apparatus may be connected by a bus or other means, and fig. 6 illustrates an example of a connection by a bus.
The memory 620 is used as a computer-readable storage medium for storing software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the monitoring method for bridge displacement based on visual perception in the embodiment of the present invention (for example, the image obtaining module 410, the feature recognition module 420, and the displacement determination module 430 in the monitoring device for bridge displacement based on visual perception). The processor 610 executes various functional applications and data processing of the device/terminal/server by running software programs, instructions and modules stored in the memory 620, so as to implement the monitoring method of bridge displacement based on visual perception.
The memory 620 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 620 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, the memory 620 may further include memory located remotely from the processor 610, which may be connected to the device/terminal/server via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input means 630 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function controls of the device. The output device 640 may include a display device such as a display screen.
It should be noted that, in the embodiment of the monitoring device for bridge displacement based on visual perception, the included units and modules are only divided according to functional logic, but are not limited to the above division, as long as the corresponding functions can be realized; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (13)

1. A bridge displacement monitoring method based on visual perception is characterized by comprising the following steps:
acquiring a target image, wherein the target image comprises a rectangular two-dimensional code pattern;
identifying characteristic points of the two-dimensional code pattern;
and determining the displacement of the bridge according to the coordinates of the characteristic points.
2. The method of claim 1, wherein the two-dimensional code pattern is an ArUco mark.
3. The method of claim 1, wherein the identifying the feature points of the two-dimensional code pattern comprises:
determining whether the two-dimension code pattern is a target two-dimension code pattern;
and after the two-dimension code pattern is determined to be the target two-dimension code pattern, identifying four corner points of the two-dimension code pattern as characteristic points.
4. The method of claim 3, before determining that the two-dimensional code pattern is a target two-dimensional code pattern, further comprising:
performing image segmentation on the target image according to an adaptive threshold algorithm;
and extracting a rectangular image containing a rectangular outline in the target image after image segmentation, and determining that the rectangular image is the two-dimensional code pattern.
5. The method of claim 4, wherein determining that the two-dimensional code pattern is a target two-dimensional code pattern comprises:
converting the two-dimensional code pattern into a two-dimensional code pattern with a front view direction by calculating a homography matrix;
carrying out binarization processing on the two-dimensional code pattern with the front view direction;
and extracting the coding information of the two-dimensional code pattern after binarization processing, and determining whether the two-dimensional code pattern is a target two-dimensional code pattern according to the coding information.
6. The method according to claim 5, wherein after converting the two-dimensional code pattern into a two-dimensional code pattern with a front view, further comprising:
dividing pixels of the two-dimensional code pattern with the front view direction into a high threshold class larger than or equal to a set threshold value and a low threshold class smaller than the set threshold value according to the Otsu method;
correspondingly, the binarization processing is carried out on the two-dimensional code pattern with the front view direction, and the binarization processing comprises the following steps: and setting the gray value of the pixel of the high threshold class as 255 and the gray value of the pixel of the low threshold class as 0 to obtain the two-dimensional code pattern after the binarization processing.
7. The method according to claim 5, wherein extracting coding information of the two-dimensional code pattern after binarization processing, and determining whether the two-dimensional code pattern is a target two-dimensional code pattern according to the coding information comprises:
clockwise rotating the two-dimensional code pattern subjected to binarization processing by 90 degrees, 180 degrees and 270 degrees to obtain two-dimensional code patterns in four directions;
recognizing data information of the two-dimensional code patterns in the four directions, and sequentially connecting the data of the two-dimensional code patterns in each direction into a binary code, so as to obtain the binary codes in the four directions;
and when one of the binary codes in the four directions is matched with a preset binary code, determining that the two-dimensional code pattern is a target two-dimensional code pattern.
8. The method according to claim 4, wherein the extracting the rectangular image having the rectangular outline in the target image after the image segmentation comprises:
and extracting a rectangular image containing a rectangular outline in the target image after the image segmentation by performing polygon approximation on the target image after the image segmentation.
9. The method of claim 3, wherein determining the displacement of the bridge from the coordinates of the feature points comprises:
determining the pose of the device for shooting the target image according to the projection relation between the coordinates of the target image at the four corner points and the coordinates of the device for shooting the target image in a coordinate system;
and determining the central coordinate of the central point of the two-dimensional code pattern under a world coordinate system according to the pose, and calculating the displacement of the bridge according to the central coordinate.
10. The method according to claim 9, wherein determining the central coordinate of the central point of the two-dimensional code pattern under a world coordinate system according to the pose comprises:
acquiring a conversion relation between a coordinate system of the device for shooting the target image and the world coordinate system;
and determining the central coordinate of the central point of the two-dimensional code pattern under a world coordinate system according to the conversion relation and the pose.
11. The method according to claim 9, after determining coordinates of a center point of the two-dimensional code pattern in a world coordinate system according to the pose, further comprising:
and filtering the center coordinate according to a Kalman filtering algorithm to update the center coordinate.
12. A monitoring device of bridge displacement based on visual perception is characterized by comprising:
the image acquisition module is used for acquiring a target image, wherein the target image comprises a rectangular two-dimensional code pattern;
the characteristic identification module is used for identifying characteristic points of the two-dimensional code pattern;
and the displacement determining module is used for determining the displacement of the bridge according to the coordinates of the characteristic points.
13. An apparatus, characterized in that the apparatus comprises:
one or more processors;
a memory for storing one or more programs;
when executed by the one or more processors, cause the one or more processors to implement the method for monitoring bridge displacement based on visual perception according to any one of claims 1-11.
CN201910918209.9A 2019-09-26 2019-09-26 Bridge deformation monitoring method, device and equipment based on visual perception Pending CN110634138A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910918209.9A CN110634138A (en) 2019-09-26 2019-09-26 Bridge deformation monitoring method, device and equipment based on visual perception

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910918209.9A CN110634138A (en) 2019-09-26 2019-09-26 Bridge deformation monitoring method, device and equipment based on visual perception

Publications (1)

Publication Number Publication Date
CN110634138A true CN110634138A (en) 2019-12-31

Family

ID=68973154

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910918209.9A Pending CN110634138A (en) 2019-09-26 2019-09-26 Bridge deformation monitoring method, device and equipment based on visual perception

Country Status (1)

Country Link
CN (1) CN110634138A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111784785A (en) * 2020-05-28 2020-10-16 河海大学 A method of bridge dynamic displacement identification
CN112508982A (en) * 2020-12-04 2021-03-16 杭州鲁尔物联科技有限公司 Method for monitoring displacement of dam in hillside pond based on image recognition
CN112885096A (en) * 2021-02-05 2021-06-01 同济大学 Bridge floor traffic flow full-view-field sensing system and method depending on bridge arch ribs
CN113192063A (en) * 2021-05-25 2021-07-30 中铁第四勘察设计院集团有限公司 Bridge linear monitoring system and bridge linear monitoring method
CN115289982A (en) * 2022-09-28 2022-11-04 天津大学建筑设计规划研究总院有限公司 A visual monitoring method of structural plane displacement based on ArUco code
CN115355826A (en) * 2022-07-20 2022-11-18 上海同禾工程科技股份有限公司 A bridge monitoring system
CN116189938A (en) * 2022-12-23 2023-05-30 中国核动力研究设计院 Image method measuring system and method for measuring bending and twisting of nuclear fuel assembly
CN116678337A (en) * 2023-06-08 2023-09-01 交通运输部公路科学研究所 Monitoring and early warning system and method for the height difference at the front and rear fulcrums of the main girder of the bridge erecting machine and the deformation of the main girder based on image recognition

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106295641A (en) * 2016-08-09 2017-01-04 鞍钢集团矿业有限公司 A kind of slope displacement automatic monitoring method based on image SURF feature
CN107609451A (en) * 2017-09-14 2018-01-19 斯坦德机器人(深圳)有限公司 A kind of high-precision vision localization method and system based on Quick Response Code
CN107907064A (en) * 2017-11-03 2018-04-13 西安元智系统技术有限责任公司 A kind of monitoring fractures system and method
CN108775872A (en) * 2018-06-26 2018-11-09 南京理工大学 Deflection of bridge span detection method based on autozoom scan picture
JP2019152498A (en) * 2018-03-01 2019-09-12 株式会社共和電業 Out-of-plane displacement measuring method using two-dimensional grating pattern and device therefor

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106295641A (en) * 2016-08-09 2017-01-04 鞍钢集团矿业有限公司 A kind of slope displacement automatic monitoring method based on image SURF feature
CN107609451A (en) * 2017-09-14 2018-01-19 斯坦德机器人(深圳)有限公司 A kind of high-precision vision localization method and system based on Quick Response Code
CN107907064A (en) * 2017-11-03 2018-04-13 西安元智系统技术有限责任公司 A kind of monitoring fractures system and method
JP2019152498A (en) * 2018-03-01 2019-09-12 株式会社共和電業 Out-of-plane displacement measuring method using two-dimensional grating pattern and device therefor
CN108775872A (en) * 2018-06-26 2018-11-09 南京理工大学 Deflection of bridge span detection method based on autozoom scan picture

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
张国良等: "《移动机器人的SLAM与VSLAM方法》", 31 October 2018 *
裴耀东: "基于数字图像处理的桥梁结构裂缝与位移测量研究", 《中国优秀博硕士学位论文全文数据库(硕士)工程科技Ⅱ辑》 *
赵文一: "无人机视觉辅助自主降落系统研究", 《中国优秀博硕士学位论文全文数据库(硕士)工程科技Ⅱ辑》 *
高宏伟 等: "《电子制造装备技术》", 30 September 2015 *
黄建坤: "基于图像序列的桥梁形变位移测量方法", 《中国优秀博硕士学位论文全文数据库(硕士)工程科技Ⅱ辑》 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111784785A (en) * 2020-05-28 2020-10-16 河海大学 A method of bridge dynamic displacement identification
CN111784785B (en) * 2020-05-28 2022-08-12 河海大学 A bridge dynamic displacement identification method
CN112508982A (en) * 2020-12-04 2021-03-16 杭州鲁尔物联科技有限公司 Method for monitoring displacement of dam in hillside pond based on image recognition
CN112508982B (en) * 2020-12-04 2024-10-18 杭州鲁尔物联科技有限公司 Image recognition-based mountain pond dyke displacement monitoring method
CN112885096A (en) * 2021-02-05 2021-06-01 同济大学 Bridge floor traffic flow full-view-field sensing system and method depending on bridge arch ribs
CN113192063B (en) * 2021-05-25 2024-02-02 中铁第四勘察设计院集团有限公司 Bridge line-shaped monitoring system and bridge line-shaped monitoring method
CN113192063A (en) * 2021-05-25 2021-07-30 中铁第四勘察设计院集团有限公司 Bridge linear monitoring system and bridge linear monitoring method
CN115355826A (en) * 2022-07-20 2022-11-18 上海同禾工程科技股份有限公司 A bridge monitoring system
CN115289982A (en) * 2022-09-28 2022-11-04 天津大学建筑设计规划研究总院有限公司 A visual monitoring method of structural plane displacement based on ArUco code
CN116189938A (en) * 2022-12-23 2023-05-30 中国核动力研究设计院 Image method measuring system and method for measuring bending and twisting of nuclear fuel assembly
CN116189938B (en) * 2022-12-23 2024-02-27 中国核动力研究设计院 Image method measuring system and method for measuring bending and twisting of nuclear fuel assembly
CN116678337A (en) * 2023-06-08 2023-09-01 交通运输部公路科学研究所 Monitoring and early warning system and method for the height difference at the front and rear fulcrums of the main girder of the bridge erecting machine and the deformation of the main girder based on image recognition
CN116678337B (en) * 2023-06-08 2024-10-01 交通运输部公路科学研究所 Image recognition-based monitoring and early warning system and method for height difference and main beam deformation of bridge erection machine main beam at front and rear support points

Similar Documents

Publication Publication Date Title
CN110634138A (en) Bridge deformation monitoring method, device and equipment based on visual perception
CN110568447B (en) Visual positioning method, device and computer readable medium
CN111179358B (en) Calibration method, device, equipment and storage medium
CN111612760B (en) Method and device for detecting obstacles
CN111815707B (en) Point cloud determining method, point cloud screening method, point cloud determining device, point cloud screening device and computer equipment
CN113592989A (en) Three-dimensional scene reconstruction system, method, equipment and storage medium
CN110163025A (en) Two dimensional code localization method and device
JP7092615B2 (en) Shadow detector, shadow detection method, shadow detection program, learning device, learning method, and learning program
CN115376109B (en) Obstacle detection method, obstacle detection device, and storage medium
CN111815672A (en) Dynamic tracking control method, device and control equipment
CN112132900A (en) Visual repositioning method and system
CN112509058B (en) External parameter calculating method, device, electronic equipment and storage medium
CN112613107B (en) Method, device, storage medium and equipment for determining construction progress of pole and tower engineering
CN113984037A (en) Semantic map construction method based on target candidate box in any direction
CN113792645A (en) AI eyeball fusing image and laser radar
CN112686962A (en) Indoor visual positioning method and device and electronic equipment
CN112364693A (en) Barrier identification method, device and equipment based on binocular vision and storage medium
CN117711130A (en) Factory safety production supervision method and system based on 3D modeling and electronic equipment
CN118115582A (en) Method and system for identifying pose of crane hook of monocular camera tower based on YOLO
CN113628251B (en) Smart hotel terminal monitoring method
CN110110767A (en) A kind of characteristics of image optimization method, device, terminal device and readable storage medium storing program for executing
CN113313764B (en) Positioning method, positioning device, electronic equipment and storage medium
CN111738906B (en) Indoor road network generation method and device, storage medium and electronic equipment
CN116012227A (en) Image processing method, device, storage medium and processor
KR102249380B1 (en) System for generating spatial information of CCTV device using reference image information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 206-002, 2 / F, building 8, Xixi bafangcheng, Wuchang Street, Yuhang District, Hangzhou City, Zhejiang Province

Applicant after: HANGZHOU RUHR TECHNOLOGY Co.,Ltd.

Address before: A4-4-201, No. 643, Shuangliu, Zhuantang street, Xihu District, Hangzhou City, Zhejiang Province

Applicant before: HANGZHOU RUHR TECHNOLOGY Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20191231