CN111964680A - Real-time positioning method of inspection robot - Google Patents
Real-time positioning method of inspection robot
- Publication number: CN111964680A (application CN202010746224.2A)
- Authority: CN (China)
- Prior art keywords: two-dimensional code, positioning data, monocular camera, UWB, determining
- Prior art date: 2020-07-29
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G01C21/005 — Navigation; navigational instruments with correlation of navigation data from several sources, e.g. map or contour matching
- G01C21/206 — Instruments for performing navigational calculations specially adapted for indoor navigation
- G01S5/0263 — Hybrid positioning by combining or switching between positions derived from two or more separate positioning systems
- G01S5/0264 — Hybrid positioning by combining or switching between positions derived from two or more separate positioning systems, at least one of the systems being a non-radio wave positioning system
- G06V10/225 — Image preprocessing by selection of a specific region containing or referencing a pattern, based on a marking or identifier characterising the area
- G06V10/467 — Encoded features or binary features, e.g. local binary patterns [LBP]
- G06V20/10 — Terrestrial scenes
Abstract
The invention discloses a real-time positioning method for an inspection robot, comprising the following steps: each monocular camera attempts to identify the two-dimensional code, and it is determined whether each monocular camera can identify the two-dimensional code tag; when every monocular camera can identify the two-dimensional code tag, the identification result of each monocular camera for the tag is acquired, first positioning data of each monocular camera are determined based on the identification results, and the real-time position of the inspection robot is determined based on the first positioning data; when at least one monocular camera cannot identify the two-dimensional code tag, UWB positioning data of the UWB tag are acquired, the inspection robot is moved while second positioning data of each monocular camera are acquired, the UWB positioning data are compensated and corrected with the second positioning data, and the real-time position of the inspection robot is determined based on the compensated and corrected third positioning data. By combining UWB positioning with monocular vision positioning, the invention achieves high-precision positioning while reducing cost.
Description
Technical Field
The invention relates to the technical field of robots, in particular to a real-time positioning method of an inspection robot.
Background
The real-time navigation function of an inspection robot requires the system to determine the next driving direction from the robot's current position, which demands high positioning accuracy. The difficulty of inspection-robot positioning lies mainly in two aspects: first, the navigation technology — under complex environmental conditions, positioning accuracy varies with the navigation technology used; second, the data fusion technology — fusing the data collected by multiple sensors under different accuracy requirements makes the calculation process very complicated.
In recent years, with the continuous development of intelligent manufacturing, higher requirements have been placed on the positioning accuracy and flexible deployment of inspection robots, and laser radar (LiDAR), visual navigation, QR-code navigation and SLAM navigation have been applied in different fields. At present, most inspection-robot positioning methods use LiDAR: with LiDAR and other core sensors, the robot can build an indoor map of a completely unknown indoor environment and navigate autonomously, with an indoor positioning accuracy of about 20 mm. However, LiDAR is expensive — in some robots it accounts for one third of the total cost. In addition, SLAM technology is generally used for building the indoor map, mainly Visual SLAM (VSLAM) and LiDAR SLAM. VSLAM navigates indoors using a depth camera such as the Kinect; its working principle is to optically process the environment around the robot: the camera collects image information, and the processor links the collected images to the robot's actual position, completing autonomous navigation and positioning. However, the computational load of VSLAM is large, it places high performance demands on the robot system, and the map it generates is usually a point cloud that cannot be applied directly to path planning.
Disclosure of Invention
In order to solve the above problems, an object of the present invention is to provide a real-time positioning method for an inspection robot that combines UWB positioning with monocular vision positioning to achieve high-precision positioning while reducing cost.
The invention provides a real-time positioning method of an inspection robot, which comprises the following steps:
identifying the two-dimensional code with each indoor monocular camera, and determining whether each monocular camera can identify the two-dimensional code tag;
when every monocular camera can identify the two-dimensional code tag, acquiring the identification result of each monocular camera for the two-dimensional code tag, determining the first positioning data of each monocular camera based on the identification results, and determining the real-time position of the inspection robot based on the first positioning data of each monocular camera;
when at least one monocular camera cannot identify the two-dimensional code tag, acquiring UWB positioning data of the UWB tag, moving the inspection robot while acquiring second positioning data of each monocular camera, compensating and correcting the UWB positioning data of the UWB tag with the second positioning data of each monocular camera, and determining the real-time position of the inspection robot based on the compensated and corrected third positioning data;
wherein the two-dimensional code tag and the UWB tag are mounted on the inspection robot.
As a further improvement of the present invention, when every monocular camera can identify the two-dimensional code tag, acquiring the identification result of each monocular camera for the two-dimensional code tag, and determining the first positioning data of each monocular camera based on the identification results, includes:
acquiring a two-dimensional code image of the two-dimensional code tag captured by a monocular camera;
acquiring the identification result of the monocular camera for the two-dimensional code tag based on the two-dimensional code image;
determining the first positioning data of the monocular camera based on the identification result of the two-dimensional code tag;
and repeating the above for each camera in turn to determine the first positioning data of each monocular camera.
As a further improvement of the present invention, acquiring, based on the two-dimensional code image, the identification result of the monocular camera for the two-dimensional code tag includes:
carrying out binarization processing on the two-dimensional code image to obtain a first binary image;
extracting the outline of the two-dimensional code from the first binary image;
carrying out perspective transformation on the first binary image based on the extracted outline to obtain a second binary image;
acquiring the white bits and black bits of the second binary image through OTSU binarization;
and identifying the two-dimensional code tag according to the white bits and black bits of the second binary image to obtain the identification result of the two-dimensional code tag.
As a further improvement of the present invention, identifying the two-dimensional code tag according to the white bits and black bits of the second binary image to obtain the identification result of the two-dimensional code tag includes:
determining the dictionary type of the two-dimensional code tag according to the white bits and black bits of the second binary image;
and looking up the dictionary type of the two-dimensional code tag in a dictionary library to obtain the identification result of the two-dimensional code tag.
As a further improvement of the present invention, determining the first positioning data of the monocular camera based on the recognition result of the two-dimensional code tag includes:
sequentially acquiring image coordinate values of four corners of the two-dimensional code image in a clockwise direction by taking the mark point of the two-dimensional code image as a starting point, wherein the mark point of the two-dimensional code image is one of the four corners of the two-dimensional code image;
determining a rotation matrix R and a translation matrix T of an image coordinate system relative to a world coordinate system through a P3P algorithm;
obtaining world coordinate values of four corner points of the two-dimensional code image according to the rotation matrix R, the translation matrix T and image coordinate values of the four corner points of the two-dimensional code image;
and determining the world coordinate value of the two-dimensional code image as the first positioning data of the monocular camera according to the world coordinate values of the four corner points of the two-dimensional code image.
As a further improvement of the present invention, determining the real-time position of the inspection robot based on the first positioning data of each monocular camera includes:
determining a target positioning result according to each first positioning data;
and taking the target positioning result as the real-time position of the inspection robot.
As a further improvement of the present invention, when at least one monocular camera cannot recognize the two-dimensional code tag, acquiring the UWB positioning data of the UWB tag, moving the inspection robot and acquiring the second positioning data of each monocular camera, compensating and correcting the UWB positioning data of the UWB tag using the second positioning data of each monocular camera, and determining the real-time position of the inspection robot based on the compensated and corrected third positioning data, includes:
acquiring at least three groups of UWB positioning data of the UWB tag, and determining their first average $(x_u, y_u)$;
moving the inspection robot, acquiring at least three groups of second positioning data of each monocular camera, and determining their second average $(x_c, y_c)$;
determining an error value $(\Delta x, \Delta y) = (x_c - x_u,\; y_c - y_u)$ based on the first average $(x_u, y_u)$ and the second average $(x_c, y_c)$;
using the error value $(\Delta x, \Delta y)$ to compensate and correct another group of UWB positioning data $(x_m, y_m)$ of the UWB tag, obtaining third positioning data $(x_r, y_r) = (x_m + \Delta x,\; y_m + \Delta y)$, which is taken as the real-time position of the inspection robot.
As a further improvement of the invention, the method is also used for positioning a plurality of inspection robots simultaneously, wherein each inspection robot is provided with a different two-dimensional code tag and a different UWB tag.
The invention also provides an electronic device comprising a memory and a processor, the memory storing one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the method.
The invention also provides a computer-readable storage medium having stored thereon a computer program for execution by a processor to perform the method.
The invention has the beneficial effects that:
the method of the invention utilizes the monocular camera to identify the two-dimensional code label on the inspection robot, and realizes the high-precision positioning of the inspection robot through the conversion of the world coordinate system, the image coordinate system and the camera coordinate system, and the positioning precision can reach centimeter level. Simultaneously, combine UWB location data, to the unable region that covers of monocular camera, carry out data fusion, finally realize indoor full coverage high accuracy location, positioning accuracy can reach 15 mm. The invention can also realize the simultaneous positioning of a plurality of inspection robots.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is a schematic flowchart of a real-time positioning method for an inspection robot according to an exemplary embodiment of the present invention;
FIG. 2 is a schematic view of the positioning of a monocular camera according to an exemplary embodiment of the present invention;
fig. 3 is a schematic diagram of posture conversion according to an exemplary embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that, if directional indications (such as up, down, left, right, front, and back … …) are involved in the embodiment of the present invention, the directional indications are only used to explain the relative positional relationship between the components, the movement situation, and the like in a specific posture (as shown in the drawing), and if the specific posture is changed, the directional indications are changed accordingly.
In addition, in the description of the present invention, the terms used are for illustrative purposes only and are not intended to limit the scope of the present invention. The terms "comprises" and/or "comprising" specify the presence of stated elements, steps, operations, and/or components, but do not preclude the presence or addition of one or more other elements, steps, operations, and/or components. The terms "first," "second," and the like may be used to describe various elements but do not necessarily imply an order and do not limit those elements; they are only used to distinguish one element from another. Unless otherwise specified, "a plurality" means two or more. The drawings are only for purposes of illustrating the described embodiments of the invention. One skilled in the art will recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles described in the present application.
The real-time positioning method of the inspection robot in the embodiment of the invention is shown in fig. 1 and comprises the following steps:
s1, identifying the two-dimensional code with each indoor monocular camera, and determining whether each monocular camera can identify the two-dimensional code tag;
s2, when every monocular camera can identify the two-dimensional code tag, obtaining the identification result of each indoor monocular camera for the two-dimensional code tag;
s3, determining the first positioning data of each monocular camera based on the identification result of each monocular camera for the two-dimensional code tag;
s4, determining the real-time position of the inspection robot based on the first positioning data of each monocular camera;
s5, when at least one monocular camera cannot identify the two-dimensional code tag, acquiring the UWB positioning data of the UWB tag;
s6, moving the inspection robot and acquiring the second positioning data of each monocular camera;
s7, compensating and correcting the UWB positioning data of the UWB tag using the second positioning data of each monocular camera;
s8, determining the real-time position of the inspection robot based on the compensated and corrected third positioning data;
wherein the two-dimensional code tag and the UWB tag are mounted on the inspection robot, and the two-dimensional code tag comprises a black frame and a binary matrix inside the black frame.
The method combines UWB (ultra-wideband) positioning with monocular vision positioning. When every monocular camera can capture the two-dimensional code tag, the real-time coordinates of the inspection robot are determined from the positioning data of the monocular cameras; when at least one monocular camera is occluded (whether one camera or several cameras simultaneously) and cannot capture the two-dimensional code tag, the positioning data of the UWB tag are used for auxiliary positioning, the UWB positioning data are compensated and corrected, and the compensated and corrected positioning data are taken as the real-time coordinates of the inspection robot.
There may be, for example, four monocular cameras, mounted on the four indoor walls so that their fields of view (FOV) cover most of the room. The invention does not specifically limit the number or placement of the monocular cameras; the installation positions can be adjusted according to the size of the room and the field of view of each camera.
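For illustration only, the switching logic just described can be sketched in a few lines of Python; the helper callables `read_camera_fixes` and `read_uwb_fix` and the simple averaging fusion are assumptions of this sketch, not the disclosed implementation itself:

```python
# Minimal sketch of the camera/UWB switching logic. Assumes hypothetical
# helpers: read_camera_fixes() -> list of (x, y) or None per camera, and
# read_uwb_fix() -> (x, y); offset is the compensation factor (dx, dy).
def locate_robot(read_camera_fixes, read_uwb_fix, offset):
    fixes = read_camera_fixes()
    if all(f is not None for f in fixes):
        # Every camera sees the tag: fuse the first positioning data
        # (here simply averaged; the patent leaves the fusion rule open).
        xs, ys = zip(*fixes)
        return (sum(xs) / len(xs), sum(ys) / len(ys))
    # At least one camera is occluded: fall back to UWB, applying the
    # compensation offset computed from earlier paired samples.
    xm, ym = read_uwb_fix()
    dx, dy = offset
    return (xm + dx, ym + dy)
```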
In an optional implementation manner, acquiring the identification result of each monocular camera for the two-dimensional code tag, and determining the first positioning data of each monocular camera based on the identification results, includes:
acquiring a two-dimensional code image of the two-dimensional code tag captured by a monocular camera;
acquiring the identification result of the monocular camera for the two-dimensional code tag based on the two-dimensional code image;
determining the first positioning data of the monocular camera based on the identification result of the two-dimensional code tag;
and repeating the above for each camera in turn to determine the first positioning data of each monocular camera.
The two-dimensional code tag comprises a black frame and a binary matrix inside the black frame. The black frame facilitates fast detection of the tag in the two-dimensional code image, and the internal binary matrix enables fast identification and error correction of the tag.
In an optional implementation manner, obtaining, based on the two-dimensional code image, the identification result of the monocular camera for the two-dimensional code tag includes:
carrying out binarization processing on the two-dimensional code image to obtain a first binary image;
extracting the outline of the two-dimensional code from the first binary image;
carrying out perspective transformation on the first binary image based on the extracted outline to obtain a second binary image;
acquiring the white bits and black bits of the second binary image through OTSU binarization;
and identifying the two-dimensional code tag according to the white bits and black bits of the second binary image to obtain the identification result of the two-dimensional code tag.
OTSU's method is an algorithm for determining the optimal binarization threshold of an image, also called the maximum between-class variance method. It is simple to compute and is not affected by image brightness or contrast. The method divides an image into background and foreground according to its gray-level characteristics. Since variance measures the uniformity of the gray-level distribution, the larger the between-class variance between background and foreground, the larger the difference between the two parts of the image; when part of the foreground is misclassified as background, or vice versa, this difference shrinks. A segmentation that maximizes the between-class variance therefore minimizes the probability of misclassification.
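A minimal OpenCV sketch of this recognition pipeline (binarize, extract the tag outline, rectify by perspective transform, then OTSU-threshold the rectified patch to read its white and black bits) might look as follows; the 6x6 grid size, patch resolution, area threshold and corner ordering are illustrative assumptions:

```python
import cv2
import numpy as np

def read_tag_bits(gray, grid=6, area_min=400):
    # Step 1: binarize the camera image (first binary image).
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Step 2: extract candidate quadrilateral outlines (the black frame).
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.03 * cv2.arcLength(c, True), True)
        if len(approx) != 4 or cv2.contourArea(approx) < area_min:
            continue
        # Step 3: perspective-transform to a square patch (second binary image);
        # corner order is assumed consistent with the destination square.
        src = approx.reshape(4, 2).astype(np.float32)
        size = grid * 16
        dst = np.array([[0, 0], [size, 0], [size, size], [0, size]], np.float32)
        H = cv2.getPerspectiveTransform(src, dst)
        patch = cv2.warpPerspective(gray, H, (size, size))
        # Step 4: OTSU binarization of the patch, then sample each cell center
        # to obtain the white bits and black bits.
        _, pb = cv2.threshold(patch, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        cell = size // grid
        bits = (pb[cell // 2::cell, cell // 2::cell] > 0).astype(int)
        return bits
    return None
```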
In an optional implementation manner, identifying the two-dimensional code tag according to the white bits and black bits of the second binary image to obtain the identification result of the two-dimensional code tag includes:
determining the dictionary type of the two-dimensional code tag according to the white bits and black bits of the second binary image;
and looking up the dictionary type of the two-dimensional code tag in a dictionary library to obtain the identification result of the two-dimensional code tag.
A number of dictionary types are stored in the dictionary library; when the white bits and black bits of the second binary image match a certain dictionary type in the library, the matching result is obtained and used as the identification result of the two-dimensional code tag.
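Since a tag of this kind (black frame plus inner binary matrix matched against a dictionary) behaves much like an ArUco marker, the detection and dictionary lookup can, under that assumption, be delegated to OpenCV's aruco module; the dictionary choice below is an assumption, and newer OpenCV versions expose the same functionality through the ArucoDetector class:

```python
import cv2

# Sketch: detect markers of an assumed 4x4 dictionary and return their IDs
# (the "identification result") together with the pixel corners.
# Requires the opencv-contrib build of OpenCV.
def detect_tags(gray):
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    return corners, ids
```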
In an optional implementation manner, determining the first positioning data of the monocular camera based on the recognition result of the two-dimensional code tag includes:
sequentially acquiring image coordinate values of four corners of the two-dimensional code image in a clockwise direction by taking the mark point of the two-dimensional code image as a starting point, wherein the mark point of the two-dimensional code image is one of the four corners of the two-dimensional code image;
determining a rotation matrix R and a translation matrix T of an image coordinate system relative to a world coordinate system through a P3P algorithm;
obtaining world coordinate values of four corner points of the two-dimensional code image according to the rotation matrix R, the translation matrix T and image coordinate values of the four corner points of the two-dimensional code image;
and determining the world coordinate value of the two-dimensional code image as the first positioning data of the monocular camera according to the world coordinate values of the four corner points of the two-dimensional code image.
As shown in fig. 2, when a monocular camera is used for positioning, the positioning data are obtained in the image coordinate system, and a conversion between the image coordinate system and the world coordinate system is required to obtain positioning data in the world coordinate system.
A conversion relationship among the world coordinate system, the camera coordinate system, and the image coordinate system needs to be established, where Ow denotes the origin of the world coordinate system w, Oc the origin of the camera coordinate system c, and O the origin of the image coordinate system o.
Let the pose of the camera coordinate system c with respect to the world coordinate system w be $(^w_cR,\, {}^wP_c)$, and the pose of the image coordinate system o with respect to the camera coordinate system c be $(^c_oR,\, {}^cP_o)$. The pose of the image coordinate system o relative to the world coordinate system w then follows by composition:
$$^w_oR = {}^w_cR\,{}^c_oR, \qquad {}^wP_o = {}^w_cR\,{}^cP_o + {}^wP_c$$
where $^w_cR$ denotes the rotation matrix of the camera coordinate system c relative to the world coordinate system w, $^c_oR$ the rotation matrix of the image coordinate system o relative to the camera coordinate system c, $^wP_c$ the translation of the camera coordinate system c relative to the world coordinate system w, and $^cP_o$ the translation of the image coordinate system o relative to the camera coordinate system c.
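In code, this composition is simply a product of homogeneous transforms. A minimal numpy sketch, with frame names following the text (w = world, c = camera, o = image/tag frame) and the calibration values assumed given:

```python
import numpy as np

def make_T(R, p):
    # Build a 4x4 homogeneous transform from rotation R (3x3) and translation p (3,).
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

# ^w_oT = ^w_cT @ ^c_oT: pose of the image/tag frame in the world frame,
# composed from the camera-in-world and tag-in-camera poses.
def compose(R_wc, p_wc, R_co, p_co):
    return make_T(R_wc, p_wc) @ make_T(R_co, p_co)
```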
The invention uses the P3P algorithm for pose estimation to obtain the rotation matrix R and the translation matrix T, and then obtains the positioning data of the monocular camera from R and T. Under the pinhole model, the P3P algorithm relates the 3D points to rays through their image projections that converge at the optical center. As shown in fig. 3, let O be the optical center, $P_1$, $P_2$, $P_3$ three world points (corner points of the two-dimensional code), and $p_1$, $p_2$, $p_3$ the corresponding imaging points in the image coordinate system. By the law of cosines:
$$|OP_1|^2 + |OP_2|^2 - 2|OP_1||OP_2|\cos\langle p_1,p_2\rangle = |P_1P_2|^2$$
$$|OP_2|^2 + |OP_3|^2 - 2|OP_2||OP_3|\cos\langle p_2,p_3\rangle = |P_2P_3|^2$$
$$|OP_1|^2 + |OP_3|^2 - 2|OP_1||OP_3|\cos\langle p_1,p_3\rangle = |P_1P_3|^2$$
Dividing by $|OP_3|^2$ and substituting $x = |OP_1|/|OP_3|$, $y = |OP_2|/|OP_3|$, $u = |P_2P_3|^2/|P_1P_2|^2$, $w = |P_1P_3|^2/|P_1P_2|^2$ — where u and w are fixed by the world coordinate values of the corner points of the two-dimensional code image — further yields:
$$(1-u)y^2 - ux^2 - 2y\cos\langle p_2,p_3\rangle + 2uxy\cos\langle p_1,p_2\rangle + 1 = 0$$
$$(1-w)x^2 - wy^2 - 2x\cos\langle p_1,p_3\rangle + 2wxy\cos\langle p_1,p_2\rangle + 1 = 0$$
When the three cosine values $\cos\langle p_1,p_2\rangle$, $\cos\langle p_2,p_3\rangle$ and $\cos\langle p_1,p_3\rangle$ are known from the image coordinate values of the corner points of the two-dimensional code image, x and y can be solved, and the world coordinate values of the corner points can then be obtained from their image coordinate values.
The marking point of the two-dimensional code image is the starting point for detecting the four corner points and also serves as the positioning coordinate point; the four corner points are extracted clockwise. After the world coordinate values of the four corner points are obtained, a target world coordinate value must be determined as the world coordinate value of the two-dimensional code image, giving the first positioning data of the monocular camera. For example, the world coordinate values of the four corner points may be compared and their optimal solution selected as the target world coordinate value; or the average of the four corner world coordinate values may be used; or the target may be determined by processing the four corner world coordinate values with other algorithms such as compensation or weighting. The invention does not specifically limit how the target world coordinate value is determined from the world coordinate values of the four corner points.
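As a hedged sketch of this step: OpenCV's solvePnP with the P3P flag recovers R and T from the four clockwise corner pixels and the known tag side length, after which the corners can be mapped into the world frame. The camera intrinsics `K`, `dist` and extrinsics `R_wc`, `p_wc` are assumed to come from prior calibration, and averaging the corners is just one of the options the text mentions:

```python
import cv2
import numpy as np

def tag_world_position(img_corners, tag_size, K, dist, R_wc, p_wc):
    # Tag corners in the tag's own frame, clockwise from the marking point.
    s = tag_size / 2.0
    obj = np.array([[-s, s, 0], [s, s, 0], [s, -s, 0], [-s, -s, 0]], np.float32)
    # OpenCV's P3P solver needs exactly 4 point pairs; it returns the tag
    # pose (rvec, tvec) expressed in the camera frame.
    ok, rvec, tvec = cv2.solvePnP(obj, img_corners.astype(np.float32), K, dist,
                                  flags=cv2.SOLVEPNP_P3P)
    if not ok:
        return None
    R_ct, _ = cv2.Rodrigues(rvec)                        # tag frame -> camera frame
    corners_cam = (R_ct @ obj.T).T + tvec.reshape(1, 3)  # corners in camera frame
    corners_world = (R_wc @ corners_cam.T).T + p_wc      # corners in world frame
    # Average of the four corner world coordinates as the target value.
    return corners_world.mean(axis=0)
```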
In an optional embodiment, determining the real-time position of the inspection robot based on the first positioning data of the monocular cameras includes:
determining a target positioning result according to each first positioning data;
and taking the target positioning result as the real-time position of the inspection robot.
The invention provides a plurality of monocular cameras, each of which identifies the two-dimensional code tag to obtain first positioning data. After the first positioning data are obtained, a target positioning result is determined as the real-time position of the inspection robot. For example, the first positioning data may be averaged, the residual of each first positioning datum with respect to the average computed, and the first positioning datum with the smallest residual taken as the optimal, i.e. target, positioning result. Alternatively, the average itself may be used as the target positioning result, or the target positioning result may be computed from the first positioning data with other algorithms such as compensation or weighting. The invention does not specifically limit how the target positioning result is determined from the first positioning data.
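The minimum-residual selection described above reduces to a few lines of numpy; this sketch simply returns the first positioning datum closest to the mean:

```python
import numpy as np

def pick_target(first_fixes):
    # first_fixes: array of shape (n_cameras, 2) of first positioning data.
    fixes = np.asarray(first_fixes, dtype=float)
    mean = fixes.mean(axis=0)
    residuals = np.linalg.norm(fixes - mean, axis=1)
    # The fix with the smallest residual is the target positioning result.
    return fixes[np.argmin(residuals)]
```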
In an optional implementation manner, when at least one monocular camera cannot recognize the two-dimensional code tag, acquiring the UWB positioning data of the UWB tag, moving the inspection robot to acquire the second positioning data of each monocular camera, compensating and correcting the UWB positioning data of the UWB tag using the second positioning data of each monocular camera, and determining the real-time position of the inspection robot based on the compensated and corrected third positioning data includes:
acquiring at least three groups of UWB positioning data of the UWB tag, and determining their first average $(x_u, y_u)$;
moving the inspection robot, acquiring at least three groups of second positioning data of each monocular camera, and determining their second average $(x_c, y_c)$;
determining an error value $(\Delta x, \Delta y) = (x_c - x_u,\; y_c - y_u)$ based on the first average $(x_u, y_u)$ and the second average $(x_c, y_c)$;
using the error value $(\Delta x, \Delta y)$ to compensate and correct another group of UWB positioning data $(x_m, y_m)$ of the UWB tag, obtaining third positioning data $(x_r, y_r) = (x_m + \Delta x,\; y_m + \Delta y)$, which is taken as the real-time position of the inspection robot.
Because the field of view of a monocular camera is easily blocked by obstacles, a monocular camera may fail to identify the two-dimensional code tag on the body of the inspection robot. In this case, the UWB positioning data of the UWB tag must be read as the real-time position of the inspection robot. When the monocular cameras can accurately identify the two-dimensional code tag, their positioning data are read as the real-time position, so that the positioning requirements of the inspection robot in complex environments can be met.
The UWB positioning data of the UWB tag are acquired by UWB anchor points (e.g., four UWB anchor points) located indoors and paired with the UWB tag; the four UWB anchor points may, for example, be mounted on the four indoor walls. The invention adopts a two-way ranging method when acquiring the positioning data of the UWB tag: the time of flight (TOF) from the UWB tag to a UWB anchor point is measured and multiplied by the speed of light to obtain the distance between the anchor point and the tag.
For example, four UWB anchor points are deployed for positioning the UWB tag. Assume the coordinates of UWB anchor point A are $(x_1, y_1, z_1)$, of anchor point B $(x_2, y_2, z_2)$, of anchor point C $(x_3, y_3, z_3)$, of anchor point D $(x_4, y_4, z_4)$, and the unknown coordinates of the UWB tag are $(x, y, z)$. Then:
$$(x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2 = R_i^2, \quad i = 1, 2, 3, 4$$
Expanding these equations and subtracting the first equation from the second, third and fourth eliminates the quadratic terms, yielding the linear system:
$$2(x_i - x_1)x + 2(y_i - y_1)y + 2(z_i - z_1)z = R_1^2 - R_i^2 + x_i^2 - x_1^2 + y_i^2 - y_1^2 + z_i^2 - z_1^2, \quad i = 2, 3, 4$$
which can be written in matrix form as $AX = b$ with $X = (x, y, z)^T$. Here $R_1$, $R_2$, $R_3$ and $R_4$ denote the measured distances from UWB anchor points A, B, C and D to the UWB tag, so the positioning data of the UWB tag can be obtained from the coordinates of the four anchor points.
Because the monocular cameras and the UWB tag use different positioning methods, a handover is needed: when at least one monocular camera cannot identify the two-dimensional code tag on the body of the inspection robot, the UWB positioning data of the UWB tag must be used to position the robot.
Because both the monocular cameras and the UWB tag produce positioning data at millisecond intervals while the inspection robot moves at 0.2 m/s, at least three consecutive groups of monocular camera positioning data can be regarded as measurements taken at the same position, and these consecutive monocular camera positioning data are used to compensate and correct the UWB positioning data.
For example, when switching between monocular camera positioning data and UWB positioning data, the first three groups of UWB positioning data $(x_{u1}, y_{u1})$, $(x_{u2}, y_{u2})$, $(x_{u3}, y_{u3})$ are acquired and their first average $(x_u, y_u)$ determined; the first three groups of second positioning data of the monocular cameras $(x_{c1}, y_{c1})$, $(x_{c2}, y_{c2})$, $(x_{c3}, y_{c3})$ are acquired and their second average $(x_c, y_c)$ determined; and the first average is subtracted from the second average to obtain the error value:
$$(\Delta x, \Delta y) = (x_c - x_u,\; y_c - y_u)$$
$(\Delta x, \Delta y)$ is used as the compensation factor for the UWB positioning data: the fourth group of UWB positioning data $(x_m, y_m)$ is compensated and corrected by the compensation factor to obtain the third positioning data $(x_r, y_r) = (x_m + \Delta x,\; y_m + \Delta y)$, which is taken as the real-time position of the inspection robot.
The invention calculates the compensation factor from the average of the second positioning data and the average of the UWB positioning data; how many groups of second positioning data and UWB positioning data are used to calculate the compensation factor is not specifically limited.
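A compact numpy sketch of this handover correction, under the stated assumption that the paired samples are taken at effectively the same position:

```python
import numpy as np

def compensate(uwb_samples, cam_samples, next_uwb_fix):
    # uwb_samples, cam_samples: lists of (x, y) pairs taken while both
    # sources were available; next_uwb_fix: the new UWB-only fix (x_m, y_m).
    xu, yu = np.mean(uwb_samples, axis=0)   # first average
    xc, yc = np.mean(cam_samples, axis=0)   # second average
    dx, dy = xc - xu, yc - yu               # compensation factor (dx, dy)
    xm, ym = next_uwb_fix
    return (xm + dx, ym + dy)               # third positioning data (x_r, y_r)
```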
In an optional embodiment, the method is further used for positioning a plurality of inspection robots at the same time, wherein each inspection robot is provided with a different two-dimensional code tag and a different UWB tag.
To evaluate accuracy, the inspection robot can be fixed in turn at a number of indoor coordinate measurement points (for example, 30 points arranged at intervals of 120 cm), whose coordinate values form a coordinate set. As the inspection robot runs along a preset path, the positioning data of the monocular cameras and of the UWB tag are recorded, yielding 30 sets of static data. The measurement is repeated many times (for example, 100 times) at each coordinate measurement point, and the Euclidean distance between the positioning data and the coordinate value of that point in the coordinate set is taken as the positioning error:
$$e_i = \sqrt{(x_i - x)^2 + (y_i - y)^2}$$
where $(x, y)$ denotes the coordinate value of a coordinate measurement point, $(x_i, y_i)$ the positioning data obtained at that point, and i the measurement index. The measurements show that the method achieves a dynamic positioning accuracy of 15 mm.
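The evaluation reduces to a per-point Euclidean error; as a short numpy sketch:

```python
import numpy as np

def positioning_errors(truth_xy, fixes_xy):
    # truth_xy: (2,) coordinate of a measurement point; fixes_xy: (n, 2)
    # repeated positioning data at that point (e.g. n = 100 repetitions).
    d = np.asarray(fixes_xy, dtype=float) - np.asarray(truth_xy, dtype=float)
    return np.linalg.norm(d, axis=1)  # Euclidean positioning error per repetition
```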
The method of the invention uses monocular cameras to identify the two-dimensional code tag on the inspection robot and achieves high-precision positioning of the robot through conversion among the world, image and camera coordinate systems, with centimeter-level accuracy. At the same time, UWB positioning data are fused in for the regions the monocular cameras cannot cover, finally achieving full-coverage indoor high-precision positioning with an accuracy of up to 15 mm. The invention can also position a plurality of inspection robots simultaneously.
The disclosure also relates to an electronic device comprising a server, a terminal and the like. The electronic device includes: at least one processor; a memory communicatively coupled to the at least one processor; and a communication component communicatively coupled to the storage medium, the communication component receiving and transmitting data under control of the processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to implement the real-time positioning method in the above embodiments.
In an alternative embodiment, the memory is used as a non-volatile computer-readable storage medium for storing non-volatile software programs, non-volatile computer-executable programs, and modules. The processor executes various functional applications and data processing of the device, i.e., implements a real-time positioning method, by running non-volatile software programs, instructions, and modules stored in the memory.
The memory may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store a list of options, etc. Further, the memory may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the memory optionally includes memory located remotely from the processor, and such remote memory may be connected to the external device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
One or more modules are stored in the memory and, when executed by the one or more processors, perform the real-time positioning method of any of the method embodiments described above.
The product can execute the real-time positioning method provided by the embodiment of the application, has corresponding functional modules and beneficial effects of the execution method, and can refer to the real-time positioning method provided by the embodiment of the application without detailed technical details in the embodiment.
The present disclosure also relates to a computer-readable storage medium for storing a computer-readable program for causing a computer to perform some or all of the above embodiments of the real-time positioning method.
That is, as can be understood by those skilled in the art, all or part of the steps in the real-time positioning method according to the foregoing embodiments may be implemented by a program instructing the related hardware: the program is stored in a storage medium and includes several instructions that enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the method according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Furthermore, those of ordinary skill in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
It will be understood by those skilled in the art that while the present invention has been described with reference to exemplary embodiments, various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed, but that the invention will include all embodiments falling within the scope of the appended claims.
Claims (10)
1. A real-time positioning method of an inspection robot is characterized by comprising the following steps:
identifying the two-dimensional code with each indoor monocular camera, and determining whether each monocular camera can identify the two-dimensional code tag;
when every monocular camera can identify the two-dimensional code tag, acquiring the identification result of each monocular camera for the two-dimensional code tag, determining first positioning data of each monocular camera based on the identification results, and determining the real-time position of the inspection robot based on the first positioning data of each monocular camera;
when at least one monocular camera cannot identify the two-dimensional code tag, acquiring UWB positioning data of the UWB tag, moving the inspection robot while acquiring second positioning data of each monocular camera, compensating and correcting the UWB positioning data of the UWB tag with the second positioning data of each monocular camera, and determining the real-time position of the inspection robot based on the compensated and corrected third positioning data;
wherein the two-dimensional code tag and the UWB tag are mounted on the inspection robot.
2. The method of claim 1, wherein, when every monocular camera can identify the two-dimensional code tag, acquiring the identification result of each monocular camera for the two-dimensional code tag, and determining the first positioning data of each monocular camera based on the identification results, comprises:
acquiring a two-dimensional code image of the two-dimensional code tag captured by a monocular camera;
acquiring the identification result of the monocular camera for the two-dimensional code tag based on the two-dimensional code image;
determining the first positioning data of the monocular camera based on the identification result of the two-dimensional code tag;
and repeating the above for each camera in turn to determine the first positioning data of each monocular camera.
3. The method of claim 2, wherein acquiring the identification result of the monocular camera for the two-dimensional code tag based on the two-dimensional code image comprises:
carrying out binarization processing on the two-dimensional code image to obtain a first binary image;
extracting the outline of the two-dimensional code from the first binary image;
carrying out perspective transformation on the first binary image based on the extracted outline to obtain a second binary image;
acquiring the white bits and black bits of the second binary image through OTSU binarization;
and identifying the two-dimensional code tag according to the white bits and black bits of the second binary image to obtain the identification result of the two-dimensional code tag.
4. The method of claim 3, wherein identifying the two-dimensional code tag according to the white bits and black bits of the second binary image to obtain the identification result of the two-dimensional code tag comprises:
determining the dictionary type of the two-dimensional code tag according to the white bits and black bits of the second binary image;
and looking up the dictionary type of the two-dimensional code tag in a dictionary library to obtain the identification result of the two-dimensional code tag.
5. The method of claim 2, wherein determining the first positioning data of the monocular camera based on the recognition result of the two-dimensional code tag comprises:
sequentially acquiring image coordinate values of four corners of the two-dimensional code image in a clockwise direction by taking the mark point of the two-dimensional code image as a starting point, wherein the mark point of the two-dimensional code image is one of the four corners of the two-dimensional code image;
determining a rotation matrix R and a translation matrix T of an image coordinate system relative to a world coordinate system through a P3P algorithm;
obtaining world coordinate values of four corner points of the two-dimensional code image according to the rotation matrix R, the translation matrix T and image coordinate values of the four corner points of the two-dimensional code image;
and determining the world coordinate value of the two-dimensional code image as the first positioning data of the monocular camera according to the world coordinate values of the four corner points of the two-dimensional code image.
6. The method of claim 1, wherein determining the real-time position of the inspection robot based on the first positioning data of the respective monocular cameras comprises:
determining a target positioning result according to each first positioning data;
and taking the target positioning result as the real-time position of the inspection robot.
7. The method of claim 1, wherein when the two-dimensional code tag cannot be recognized by at least one monocular camera, acquiring UWB positioning data of the UWB tag, moving the inspection robot and acquiring second positioning data of the monocular cameras, compensating and correcting the UWB positioning data of the UWB tag using the second positioning data of the monocular cameras, and determining a real-time position of the inspection robot based on the compensated and corrected third positioning data, comprises:
acquiring at least three groups of UWB positioning data of the UWB tag, and determining their first average $(x_u, y_u)$;
moving the inspection robot, acquiring at least three groups of second positioning data of each monocular camera, and determining their second average $(x_c, y_c)$;
determining an error value $(\Delta x, \Delta y) = (x_c - x_u,\; y_c - y_u)$ based on the first average $(x_u, y_u)$ and the second average $(x_c, y_c)$;
using the error value $(\Delta x, \Delta y)$ to compensate and correct another group of UWB positioning data $(x_m, y_m)$ of the UWB tag, obtaining third positioning data $(x_r, y_r) = (x_m + \Delta x,\; y_m + \Delta y)$, which is taken as the real-time position of the inspection robot.
8. The method of claim 1, wherein the method is further used for simultaneously positioning a plurality of inspection robots, wherein each inspection robot is provided with a different two-dimensional code tag and a different UWB tag.
9. An electronic device comprising a memory and a processor, wherein the memory is configured to store one or more computer instructions, wherein the one or more computer instructions are executed by the processor to implement the method of any one of claims 1-8.
10. A computer-readable storage medium, on which a computer program is stored, the computer program being executable by a processor for implementing the method according to any one of claims 1-8.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010746224.2A CN111964680B (en) | 2020-07-29 | 2020-07-29 | Real-time positioning method of inspection robot |
LU500407A LU500407B1 (en) | 2020-07-29 | 2021-07-07 | Real-time positioning method for inspection robot |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010746224.2A CN111964680B (en) | 2020-07-29 | 2020-07-29 | Real-time positioning method of inspection robot |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111964680A true CN111964680A (en) | 2020-11-20 |
CN111964680B CN111964680B (en) | 2021-05-18 |
Family
ID=73363123
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010746224.2A Active CN111964680B (en) | 2020-07-29 | 2020-07-29 | Real-time positioning method of inspection robot |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111964680B (en) |
LU (1) | LU500407B1 (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109100738A (en) * | 2018-08-20 | 2018-12-28 | 武汉理工大学 | One kind being based on reliable alignment system combined of multi-sensor information and method |
US20200206921A1 (en) * | 2018-12-30 | 2020-07-02 | Ubtech Robotics Corp | Robot movement control method, apparatus and robot using the same |
CN110163912A (en) * | 2019-04-29 | 2019-08-23 | 达泊(东莞)智能科技有限公司 | Two dimensional code pose scaling method, apparatus and system |
CN110262507A (en) * | 2019-07-04 | 2019-09-20 | 杭州蓝芯科技有限公司 | A kind of camera array robot localization method and device based on 5G communication |
CN110345937A (en) * | 2019-08-09 | 2019-10-18 | 东莞市普灵思智能电子有限公司 | Appearance localization method and system are determined in a kind of navigation based on two dimensional code |
CN110879071A (en) * | 2019-12-06 | 2020-03-13 | 成都云科新能汽车技术有限公司 | High-precision positioning system and positioning method based on vehicle-road cooperation |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113240750A (en) * | 2021-05-13 | 2021-08-10 | 中移智行网络科技有限公司 | Three-dimensional space information measuring and calculating method and device |
CN113516708A (en) * | 2021-05-25 | 2021-10-19 | 中国矿业大学 | Power transmission line inspection unmanned aerial vehicle accurate positioning system and method based on image recognition and UWB positioning fusion |
CN113516708B (en) * | 2021-05-25 | 2024-03-08 | 中国矿业大学 | Power transmission line inspection unmanned aerial vehicle accurate positioning system and method based on image recognition and UWB positioning fusion |
CN113642687A (en) * | 2021-07-16 | 2021-11-12 | 国网上海市电力公司 | Substation inspection indoor position calculation method integrating two-dimensional code identification and inertial system |
CN114723821A (en) * | 2022-03-18 | 2022-07-08 | 深圳市中舟智能科技有限公司 | Checkerboard label-based robot indoor positioning method and device |
CN116559172A (en) * | 2023-04-23 | 2023-08-08 | 兰州交通大学 | Unmanned aerial vehicle-based steel bridge welding seam detection method and system |
Also Published As
Publication number | Publication date |
---|---|
CN111964680B (en) | 2021-05-18 |
LU500407B1 (en) | 2022-01-07 |
Legal Events
- PB01 — Publication
- SE01 — Entry into force of request for substantive examination
- GR01 — Patent grant