
CN117333553A - Feature-based detection robot camera offset error compensation method - Google Patents

Feature-based detection robot camera offset error compensation method Download PDF

Info

Publication number
CN117333553A
CN117333553A (application CN202311143967.0A)
Authority
CN
China
Prior art keywords
images
feature
image
coordinates
gps coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311143967.0A
Other languages
Chinese (zh)
Inventor
贾鸿顺
钟新然
范崇霄
尹兆宇
桂仲成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Guimu Robot Co ltd
Original Assignee
Anhui Guimu Robot Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Guimu Robot Co ltd filed Critical Anhui Guimu Robot Co ltd
Priority to CN202311143967.0A priority Critical patent/CN117333553A/en
Publication of CN117333553A publication Critical patent/CN117333553A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42Determining position
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/40Correcting position, velocity or attitude
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Position Fixing By Use Of Radio Waves (AREA)

Abstract

The invention discloses a feature-based detection robot camera offset error compensation method, which comprises the following steps: S1: the robot collects n batches of images and GPS data in an experimental field; S2: each image is positioned and orientation-corrected according to its GPS coordinates; S3: the images are sorted by batch number, features are extracted and matched, and the coordinates of successfully matched feature points are recorded; S4: the feature point coordinates are converted into GPS coordinates; S5: a cost function is established and the optimal camera offset compensation parameters are solved. The beneficial effects of the invention are as follows: the feature information of the images is used to establish a matching relation between images, a cost function in the offset error compensation parameters is constructed, and the optimal offset error compensation parameters are solved by minimizing the error, thereby replacing manual calibration and improving efficiency and accuracy.

Description

Feature-based detection robot camera offset error compensation method
Technical Field
The invention relates to the technical field of image processing, and in particular to a feature-based detection robot camera offset error compensation method.
Background
At present, in highway and airport maintenance, a road surface detection robot automatically collects road surface image data; the images are stitched into a map, defects are automatically identified, and a map of the defect distribution is generated, which greatly improves the efficiency of road surface defect detection. When collecting data, the road surface detection robot scans and images the road surface with a line-scan camera and records positioning data in real time with a GPS device. To produce a high-precision map, the offset between the camera installation position and the positioning center of the GPS device must be accurately calibrated, so that the exact GPS coordinates of the camera center are known at the moment each image is collected. In daily operation and maintenance, however, the camera installation position often shifts, introducing a certain degree of offset error.
At present, the offset compensation parameters mainly depend on manual calibration: starting from the known camera installation design parameters, image data and GPS data are collected back and forth in an experimental field; the images are then stitched and rendered into a map, the image misalignment distance is measured on the map, and the camera offset compensation parameters are adjusted; this is repeated until the map misalignment is small enough. This method relies on manual calibration and requires repeated adjustment of the camera offset compensation parameters, which is inefficient and inaccurate.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a feature-based detection robot camera offset error compensation method.
The aim of the invention is achieved by the following technical scheme: a feature-based detection robot camera offset error compensation method comprises the following steps:
S1: the detection robot collects n batches of images and GPS data in an experimental field;
S2: each image is positioned and orientation-corrected according to its GPS coordinates;
S3: the images are sorted by batch number, features are extracted and matched, and the coordinates of successfully matched feature points are recorded;
S4: the feature point coordinates are converted into GPS coordinates;
S5: a cost function is established and the optimal camera offset compensation parameters are solved.
Preferably, in step S1, the robot collects n batches back and forth in the experimental field, and the image deflection angles of different batches differ by 180°.
Preferably, in step S2, the method further includes the steps of:
S21: calculating the deflection angle of each image from successive GPS coordinates, wherein the lateral ground resolution gsd_h of the image is fixed and known, as is the longitudinal ground resolution gsd_v; obtaining the GPS coordinates corresponding to the center point of each image and the GPS coordinates corresponding to its four corner points, and recording the GPS coordinate range spanned by the four corner points;
S22: calculating a homography transformation matrix from the GPS coordinates and pixel coordinates of the four corner points, and transforming the image to obtain a corrected image; arranging the m images and denoting the corresponding upper-left geographic coordinate as g_i(x_g, y_g) (i = 1, 2, ..., m).
Preferably, in step S22, the longitudinal direction of the corrected image is due north.
Preferably, in step S3, the images are sorted by batch number in ascending order, and the image feature points are extracted with the SIFT feature extraction algorithm.
Preferably, in step S4, the method further includes the steps of:
S41: for a pair of successfully matched images img1 and img2, the corresponding feature points are p(x_1, y_1) and p'(x_2, y_2), the corresponding upper-left GPS coordinates are g_1(x_g, y_g) and g_2(x_g, y_g), and the corresponding deflection angles are γ_1 and γ_2; the formula for converting the pixel coordinates of p(x_1, y_1) into GPS coordinates Ĝ_1 is: [formula omitted in source]
S42: Ĝ_2, the GPS coordinates of p'(x_2, y_2), is calculated in the same way as in step S41.
Preferably, in step S5, the method further comprises the steps of:
S51: after the feature points on image img1 are converted into GPS coordinates, the coordinates are corrected by the offset compensation parameters to obtain Ĝ'_1; the calculation formula is: [formula omitted in source]
S52: the corrected GPS coordinates Ĝ'_2 of the feature points on img2 are calculated in the same way as in step S51;
S53: establishing a cost function between the feature point pairs: [formula omitted in source]
where M_j is the j-th (j = 1, 2, ..., k) pair of matched feature points, k is the number of successfully matched feature point pairs, and e is the error sum.
The invention has the following advantages: according to the invention, the matching relation between the images is established by utilizing the characteristic information of the images, the cost function related to the offset error compensation parameter is constructed, and the optimal offset error compensation parameter is solved by minimizing the error, so that manual calibration is replaced, and the efficiency and accuracy are improved.
Drawings
FIG. 1 is a schematic flowchart of the feature-based detection robot camera offset error compensation method;
FIG. 2 is a schematic diagram of the image distribution of different batches.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. The components of the embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the invention, as presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, based on the embodiments of the invention, which are apparent to those of ordinary skill in the art without inventive faculty, are intended to be within the scope of the invention.
In addition, the embodiments of the present invention and the features of the embodiments may be combined with each other without collision.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures.
In the description of the present invention, it should be noted that, directions or positional relationships indicated by terms such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., are directions or positional relationships based on those shown in the drawings, or are directions or positional relationships conventionally put in use of the inventive product, or are directions or positional relationships conventionally understood by those skilled in the art, are merely for convenience of describing the present invention and for simplifying the description, and are not to indicate or imply that the apparatus or element to be referred to must have a specific direction, be constructed and operated in a specific direction, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and the like, are used merely to distinguish between descriptions and should not be construed as indicating or implying relative importance.
In the description of the present invention, it should also be noted that, unless explicitly specified and limited otherwise, the terms "disposed," "mounted," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
In this embodiment, as shown in fig. 1, a method for compensating an offset error of a feature-based detection robot camera includes the following steps:
S1: the detection robot collects n batches of images and GPS data in an experimental field;
S2: each image is positioned and orientation-corrected according to its GPS coordinates;
S3: the images are sorted by batch number, features are extracted and matched, and the coordinates of successfully matched feature points are recorded;
S4: the feature point coordinates are converted into GPS coordinates;
S5: a cost function is established and the optimal camera offset compensation parameters are solved. Using the feature information of the images, a matching relation between images is established, a cost function in the offset error compensation parameters is constructed, and the optimal offset error compensation parameters are solved by minimizing the error, thereby replacing manual calibration and improving efficiency and accuracy.
Further, as shown in fig. 2, in step S1, the robot collects n batches back and forth in the experimental field, and the image deflection angles of different batches differ by 180°. Specifically, b1, b2 and b3 are images of different batches, and each image records a batch number; the image deflection angles of different batches differ by 180°, while images of the same batch share the same deflection angle; the dashed boxes are images, and the arrows indicate the robot's direction of travel.
Still further, in step S2, the method further includes the following steps:
S21: calculating the deflection angle of each image from successive GPS coordinates, wherein the lateral ground resolution gsd_h of the image is fixed and known, as is the longitudinal ground resolution gsd_v; obtaining the GPS coordinates corresponding to the center point of each image and the GPS coordinates corresponding to its four corner points, and recording the GPS coordinate range spanned by the four corner points;
S22: calculating a homography transformation matrix from the GPS coordinates and pixel coordinates of the four corner points, and transforming the image to obtain a corrected image; arranging the m images and denoting the corresponding upper-left geographic coordinate as g_i(x_g, y_g) (i = 1, 2, ..., m). Preferably, in step S22, the longitudinal direction of the corrected image is due north.
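The homography correction in step S22 can be sketched in code. This is a minimal illustration under stated assumptions, not the patent's implementation: the 3×3 homography mapping the four pixel corners onto their GPS-plane coordinates is estimated with the direct linear transform (the patent does not name a solver), and the corner values used are hypothetical.

```python
import numpy as np

def homography_from_corners(pixel_corners, gps_corners):
    """Estimate the 3x3 homography H mapping the four pixel corners of an
    image onto their GPS-plane coordinates, via the direct linear transform:
    two linear constraints per correspondence, then the null-space vector of
    the resulting 8x9 system."""
    A = []
    for (x, y), (u, v) in zip(pixel_corners, gps_corners):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalise so H[2, 2] == 1

def warp_point(H, point):
    """Apply homography H to a single (x, y) point."""
    x, y = point
    w = H @ np.array([x, y, 1.0])
    return w[:2] / w[2]
```

In practice the estimated H would then be passed to a perspective-warping routine to produce the north-aligned corrected image.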
In this embodiment, in step S3, the images are sorted by batch number in ascending order, and the image feature points are extracted with the SIFT feature extraction algorithm. Specifically, the images are sorted by batch number in ascending order, and k images are randomly selected from each batch, mainly to reduce the amount of computation; the SIFT feature extraction algorithm is used to extract the image feature points; for each image, its neighboring images are queried according to its GPS coordinate range, feature matching is performed one by one, and the coordinates of successfully matched feature points are recorded. In this embodiment, k can be set according to the actual situation.
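The patent specifies SIFT for feature extraction (available in common vision libraries such as OpenCV). The sketch below illustrates only the matching stage that follows: a nearest-neighbour search over already-computed descriptors with Lowe's ratio test. The 0.75 threshold is a conventional default, not a value taken from the patent.

```python
import numpy as np

def ratio_test_match(desc1, desc2, ratio=0.75):
    """Match each descriptor in desc1 to its nearest neighbour in desc2,
    keeping a match only when the nearest distance is clearly smaller than
    the second-nearest (Lowe's ratio test), which suppresses ambiguous
    matches on repetitive road-surface texture."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)  # Euclidean distances
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```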
In this embodiment, in step S4, the following steps are further included:
S41: for a pair of successfully matched images img1 and img2, the corresponding feature points are p(x_1, y_1) and p'(x_2, y_2), the corresponding upper-left GPS coordinates are g_1(x_g, y_g) and g_2(x_g, y_g), and the corresponding deflection angles are γ_1 and γ_2; the formula for converting the pixel coordinates of p(x_1, y_1) into GPS coordinates Ĝ_1 is: [formula omitted in source]
S42: Ĝ_2, the GPS coordinates of p'(x_2, y_2), is calculated in the same way as in step S41.
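The pixel-to-GPS conversion formula of step S41 appears in the patent only as an image and is not reproduced in this text. Purely as an illustration, a plausible form for a north-aligned corrected image is sketched below: the pixel offset from the upper-left corner is scaled by the ground resolutions and added to the upper-left GPS coordinate. The axis conventions (image x increasing eastward, image y increasing southward) and the planar-coordinate treatment are assumptions.

```python
def pixel_to_gps(p, g_top_left, gsd_h, gsd_v):
    """Hypothetical conversion of a pixel coordinate p = (x, y) in a
    north-aligned corrected image to a planar GPS coordinate, given the
    image's upper-left GPS coordinate and the lateral/longitudinal ground
    resolutions (metres per pixel). Not the patent's exact formula."""
    x, y = p
    gx, gy = g_top_left
    # x pixels move east (+), y pixels move south (-) under the assumed axes
    return (gx + x * gsd_h, gy - y * gsd_v)
```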
In this embodiment, in step S5, the following steps are further included:
S51: after the feature points on image img1 are converted into GPS coordinates, the coordinates are corrected by the offset compensation parameters to obtain Ĝ'_1; the calculation formula is: [formula omitted in source]
S52: the corrected GPS coordinates Ĝ'_2 of the feature points on img2 are calculated in the same way as in step S51;
S53: establishing a cost function between the feature point pairs: [formula omitted in source]
where M_j is the j-th (j = 1, 2, ..., k) pair of matched feature points, k is the number of successfully matched feature point pairs, and e is the error sum. Specifically, because of errors in the camera offset parameters, Ĝ'_1 and Ĝ'_2 are unequal; therefore, a cost function in the camera offset compensation parameters is established, and the optimal camera offset compensation parameter O(x_o, y_o) is solved by minimizing the error. The initial value of the camera offset compensation parameter O(x_o, y_o) is 0.
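The minimisation in step S5 can be sketched as follows. The sketch assumes, beyond what the patent states, that the offset O(x_o, y_o) enters the two 180°-rotated batches with opposite signs; under that assumption the least-squares optimum has a closed form. In general, a numerical optimiser initialised at O = 0 (the initial value the patent uses) would minimise the same cost.

```python
import numpy as np

def cost(O, G1, G2):
    """Error sum e over matched feature-point pairs, assuming the corrected
    coordinates are G1_j + O and G2_j - O (opposite-sign convention for the
    two 180-degree-rotated batches; this convention is an assumption)."""
    r = (np.asarray(G1, float) - np.asarray(G2, float)) + 2.0 * np.asarray(O, float)
    return float((r ** 2).sum())

def solve_offset(G1, G2):
    """Closed-form least-squares offset minimising cost(): each residual is
    (G1_j - G2_j) + 2*O, so the optimum is O = -mean(G1 - G2) / 2."""
    G1 = np.asarray(G1, float)
    G2 = np.asarray(G2, float)
    return -0.5 * (G1 - G2).mean(axis=0)
```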
Although the present invention has been described with reference to the foregoing embodiments, it will be apparent to those skilled in the art that modifications may be made to the embodiments described, or equivalents may be substituted for elements thereof, and any modifications, equivalents, improvements and changes may be made without departing from the spirit and principles of the present invention.

Claims (7)

1. A feature-based detection robot camera offset error compensation method, characterized in that the method comprises the following steps:
S1: the detection robot collects n batches of images and GPS data in an experimental field;
S2: each image is positioned and orientation-corrected according to its GPS coordinates;
S3: the images are sorted by batch number, features are extracted and matched, and the coordinates of successfully matched feature points are recorded;
S4: the feature point coordinates are converted into GPS coordinates;
S5: a cost function is established and the optimal camera offset compensation parameters are solved.
2. The feature-based detection robot camera offset error compensation method of claim 1, wherein: in the step S1, the robot collects n batches back and forth in an experimental field, and the image deflection angles of different batches differ by 180°.
3. The feature-based detection robot camera offset error compensation method of claim 2, wherein: in the step S2, the method further includes the following steps:
S21: calculating the deflection angle of each image from successive GPS coordinates, wherein the lateral ground resolution gsd_h of the image is fixed and known, as is the longitudinal ground resolution gsd_v; obtaining the GPS coordinates corresponding to the center point of each image and the GPS coordinates corresponding to its four corner points, and recording the GPS coordinate range spanned by the four corner points;
S22: calculating a homography transformation matrix from the GPS coordinates and pixel coordinates of the four corner points, and transforming the image to obtain a corrected image; arranging the m images and denoting the corresponding upper-left geographic coordinate as g_i(x_g, y_g) (i = 1, 2, ..., m).
4. A feature-based detection robot camera offset error compensation method in accordance with claim 3, wherein: in the step S22, the longitudinal direction of the corrected image is due north.
5. The feature-based detection robot camera offset error compensation method of claim 4, wherein: in the step S3, the images are sorted by batch number in ascending order, and the image feature points are extracted with the SIFT feature extraction algorithm.
6. The method for compensating for camera offset errors of a feature-based inspection robot of claim 5, wherein: in the step S4, the method further includes the following steps:
S41: for a pair of successfully matched images img1 and img2, the corresponding feature points are p(x_1, y_1) and p'(x_2, y_2), the corresponding upper-left GPS coordinates are g_1(x_g, y_g) and g_2(x_g, y_g), and the corresponding deflection angles are γ_1 and γ_2; the formula for converting the pixel coordinates of p(x_1, y_1) into GPS coordinates Ĝ_1 is: [formula omitted in source]
S42: Ĝ_2, the GPS coordinates of p'(x_2, y_2), is calculated in the same way as in the step S41.
7. The method for compensating for camera offset errors of a feature-based inspection robot of claim 6, wherein: in the step S5, the method further includes the following steps:
S51: after the feature points on image img1 are converted into GPS coordinates, the coordinates are corrected by the offset compensation parameters to obtain Ĝ'_1; the calculation formula is: [formula omitted in source]
S52: the corrected GPS coordinates Ĝ'_2 of the feature points on img2 are calculated in the same way as in the step S51;
S53: establishing a cost function between the feature point pairs: [formula omitted in source]
where M_j is the j-th (j = 1, 2, ..., k) pair of matched feature points, k is the number of successfully matched feature point pairs, and e is the error sum.
CN202311143967.0A 2023-09-05 2023-09-05 Feature-based detection robot camera offset error compensation method Pending CN117333553A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311143967.0A CN117333553A (en) 2023-09-05 2023-09-05 Feature-based detection robot camera offset error compensation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311143967.0A CN117333553A (en) 2023-09-05 2023-09-05 Feature-based detection robot camera offset error compensation method

Publications (1)

Publication Number Publication Date
CN117333553A true CN117333553A (en) 2024-01-02

Family

ID=89281967

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311143967.0A Pending CN117333553A (en) 2023-09-05 2023-09-05 Feature-based detection robot camera offset error compensation method

Country Status (1)

Country Link
CN (1) CN117333553A (en)

Similar Documents

Publication Publication Date Title
CN106127697B (en) EO-1 hyperion geometric correction method is imaged in unmanned aerial vehicle onboard
EP1378790B1 (en) Method and device for correcting lens aberrations in a stereo camera system with zoom
CN100583151C (en) Double-camera calibrating method in three-dimensional scanning system
CN109685858B (en) Monocular camera online calibration method
CN110595476B (en) Unmanned aerial vehicle landing navigation method and device based on GPS and image visual fusion
CN112270320B (en) Power transmission line tower coordinate calibration method based on satellite image correction
CN113538595B (en) Method for improving geometric precision of remote sensing stereo image by using laser height measurement data in auxiliary manner
CN103822615A (en) Unmanned aerial vehicle ground target real-time positioning method with automatic extraction and gathering of multiple control points
CN113610060B (en) Structure crack sub-pixel detection method
CN113313769B (en) Seamless geometric calibration method between optical satellite multi-area array sensor chips
CN107330927A (en) Airborne visible images localization method
CN105205806A (en) Machine vision based precision compensation method
CN117391936A (en) Line-scan airport map splicing method based on line alignment
CN114494039A (en) A method for geometric correction of underwater hyperspectral push-broom images
CN103776426A (en) Geometric correction method for rotary platform farmland image
CN113255740B (en) Multi-source remote sensing image adjustment positioning accuracy analysis method
CN117333553A (en) Feature-based detection robot camera offset error compensation method
CN114092534A (en) Hyperspectral image and lidar data registration method and registration system
CN117391941A (en) Multi-camera line scanning image splicing method based on tunnel
CN115546266B (en) Multi-strip airborne laser point cloud registration method based on local normal correlation
CN112258585A (en) Calibration field design and image processing method for image distortion partition solution
CN110490830A (en) A kind of agricultural remote sensing method for correcting image and system
CN114581346B (en) A multispectral image fusion method for urban low-altitude remote sensing monitoring targets
EP4205536B1 (en) Method for measuring dimensions of a plant
CN116152325A (en) Road traffic high slope stability monitoring method based on monocular video

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination