CN115760984A - Non-cooperative target pose measurement method based on monocular vision for a CubeSat
- Publication number: CN115760984A
- Application: CN202211470026.3A (filed 2022-11-23)
- Authority: CN (China)
- Prior art keywords: image, target, target star, detected, template
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The invention discloses a non-cooperative target pose measurement method for a CubeSat based on monocular vision, and relates to the technical field of satellites. The pose measurement method comprises: establishing a three-dimensional model of the target star and a feature point template library; acquiring a real-time image of the target star and matching it against the template images in the template library to obtain the template image corresponding to the image to be measured; rotating the contour image of the image to be measured while matching it against the contour image of the template image to determine the rotation angle of the image to be measured relative to the template image; performing gray-scale processing and threshold processing on the image to be measured to extract the edge contour of the target star, and obtaining the target star feature points from that edge contour; and screening, according to the rotation angle, the correspondence with the feature point sequence in the template library, then combining the three-dimensional coordinates of the feature points with the image feature points to obtain the pose information of the target star relative to the camera through a pose solving algorithm.
Description
Technical Field
The invention relates to the technical field of satellites, and in particular to a method for measuring the pose of a non-cooperative target by a CubeSat based on monocular vision.
Background
In recent years, owing to the advantages of the CubeSat, such as its short development cycle and low manufacturing and development costs, more and more research institutions and commercial companies have shifted their attention to it. Besides scientific research, teaching, and the verification of electronic products, CubeSats are also applied to a series of on-orbit services, such as CubeSat formation flying, maintenance and refueling of space vehicles, and removal of space debris. All of these on-orbit services depend on CubeSat vision-based navigation technology, so a monocular-vision pose measurement method for a non-cooperative target with a known model is provided.
Existing satellite vision navigation systems are classified into monocular, binocular, and multi-camera vision systems. In the demanding aerospace field, monocular vision measurement suits the CubeSat platform because it is non-contact, low-cost, fast, compact, and flexible to use. In the prior art, binocular cameras are mostly chosen to measure the target pose, but a binocular system is difficult to realize on a CubeSat platform. A survey of the available literature shows that, at present, there is no domestic patent on measuring the pose of a model-known non-cooperative target for CubeSat monocular visual navigation.
Disclosure of Invention
The invention aims to provide a non-cooperative target pose measurement method for a CubeSat based on monocular vision, which solves the problem of stably and efficiently acquiring the relative pose information of a target spacecraft during an on-orbit CubeSat task.
The technical solution for realizing the invention is as follows: a method for measuring the pose of a non-cooperative target by a CubeSat based on monocular vision comprises the following steps:
Step 1, collect target star images from different angles; with the three-dimensional model of the target star known, input the three-dimensional coordinate values of the target star feature points in the world coordinate system, establish a feature point sequence, and build a template library in which the target star images and the feature points are in one-to-one correspondence. Proceed to step 2.
Step 2, use a single camera to acquire a real-time image of the target star as the image to be measured, match the image to be measured against the target star images in the template library, calculate the image similarity, take the target star image with the highest similarity to the image to be measured as the template image, and proceed to step 3.
Step 3, perform edge detection on the image to be measured and on the template image to obtain their respective contour images, rotate the contour image of the image to be measured while matching it against the contour image of the template image, calculate the rotation similarity, determine from it the rotation angle of the image to be measured relative to the template image, and proceed to step 4.
Step 4, perform gray-scale processing, a closing operation, and threshold processing on the image to be measured to obtain an appearance image of the target star separated from the background, extract the complete edge contour from that appearance image, obtain the target star feature points from the edge contour, determine the correspondence between the target star feature points and the feature point sequence in the template library according to the rotation angle of the image to be measured relative to the template image, and proceed to step 5.
Step 5, using the correspondence between the target star feature points and the feature point sequence in the template library, the three-dimensional coordinates of the target star feature points, and the target star feature points themselves, obtain the pose information of the target star relative to the camera through an optimized EPnP pose solving algorithm, and optimize and correct the pose information.
Compared with the prior art, the invention has the remarkable advantages that:
(1) The pose solving method based on feature points has a small computational load and high computational efficiency, and can obtain real-time pose information.
(2) The invention is based on monocular vision, occupies little space on the satellite, has a low power consumption requirement, and is better suited to a CubeSat platform.
(3) The feature point extraction is based on the whole contour of the target star rather than on a particular feature component, so it is less affected by illumination and more robust.
Drawings
FIG. 1 is a flow chart of the monocular-vision-based method for measuring the pose of a non-cooperative target by a CubeSat.
FIG. 2 is a front view of the target star.
Detailed Description
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is to be understood that the drawings in the following description are merely exemplary of the disclosure, and that other drawings may be derived from those drawings by one of ordinary skill in the art without the exercise of inventive faculty.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
With reference to FIG. 1, a method for measuring the pose of a non-cooperative target by a CubeSat based on monocular vision comprises the following steps:
Step 1, collect target star images from different angles; with the three-dimensional model of the target star known, input the three-dimensional coordinate values of the target star feature points in the world coordinate system, establish a feature point sequence, and build a template library in which the target star images and the feature points correspond one to one. Specifically:
the method comprises the steps of collecting three-dimensional coordinates of feature points of a target star, marking serial numbers, collecting images of the target star from angles at the same distance, processing the images of the target star, extracting the feature points of the images, and marking the serial numbers of the three-dimensional feature points corresponding to the upper, lower, left and right image feature points in a clockwise sequence.
Specifically, the three-dimensional feature points of the target appearance are extracted and marked in sequence according to the three-dimensional model of the target star; a coordinate system is established with the front-face image of the target star (after the solar panel is deployed) as the xy plane (as shown in FIG. 2) and the center of the front face of the target star image as the origin, giving the three-dimensional coordinate value of each feature point; and the serial number of each feature point is put in one-to-one correspondence with its three-dimensional coordinate value to obtain the template library.
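By way of illustration, a minimal Python sketch of such a template library follows. The file names, the 3D coordinate values, and the use of OpenCV/NumPy are assumptions made for the sketch; the patent does not prescribe a storage format.

```python
import cv2
import numpy as np

# 3D coordinates (meters) of the five feature points -- the four corners of the
# deployed solar array plus the radome feature pole -- numbered clockwise.
# The values are illustrative, not taken from the patent.
FEATURE_POINTS_3D = np.array([
    [-0.15,  0.05, 0.00],   # 1: upper-left array corner
    [ 0.15,  0.05, 0.00],   # 2: upper-right array corner
    [ 0.15, -0.05, 0.00],   # 3: lower-right array corner
    [-0.15, -0.05, 0.00],   # 4: lower-left array corner
    [ 0.00, -0.10, 0.02],   # 5: satellite radome feature pole
])

# One template entry per viewing angle: the image plus the numbered 3D points.
template_library = []
for angle in range(0, 360, 10):
    img = cv2.imread(f"target_{angle:03d}.png", cv2.IMREAD_GRAYSCALE)
    template_library.append({"angle": angle,
                             "image": img,
                             "points_3d": FEATURE_POINTS_3D})
```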
Step 2, a single camera is used to acquire a real-time image of the target star as the image to be measured; the image to be measured is matched against the target star images in the template library; the image similarity is calculated; and the target star image with the highest similarity to the image to be measured is taken as the template image.
Specifically, the camera and the target star are fixed on a six-degree-of-freedom experimental platform; their positions are adjusted so that the principal point of the camera and the center of the front-face image of the target star lie on the same horizontal line while the imaging plane of the camera is parallel to the front face of the target star; the target star is rotated and offset about three axes; and a real-time image is collected as the image to be measured.
Feature point matching is carried out between the image to be measured and the target star images in the template library one by one, and the image similarity k is calculated:

k = P_m / P_c,

where P_m is the number of feature points in the image to be measured that match the target star image, and P_c is the total number of feature points detected in the image to be measured. The target star image with the largest similarity k to the image to be measured is the template image corresponding to the image to be measured.
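A sketch of how the similarity k = P_m / P_c could be computed, assuming ORB features with brute-force Hamming matching (the patent does not name a specific detector or matcher):

```python
import cv2

orb = cv2.ORB_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def similarity(query_img, template_img):
    """Return k = P_m / P_c for one template image."""
    kp_q, des_q = orb.detectAndCompute(query_img, None)
    kp_t, des_t = orb.detectAndCompute(template_img, None)
    if des_q is None or des_t is None:
        return 0.0
    p_m = len(matcher.match(des_q, des_t))  # feature points matched to the template
    p_c = len(kp_q)                         # all feature points in the image to be measured
    return p_m / p_c if p_c else 0.0

# The template image is then the library entry with the largest k:
# best = max(template_library, key=lambda t: similarity(query, t["image"]))
```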
Step 3, edge detection is performed on the image to be measured and on the template image respectively, giving the contour image of the image to be measured and the contour image of the template image; the contour image of the image to be measured is rotated while being matched against the contour image of the template image; the rotation similarity is calculated; and the rotation angle of the image to be measured relative to the template image is determined from the rotation similarity.
Specifically, edge detection is performed on the image to be measured and on the template image with the Canny operator, giving the contour image of the image to be measured and the contour image of the template image; both contour images are then segmented by a pyramid segmentation method to obtain simplified contour images of the image to be measured and of the template image.
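A possible OpenCV sketch of this step is given below; pyramid mean-shift filtering is used here as a stand-in for the pyramid segmentation the patent names, and the Canny thresholds are assumed values:

```python
import cv2

def simplified_contour(img_bgr):
    # Flatten fine texture first (a stand-in for pyramid segmentation).
    smoothed = cv2.pyrMeanShiftFiltering(img_bgr, sp=21, sr=51)
    gray = cv2.cvtColor(smoothed, cv2.COLOR_BGR2GRAY)
    # Canny edge detection yields the binary contour image.
    return cv2.Canny(gray, 50, 150)
```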
The simplified contour image of the template image is cropped to a circle centered on the target star. Taking the current position of this circular image as the 0° starting point, it is rotated in 10° increments from 0° until it has rotated through 360°; after each rotation, the rotated contour image of the template image is matched against the contour image of the image to be measured by normalized squared difference, which determines the approximate range of the optimal rotation angle. The contour image of the template image is then rotated within a ±5° region around that optimal angle at 1° accuracy, the image similarities are again compared by normalized squared difference, and the rotation angle whose rotated template contour image gives the lowest squared difference value is taken as the rotation angle.
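The coarse-to-fine angle search might be sketched as follows, assuming TM_SQDIFF_NORMED as the normalized-squared-difference measure and contour images of compatible sizes:

```python
import cv2

def rotate(img, angle_deg):
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    return cv2.warpAffine(img, M, (w, h))

def score(template_contour, query_contour, angle):
    # TM_SQDIFF_NORMED: 0 is a perfect match, so lower is better.
    res = cv2.matchTemplate(query_contour, rotate(template_contour, angle),
                            cv2.TM_SQDIFF_NORMED)
    return float(res.min())

def best_rotation(template_contour, query_contour):
    # Coarse sweep: 10-degree steps over the full circle.
    coarse = min(range(0, 360, 10),
                 key=lambda a: score(template_contour, query_contour, a))
    # Fine sweep: 1-degree steps within +/-5 degrees of the coarse optimum.
    return min(range(coarse - 5, coarse + 6),
               key=lambda a: score(template_contour, query_contour, a))
```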
Step 4, gray-scale processing, a closing operation, and threshold processing are performed on the image to be measured to obtain an appearance image of the target star separated from the background; the complete edge contour is extracted from the appearance image of the target star; the target star feature points are obtained from the edge contour; and the correspondence between the target star feature points and the feature point sequence in the template library is determined according to the rotation angle of the image to be measured relative to the template image.
Specifically, gray-scale processing and user-defined threshold processing are applied to the image to be measured; since the space illumination environment is simulated with light-absorbing black cloth, this yields a binary image with a black background and a white target star. All contours of the white target star in the binary image are extracted, and the most complete envelope contour of the target star is obtained by area screening. When the rotation angle is close to 0°, 90°, 180°, or 270°, the pixel coordinate values (x, y) on the contour are traversed and the maximum and minimum of x + y and of x − y over the pixels are computed; these are the four feature extreme points around the periphery of the deployed solar array, and they are read in clockwise order. At the same time, the way the fifth feature point, the feature pole of the satellite radome, is obtained is determined by the rotation angle (when the rotation angle is 0°, the feature point with the minimum ordinate y gives the coordinates of the fifth feature point). When the target star is at other rotation angles, the pixel coordinate values (x, y) on the contour are traversed and the maximum and minimum of the abscissa x and of the ordinate y are computed, giving four feature points that are read in clockwise order; the way the fifth feature point, the feature pole of the satellite radome, is solved is likewise determined by the rotation angle (when the rotation angle is in (0°, 90°), the feature point with the maximum x − y gives the coordinates of the fifth feature point). The serial numbers corresponding to the five feature points are marked in order, as sketched below.
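A minimal sketch of the extreme-point extraction for the near-axis-aligned case, assuming the envelope contour is an (N, 2) NumPy array of (x, y) pixel coordinates:

```python
import numpy as np

def array_corner_points(contour):
    """contour: (N, 2) array of (x, y) pixels on the envelope contour."""
    s = contour[:, 0] + contour[:, 1]       # x + y
    d = contour[:, 0] - contour[:, 1]       # x - y
    top_left     = contour[np.argmin(s)]    # min(x + y)  (image y grows downward)
    bottom_right = contour[np.argmax(s)]    # max(x + y)
    top_right    = contour[np.argmax(d)]    # max(x - y)
    bottom_left  = contour[np.argmin(d)]    # min(x - y)
    return [top_left, top_right, bottom_right, bottom_left]   # clockwise order
```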
The specific flow of threshold processing is as follows:
Firstly, the gray-level histogram of the image to be measured is generated, and the number of pixels N_I at each gray value I (I = 0, 1, 2, …, 255) is obtained.

Then the average gray value I_a of the image to be measured is calculated:

I_a = (Σ_{I=0}^{255} I · N_I) / (Σ_{I=0}^{255} N_I).

The difference I_e = I_a − T_e between the average gray value I_a of the image to be measured and the ideal gray value T_e of the image to be measured is calculated, and I_e is subtracted from the gray value of every pixel of the image to be measured, giving the gray values of all pixels of the processed image to be measured.

The gray values of all pixels of the processed image to be measured are compared with the threshold interval [Thresh_min, Thresh_max] of the ideal image gray value: if the gray value I_j of the j-th pixel is greater than the threshold maximum Thresh_max of the ideal image gray value, then I_j = Thresh_max; similarly, if the gray value I_j of the j-th pixel is less than the threshold minimum Thresh_min of the ideal image gray value, then I_j = Thresh_min.
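As a sketch, assuming that clamping to the threshold interval is the intended assignment (the original formula images are not reproduced in the text), the flow could be written as:

```python
import numpy as np

def normalize_and_clip(gray, t_e=40, thresh_min=0, thresh_max=60):
    """gray: uint8 image. t_e and the thresholds are assumed example values."""
    hist = np.bincount(gray.ravel(), minlength=256)        # N_I for each gray level I
    i_a = (np.arange(256) * hist).sum() / hist.sum()       # average gray value I_a
    i_e = i_a - t_e                                        # I_e = I_a - T_e
    shifted = gray.astype(np.int32) - int(round(i_e))      # subtract I_e everywhere
    # Clamp to [Thresh_min, Thresh_max] -- the clamping itself is an assumption.
    return np.clip(shifted, thresh_min, thresh_max).astype(np.uint8)
```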
Step 5, using the correspondence between the target star feature points and the feature point sequence in the template library, the three-dimensional coordinates of the target star feature points, and the target star feature points themselves, the pose information of the target star relative to the camera is obtained through an optimized EPnP pose solving algorithm, and the pose information is optimized and corrected.
Specifically, the feature point sequence in the template library and its three-dimensional coordinate values are determined according to the rotation angle of the image to be measured relative to the template image; the pixel coordinate values of the feature points in the image to be measured are put in one-to-one correspondence with the three-dimensional coordinate values of the feature points in the template library; and the relative rotation matrix R and the relative translation vector t are determined through the optimized EPnP pose solving algorithm.
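The initial solution can be sketched with OpenCV's EPnP solver; the intrinsic matrix K, the distortion vector, and the point values below are assumed example data, not values from the patent:

```python
import cv2
import numpy as np

K = np.array([[800.0,   0.0, 320.0],     # assumed camera intrinsic matrix
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)                       # assumed zero lens distortion

# Five template 3D points (world frame) and their matched pixel coordinates,
# ordered by the serial numbers from step 1 -- example values only.
points_3d = np.array([[-0.15, 0.05, 0.0], [0.15, 0.05, 0.0], [0.15, -0.05, 0.0],
                      [-0.15, -0.05, 0.0], [0.0, -0.10, 0.02]])
points_2d = np.array([[210.0, 180.0], [430.0, 180.0], [430.0, 300.0],
                      [210.0, 300.0], [320.0, 360.0]])

ok, rvec, tvec = cv2.solvePnP(points_3d, points_2d, K, dist,
                              flags=cv2.SOLVEPNP_EPNP)
R0, _ = cv2.Rodrigues(rvec)              # initial relative rotation matrix
```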
The traditional EPnP algorithm is based on four feature points, which yield a unique relative pose solution. However, the feature point coordinates obtained from an image may contain errors, so this unique solution also contains errors, and in practice the accuracy of the EPnP algorithm is limited. To obtain a more accurate result, i.e. a more accurate relative rotation matrix R and relative translation matrix t, the computed relative rotation matrix R and relative translation matrix t are optimized on the basis of the initial relative rotation and translation matrices.

S5.1, define the state mean μ_A of the 5 feature points:

μ_A = (1/5) Σ_{i=1}^{5} A_i,

where A_i is the three-dimensional position, in the world coordinate system, of the i-th feature point in the template library.

Calculate the mean three-dimensional position μ_B of all the feature points of the image to be measured:

μ_B = (1/5) Σ_{i=1}^{5} B_i,

where B_i is the three-dimensional position of the i-th feature point in the image to be measured.

S5.2, define the covariance matrix H:

H = Σ_{i=1}^{5} (A_i − μ_A)(B_i − μ_B)^T.

S5.3, perform singular value decomposition on H:

H = UΣV^T,

where U and V are unitary matrices and Σ is a diagonal matrix.

S5.4, calculate the relative rotation matrix R and the relative translation matrix t:

R = VU^T,

t = −Rμ_A + μ_B.
The above optimization method is effective not only for the case of 5 feature points but also for more than 5 feature points.

S5.5, if the determinant of R satisfies det(R) = 1, then R is the relative rotation matrix;

if det(R) = −1, then R is a reflection matrix, and the reflection matrix is corrected:

R = V · diag(1, 1, −1) · U^T.

The relative translation matrix is then solved using the corrected rotation matrix:

t = μ_B − R · μ_A.
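A compact NumPy sketch of S5.1 through S5.5, written as a Kabsch-style SVD alignment under the assumption that A and B hold the corresponding 3D points as (n, 3) arrays:

```python
import numpy as np

def refine_pose(A, B):
    """A: (n, 3) template points; B: (n, 3) measured points, n >= 5."""
    mu_a = A.mean(axis=0)                  # state mean of the template points
    mu_b = B.mean(axis=0)                  # mean position of the measured points
    H = (A - mu_a).T @ (B - mu_b)          # covariance matrix H
    U, _, Vt = np.linalg.svd(H)            # H = U Sigma V^T
    R = Vt.T @ U.T                         # R = V U^T
    if np.linalg.det(R) < 0:               # det(R) = -1: reflection, not rotation
        R = Vt.T @ np.diag([1.0, 1.0, -1.0]) @ U.T
    t = mu_b - R @ mu_a                    # t = mu_B - R mu_A
    return R, t
```

Unitary U and V from the SVD make R orthonormal by construction; the diag(1, 1, −1) correction handles the det(R) = −1 reflection case named in S5.5.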
In summary, the invention provides a monocular-vision-based non-cooperative target pose measurement method for a CubeSat. A template library is built for the target star to be measured; an image of the target star is acquired in real time by a camera as the image to be measured; the template image most similar to the image to be measured is selected from the template library; the contour image of the image to be measured is rotated and matched against the contour image of the template image to determine the rotation angle of the former relative to the latter; the edge contour of the target star is extracted through gray-scale and threshold processing and the feature points are extracted from it; a translation vector and a rotation matrix are obtained through a pose solving algorithm combined with the three-dimensional coordinate points in the template library; and the translation vector and rotation matrix are optimized and corrected to obtain the required pose information of the target star relative to the camera.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (6)
1. A method for measuring the pose of a non-cooperative target by a CubeSat based on monocular vision, characterized by comprising the following steps:

step 1, acquiring target star images from different angles, the three-dimensional model of the target star being known, inputting three-dimensional coordinate values of the target star feature points in a world coordinate system, establishing a feature point sequence, and establishing a template library in which the target star images and the feature points correspond to each other one by one; turning to step 2;

step 2, acquiring a real-time image of the target star as an image to be measured by using one camera, matching the image to be measured with the target star images in the template library, calculating the image similarity, taking the target star image with the highest similarity to the image to be measured as the template image, and turning to step 3;

step 3, respectively carrying out edge detection on the image to be measured and the template image, correspondingly obtaining a contour image of the image to be measured and a contour image of the template image, rotating the contour image of the image to be measured while matching it with the contour image of the template image, calculating the rotation similarity, determining the rotation angle of the image to be measured relative to the template image according to the rotation similarity, and turning to step 4;

step 4, carrying out gray-scale processing, a closing operation and threshold processing on the image to be measured to obtain an appearance image of the target star separated from the background, extracting the complete edge contour in the appearance image of the target star, obtaining the target star feature points according to the edge contour, determining the correspondence between the target star feature points and the feature point sequence in the template library according to the rotation angle of the image to be measured relative to the template image, and turning to step 5;

step 5, obtaining the pose information of the target star relative to the camera through an optimized EPnP pose solving algorithm by using the correspondence between the target star feature points and the feature point sequence in the template library, the three-dimensional coordinates of the target star feature points and the target star feature points, and optimizing and correcting the pose information.
2. The monocular-vision-based method for measuring the pose of a non-cooperative target by a CubeSat according to claim 1, wherein in step 1, target star images are collected from different angles, the three-dimensional model of the target star is known, three-dimensional coordinate values of the target star feature points in a world coordinate system are input, a feature point sequence is established, and a template library of one-to-one correspondence between the target star images and the feature points is established, specifically as follows:

acquiring the three-dimensional coordinates of the target star feature points and marking their serial numbers, acquiring target star images from all angles at the same distance, processing the target star images and extracting the image feature points, and marking, in clockwise order, the serial numbers of the three-dimensional feature points corresponding to the upper, lower, left and right image feature points;

extracting and sequentially marking the three-dimensional feature points of the target appearance according to the three-dimensional model of the target star; establishing a coordinate system with the deployed solar-panel face of the target star as the front and the center of the front face of the target star image as the origin to obtain the three-dimensional coordinate value of each feature point; and putting the serial number of each feature point in one-to-one correspondence with its three-dimensional coordinate value to obtain the template library.
3. The monocular-vision-based method for measuring the pose of a non-cooperative target by a CubeSat according to claim 2, wherein in step 2, one camera is used to acquire a real-time image of the target star as the image to be measured, the image to be measured is matched with the target star images in the template library, the image similarity is calculated, and the target star image with the highest similarity to the image to be measured is taken as the template image, specifically as follows:

fixing the camera and the target star on a six-degree-of-freedom experimental platform, adjusting the positions of the camera and the target star so that the principal point of the camera and the center of the front-face image of the target star lie on the same horizontal line while the imaging plane of the camera is parallel to the front face of the target star, performing three-axis rotation and offset on the target star, and acquiring a real-time image as the image to be measured;

carrying out feature point matching between the image to be measured and the target star images in the template library one by one, and calculating the image similarity k:

k = P_m / P_c,

wherein P_m is the number of feature points in the image to be measured that match the target star image, and P_c is the total number of feature points detected in the image to be measured; the target star image with the largest similarity k to the image to be measured is the template image corresponding to the image to be measured.
4. The monocular-vision-based method for measuring the pose of a non-cooperative target by a CubeSat according to claim 3, wherein in step 3, edge detection is performed on the image to be measured and on the template image respectively, the contour image of the image to be measured and the contour image of the template image are correspondingly obtained, the contour image of the image to be measured is rotated and simultaneously matched with the contour image of the template image, the rotation similarity is calculated, and the rotation angle of the image to be measured relative to the template image is determined according to the rotation similarity, specifically as follows:

carrying out edge detection on the image to be measured and on the template image respectively with the Canny operator, correspondingly obtaining the contour image of the image to be measured and the contour image of the template image, and segmenting both contour images by a pyramid segmentation method to obtain simplified contour images of the image to be measured and of the template image;

cropping the simplified contour image of the template image to a circle centered on the target star; taking the current position of this circular image as the 0° starting point, rotating it in 10° increments from 0° until it has rotated through 360°, and matching the rotated contour image of the template image with the contour image of the image to be measured by normalized squared difference to determine the approximate range of the optimal rotation angle; then rotating the contour image of the template image within a ±5° region of the optimal rotation angle at 1° accuracy, again comparing the image similarities by normalized squared difference, and taking the rotation angle corresponding to the rotated template contour image with the lowest squared difference value as the rotation angle.
5. The monocular-vision-based method for measuring the pose of a non-cooperative target by a CubeSat according to claim 4, wherein in step 4, gray-scale processing, a closing operation and threshold processing are performed on the image to be measured to obtain an appearance image of the target star separated from the background, the complete edge contour in the appearance image of the target star is extracted, the target star feature points are obtained according to the edge contour, and the correspondence between the target star feature points and the feature point sequence in the template library is determined according to the rotation angle of the image to be measured relative to the template image, specifically as follows:

firstly, generating the gray-level histogram of the image to be measured, and obtaining the number of pixels N_I at each gray level I, I = 0, 1, 2, …, 255;

then calculating the average gray value I_a of the image to be measured:

I_a = (Σ_{I=0}^{255} I · N_I) / (Σ_{I=0}^{255} N_I);

calculating the difference I_e = I_a − T_e between the average gray value I_a of the image to be measured and the ideal gray value T_e of the image to be measured, and subtracting I_e from the gray value of every pixel of the image to be measured to obtain the gray values of all pixels of the processed image to be measured; and comparing the gray values of all pixels of the processed image to be measured with the threshold interval [Thresh_min, Thresh_max] of the ideal image gray value: if the gray value I_j of the j-th pixel is greater than the threshold maximum Thresh_max of the ideal image gray value, then I_j = Thresh_max; similarly, if the gray value I_j of the j-th pixel is less than the threshold minimum Thresh_min of the ideal image gray value, then I_j = Thresh_min.
6. The monocular-vision-based method for measuring the pose of a non-cooperative target by a CubeSat according to claim 5, wherein in step 5, the pose information of the target star relative to the camera is obtained through the optimized EPnP pose solving algorithm by using the correspondence between the target star feature points and the feature point sequence in the template library, the three-dimensional coordinates of the target star feature points and the target star feature points, and is optimized and corrected, specifically as follows:

S5.1, defining the state mean μ_A of the 5 feature points:

μ_A = (1/5) Σ_{i=1}^{5} A_i,

wherein A_i is the three-dimensional position, in the world coordinate system, of the i-th feature point in the template library;

calculating the mean three-dimensional position μ_B of all the feature points of the image to be measured:

μ_B = (1/5) Σ_{i=1}^{5} B_i,

wherein B_i is the three-dimensional position of the i-th feature point in the image to be measured;

S5.2, defining the covariance matrix H:

H = Σ_{i=1}^{5} (A_i − μ_A)(B_i − μ_B)^T;

S5.3, performing singular value decomposition on H:

H = UΣV^T,

wherein U and V are unitary matrices and Σ is a diagonal matrix;

S5.4, calculating the relative rotation matrix R and the relative translation matrix t:

R = VU^T,

t = −Rμ_A + μ_B;

S5.5, if the determinant of R satisfies det(R) = 1, then R is the relative rotation matrix;

if det(R) = −1, then R is a reflection matrix, and the reflection matrix is corrected:

R = V · diag(1, 1, −1) · U^T;

the relative translation matrix is then solved using the corrected rotation matrix:

t = μ_B − R · μ_A.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202211470026.3A | 2022-11-23 | 2022-11-23 | Non-cooperative target pose measurement method based on monocular vision for a CubeSat |

Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN115760984A | 2023-03-07 |
| CN115760984B | 2024-09-10 |

Family ID: 85335711
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005173128A (en) * | 2003-12-10 | 2005-06-30 | Hitachi Ltd | Contour shape extractor |
US20150235380A1 (en) * | 2012-11-19 | 2015-08-20 | Ihi Corporation | Three-dimensional object recognition device and three-dimensional object recognition method |
CN103644918A (en) * | 2013-12-02 | 2014-03-19 | 中国科学院空间科学与应用研究中心 | Method for performing positioning processing on lunar exploration data by satellite |
CN103745458A (en) * | 2013-12-26 | 2014-04-23 | 华中科技大学 | A robust method for estimating the rotation axis and mass center of a spatial target based on a binocular optical flow |
US20160189381A1 (en) * | 2014-10-27 | 2016-06-30 | Digimarc Corporation | Signal detection, recognition and tracking with feature vector transforms |
CN106558074A (en) * | 2015-09-18 | 2017-04-05 | 河北工业大学 | Coarse-fine combination matching algorithm in assemble of the satellite based on rotational transformation matrix |
CN108562274A (en) * | 2018-04-20 | 2018-09-21 | 南京邮电大学 | A kind of noncooperative target pose measuring method based on marker |
CN109708649A (en) * | 2018-12-07 | 2019-05-03 | 中国空间技术研究院 | A kind of attitude determination method and system of remote sensing satellite |
US20200302247A1 (en) * | 2019-03-19 | 2020-09-24 | Ursa Space Systems Inc. | Systems and methods for angular feature extraction from satellite imagery |
US20220134639A1 (en) * | 2019-06-12 | 2022-05-05 | Vadient Optics, Llc | Additive manufacture using composite material arranged within a mechanically robust matrix |
CN111063021A (en) * | 2019-11-21 | 2020-04-24 | 西北工业大学 | Method and device for establishing three-dimensional reconstruction model of space moving target |
CN111768447A (en) * | 2020-07-01 | 2020-10-13 | 哈工大机器人(合肥)国际创新研究院 | Monocular camera object pose estimation method and system based on template matching |
CN112066879A (en) * | 2020-09-11 | 2020-12-11 | 哈尔滨工业大学 | Air floatation motion simulator pose measuring device and method based on computer vision |
CN114295092A (en) * | 2021-12-29 | 2022-04-08 | 航天科工智能运筹与信息安全研究院(武汉)有限公司 | Satellite radiometer thermal deformation error compensation method based on quaternion scanning imaging model |
Non-Patent Citations (3)

| Title |
|---|
| Ronghua Du et al., "A vision-based relative navigation sensor for on-orbit servicing of CubeSats", 2021 7th International Conference on Mechanical Engineering and Automation Science (ICMEAS), 20 December 2021 |
| Zhang Xiaojun; Zhang Minglu; Bai Feng; Sun Lingyu, "Research on target feature point matching for satellite robots", Computer Simulation, no. 05, 15 May 2016 |
| Xiao Peng; Zhou Zhifeng, "Research on non-contact measurement of relative target attitude", Computer Measurement & Control, no. 04, 25 April 2019 |
Cited By (2)

| Publication number | Priority date | Publication date | Title |
|---|---|---|---|
| CN116681733A | 2023-08-03 | 2023-09-01 | Near-distance real-time pose tracking method for space non-cooperative target |
| CN116681733B | 2023-08-03 | 2023-11-07 | Near-distance real-time pose tracking method for space non-cooperative target |
Also Published As
Publication number | Publication date |
---|---|
CN115760984B (en) | 2024-09-10 |
Legal Events

| Code | Title |
|---|---|
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |