
CN115760984A - Non-cooperative target pose measurement method based on monocular vision by cubic star - Google Patents


Info

Publication number
CN115760984A
CN115760984A (application CN202211470026.3A; granted publication CN115760984B)
Authority
CN
China
Prior art keywords
image
target star
target
template
star
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211470026.3A
Other languages
Chinese (zh)
Other versions
CN115760984B (en)
Inventor
廖文和
朱奕潼
张翔
杜荣华
范书珲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN202211470026.3A priority Critical patent/CN115760984B/en
Publication of CN115760984A publication Critical patent/CN115760984A/en
Application granted granted Critical
Publication of CN115760984B publication Critical patent/CN115760984B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Image Analysis (AREA)

Abstract



The invention discloses a monocular-vision-based non-cooperative target pose measurement method for a CubeSat, and relates to the technical field of satellites. The method comprises: establishing a three-dimensional image and feature-point template library of the target satellite; acquiring a real-time image of the target satellite and matching it against the template images in the library to find the template image corresponding to the image to be measured; rotating the contour image of the image to be measured and matching it against the contour image of the template image to determine the rotation angle of the image to be measured relative to the template image; applying grayscale and threshold processing to the image to be measured to extract the edge contour of the target satellite and obtaining its feature points from that contour; and using the rotation angle to select the corresponding feature-point sequence in the template library, then combining the three-dimensional coordinates of the feature points with the image feature points in a pose-solving algorithm to obtain the pose of the target satellite relative to the camera.


Description

A Monocular-Vision-Based Non-Cooperative Target Pose Measurement Method for a CubeSat

Technical Field

The invention relates to the technical field of satellites, and in particular to a monocular-vision-based non-cooperative target pose measurement method for a CubeSat.

Background Art

In recent years, thanks to the short development cycle, low manufacturing cost, and low R&D expenditure of CubeSats, more and more research institutes and commercial companies have turned their attention to them. Beyond scientific research, teaching, and the verification of electronic components, CubeSats are also used in a range of on-orbit services, such as CubeSat formation flying, the maintenance and refueling of spacecraft, and the removal of space debris. All of these on-orbit services depend on vision-based navigation for the CubeSat, which motivates the model-known non-cooperative target pose measurement method for monocular vision proposed here.

Existing satellite visual navigation systems are divided into monocular, binocular, and multi-camera vision systems. In the demanding aerospace field, monocular vision measurement is well suited to CubeSat platforms owing to its non-contact operation, low cost, high speed, small volume, and flexibility. The prior art mostly uses binocular cameras to measure the pose of a target, but a binocular system is difficult to accommodate on a CubeSat platform. A survey of the literature shows that, at present, there is no domestic patent on model-known non-cooperative target pose measurement for CubeSat monocular visual navigation.

Summary of the Invention

The object of the present invention is to provide a monocular-vision-based non-cooperative target pose measurement method for a CubeSat, which solves the problem of stably and efficiently obtaining the relative pose of a target spacecraft during a CubeSat's on-orbit mission.

The technical solution of the present invention is a monocular-vision-based non-cooperative target pose measurement method for a CubeSat, with the following steps:

Step 1. Acquire images of the target satellite from different angles; the three-dimensional model of the target satellite is known. Input the three-dimensional coordinates of the target satellite's feature points in the world coordinate system, establish a feature-point sequence, and build a template library in which target-satellite images and feature points correspond one to one. Go to step 2.

Step 2. Use a single camera to acquire a real-time image of the target satellite as the image to be measured. Match the image to be measured against the target-satellite images in the template library and compute the image similarity; the target-satellite image with the highest similarity is called the template image. Go to step 3.

Step 3. Perform edge detection on the image to be measured and on the template image to obtain their respective contour images. Rotate the contour image of the image to be measured while matching it against the contour image of the template image, compute the rotation similarity, and determine from it the rotation angle of the image to be measured relative to the template image. Go to step 4.

Step 4. Apply grayscale processing, a morphological closing operation, and threshold processing to the image to be measured to obtain an image of the target satellite separated from the background. Extract the complete edge contour from this image, obtain the target satellite's feature points from the contour, and, using the rotation angle of the image to be measured relative to the template image, determine the correspondence between these feature points and the feature-point sequence in the template library. Go to step 5.

Step 5. Using the correspondence between the target satellite's feature points and the feature-point sequence in the template library, the three-dimensional coordinates of the feature points, and the image feature points themselves, obtain the pose of the target satellite relative to the camera through an optimized EPnP pose-solving algorithm, then optimize and correct the result.

Compared with the prior art, the present invention has the following significant advantages:

(1) The pose is solved from feature points, so the computational load is small, the efficiency is high, and real-time pose information can be obtained.

(2) The method is based on monocular vision, which occupies little space on the satellite and has low power requirements, making it better suited to CubeSat platforms.

(3) Feature points are extracted from the overall contour of the target satellite rather than from a single characteristic component, so the extraction is less affected by illumination and is more robust.

Brief Description of the Drawings

Fig. 1 is a flowchart of the monocular-vision-based non-cooperative target pose measurement method for a CubeSat according to the present invention.

Fig. 2 is a front view of the target satellite.

Detailed Description of the Embodiments

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and, together with the description, serve to explain its principles. The drawings in the following description are only some embodiments of the present disclosure; those of ordinary skill in the art can obtain other drawings from them without creative effort.

The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present invention.

With reference to Fig. 1, a monocular-vision-based non-cooperative target pose measurement method for a CubeSat comprises the following steps:

Step 1. Acquire images of the target satellite from different angles; the three-dimensional model of the target satellite is known. Input the three-dimensional coordinates of the target satellite's feature points in the world coordinate system, establish a feature-point sequence, and build a template library in which target-satellite images and feature points correspond one to one. Specifically:

Collect the three-dimensional coordinates of the target satellite's feature points and assign each a serial number. Then acquire images of the target satellite from various angles at the same distance, process each image, extract the image feature points, and label the top, bottom, left, and right image feature points, in clockwise order, with the serial numbers of their corresponding three-dimensional feature points.

Specifically, the three-dimensional feature points of the target's exterior are extracted from the three-dimensional model of the target satellite and numbered in order. Taking the front view of the target satellite (with the solar panels deployed) as the xy-plane (see Fig. 2) and the center of the front face as the origin, a coordinate system is established and the three-dimensional coordinates of each feature point are obtained. The serial number of each feature point is paired one to one with its three-dimensional coordinates to form the template library.
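As an illustration only (the data layout and function names below are a hypothetical sketch, not taken from the patent), the one-to-one pairing of feature-point serial numbers with their three-dimensional coordinates for each view might be organized as:

```python
import numpy as np

def build_template_library(views, feature_points_3d):
    """Pair each template view with the ordered serial numbers of its
    visible feature points and their 3D coordinates (world frame, origin
    at the centre of the target's front face).

    views: list of (view_id, [serial numbers in clockwise order]).
    feature_points_3d: dict mapping serial number -> (x, y, z).
    """
    library = {}
    for view_id, sequence in views:
        library[view_id] = {
            "sequence": sequence,
            "coords_3d": np.array([feature_points_3d[n] for n in sequence]),
        }
    return library
```

A lookup in this library would then supply both the clockwise feature-point order and the 3D coordinates needed later by the pose-solving step.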

Step 2. Use a single camera to acquire a real-time image of the target satellite as the image to be measured. Match the image to be measured against the target-satellite images in the template library, compute the image similarity, and call the target-satellite image with the highest similarity the template image.

Specifically, the camera and the target satellite are fixed on a six-degree-of-freedom experimental platform. Their positions are adjusted so that the principal point of the camera and the center of the target satellite's front face lie on one horizontal line and the camera's imaging plane is parallel to the front face. The target satellite is then rotated and translated about three axes while real-time images are acquired as images to be measured.

The feature points of the image to be measured are matched one by one against the target-satellite images in the template library and the image similarity κ is computed; the target-satellite image with the largest κ is the template image corresponding to the image to be measured:

κ = P_m / P_c

where P_m is the number of feature points in the image to be measured that are matched to the target-satellite image, and P_c is the total number of feature points detected in the image to be measured.
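A minimal sketch of this similarity computation (the function names are illustrative; the feature detection and matching that produce P_m and P_c would use a feature descriptor, which the patent does not specify):

```python
def image_similarity(p_m: int, p_c: int) -> float:
    """kappa = P_m / P_c: the fraction of feature points detected in the
    image to be measured that were matched to a given template image."""
    return p_m / p_c if p_c else 0.0

def best_template_index(match_counts, p_c):
    """Index of the template image with the highest similarity kappa,
    given the match count P_m against each template."""
    return max(range(len(match_counts)),
               key=lambda i: image_similarity(match_counts[i], p_c))
```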

Step 3. Perform edge detection on the image to be measured and on the template image to obtain their respective contour images. Rotate the contour image of the image to be measured while matching it against the contour image of the template image, compute the rotation similarity, and determine from it the rotation angle of the image to be measured relative to the template image.

Specifically, edge detection is applied to the image to be measured and to the template image with the Canny operator, yielding their respective contour images. Both contour images are then segmented by the pyramid segmentation method to obtain simplified versions of the two contour images.

The simplified contour image of the template image is cropped to a circle centered on the target satellite. Taking the current orientation of this circular image as the 0° starting point, the image is rotated in 10° increments up to 360°; after each rotation it is matched against the contour image of the image to be measured by the normalized squared-difference measure to determine the approximate range of the optimal rotation angle. The contour image of the template image is then rotated in 1° increments within ±5° of this estimate, and the image similarity is again compared by the normalized squared-difference measure; the angle whose squared difference is lowest is taken as the rotation angle.
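The coarse-to-fine angle search described above can be sketched as follows, with the image rotation and normalized squared-difference matching abstracted into a score function to be minimized (a sketch under that assumption, not the patent's implementation):

```python
def find_rotation_angle(score, coarse_step=10, fine_window=5, fine_step=1):
    """Minimise score(angle), e.g. a normalised squared difference between
    the rotated template contour and the test contour: first scan the full
    circle in 10-degree steps, then refine within +/-5 degrees of the best
    coarse angle at 1-degree resolution."""
    coarse_best = min(range(0, 360, coarse_step), key=lambda a: score(a))
    fine_range = range(coarse_best - fine_window,
                       coarse_best + fine_window + 1, fine_step)
    return min(fine_range, key=lambda a: score(a % 360)) % 360
```

The two-stage search needs only 36 + 11 score evaluations instead of 360, which matters when each evaluation involves rotating and matching a full contour image.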

Step 4. Apply grayscale processing, a closing operation, and threshold processing to the image to be measured to obtain an image of the target satellite separated from the background. Extract the complete edge contour from this image, obtain the target satellite's feature points from the contour, and, using the rotation angle of the image to be measured relative to the template image, determine the correspondence between these feature points and the feature-point sequence in the template library.

Specifically, grayscale processing and a custom threshold operation are applied to the image to be measured. Because the space lighting environment is simulated with a light-absorbing black cloth, this yields a binary image with a black background and a white target satellite. All contours of the white target are extracted from the binary image, and the most complete envelope contour of the target is selected by area screening. When the rotation angle is close to 0°, 90°, 180°, or 270°, the pixel coordinates (x, y) along the contour are traversed and the maxima and minima of x + y and x − y are computed; this yields the four outer extreme feature points of the deployed solar panels, which are read in clockwise order, while the rotation angle determines how the fifth feature point, the extreme point of the satellite antenna dome, is obtained (when the rotation angle is 0°, the feature point corresponding to the minimum of the ordinate y is the fifth feature point). When the rotation angle is any other value, the pixel coordinates (x, y) along the contour are traversed and the maxima and minima of the abscissa x and the ordinate y are computed, yielding four feature points that are read in clockwise order, and the rotation angle again determines how the extreme point of the antenna dome is obtained (when the rotation angle lies in (0°, 90°), the feature point corresponding to the maximum of x − y is the fifth feature point). The five feature points are labeled with their corresponding serial numbers in the above order.
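The extremum rules above for the four solar-panel corner points can be sketched in NumPy (a hypothetical illustration; the clockwise ordering and the angle-dependent fifth point are separate steps and omitted here):

```python
import numpy as np

def panel_corners(contour, near_axis_aligned):
    """Select the four outer feature points of the deployed solar panels
    from the envelope contour, an (N, 2) array of (x, y) pixel coordinates.
    Near 0/90/180/270 degrees use the extrema of x + y and x - y;
    for other rotation angles use the extrema of x and y."""
    x, y = contour[:, 0], contour[:, 1]
    if near_axis_aligned:
        u, v = x + y, x - y
    else:
        u, v = x, y
    idx = [np.argmax(u), np.argmax(v), np.argmin(u), np.argmin(v)]
    return contour[idx]
```

For an axis-aligned rectangular envelope, the extrema of x + y and x − y are exactly its four corners, which is why the rule switches to plain x and y extrema once the envelope is rotated away from the axes.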

The threshold processing proceeds as follows:

First, generate the grayscale histogram of the image to be measured and count the number of pixels N_I at each gray level I (I = 0, 1, 2, …, 255).

Then compute the average gray value I_a of the image to be measured:

I_a = ( Σ_{I=0}^{255} I·N_I ) / ( Σ_{I=0}^{255} N_I )

Compute the difference I_e = I_a − T_e between the average gray value I_a of the image to be measured and its ideal gray value T_e, and subtract I_e from the gray value of every pixel of the image to obtain the processed gray values.

Compare the gray value of every pixel of the processed image with the ideal gray-value thresholds [Thresh_min, Thresh_max]. If the gray value I_j of the j-th pixel is greater than the maximum threshold Thresh_max, then I_j = Thresh_max; similarly, if I_j is less than the minimum threshold Thresh_min, then I_j = Thresh_min.
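Interpreting the three operations above literally, a compact NumPy sketch (T_e, Thresh_min, and Thresh_max are tunable values the patent leaves unspecified):

```python
import numpy as np

def custom_threshold(gray, ideal_gray, thresh_min, thresh_max):
    """Shift the image so that its mean gray value equals the ideal value
    T_e (subtracting I_e = I_a - T_e from every pixel), then clamp every
    pixel into the ideal gray-value range [Thresh_min, Thresh_max]."""
    i_e = gray.mean() - ideal_gray            # I_e = I_a - T_e
    shifted = gray.astype(np.float64) - i_e
    return np.clip(shifted, thresh_min, thresh_max)
```

Against a black-cloth background this pushes background pixels to the lower clamp and the illuminated target toward the upper clamp, which is what makes the subsequent binarization and contour extraction reliable.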

Step 5. Using the correspondence between the target satellite's feature points and the feature-point sequence in the template library, the three-dimensional coordinates of the feature points, and the image feature points themselves, obtain the pose of the target satellite relative to the camera through the optimized EPnP pose-solving algorithm, then optimize and correct the result.

Specifically, the rotation angle of the image to be measured relative to the template image determines the feature-point sequence in the template library and its three-dimensional coordinates. The pixel coordinates of the feature points in the image to be measured are put in one-to-one correspondence with the three-dimensional coordinates of the feature points in the template library, and the relative rotation matrix R and relative translation vector t are determined by the optimized EPnP pose-solving algorithm.

The traditional EPnP algorithm is based on four feature points, from which a unique relative pose can be obtained. However, the feature-point coordinates extracted from the image may contain errors, so this unique solution is also in error, and in practice the accuracy of the EPnP algorithm is limited. To obtain more accurate results, the relative rotation matrix R and relative translation vector t computed from the initial solution are therefore refined as follows.

S5.1. Define the mean position μ_A of the five feature points:

μ_A = (1/5) Σ_{i=1}^{5} A_i

where A_i is the three-dimensional position of the i-th feature point of the template library in the world coordinate system.

Compute the mean three-dimensional position μ_B of all feature points of the image to be measured:

μ_B = (1/5) Σ_{i=1}^{5} B_i

where B_i is the three-dimensional position of the i-th feature point of the image to be measured.

S5.2. Define the covariance matrix H:

H = Σ_{i=1}^{5} (A_i − μ_A)(B_i − μ_B)^T

S5.3. Perform a singular value decomposition of H:

H = UΣV^T

where U and V are unitary matrices and Σ is a diagonal matrix.

S5.4. Compute the relative rotation matrix R and relative translation vector t:

R = VU^T

t = -Rμ_A + μ_B

This refinement is effective not only for five feature points but remains applicable when there are more than five.

S5.5. If the determinant det(R) = 1, then R is the relative rotation matrix.

If det(R) = −1, then R is a reflection matrix and is corrected as follows:

R = V·diag(1, 1, −1)·U^T

The relative translation vector is then recomputed from the corrected rotation matrix:

t = -Rμ_A + μ_B
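Steps S5.1 to S5.5 amount to the standard SVD-based least-squares alignment of two corresponding point sets (the Kabsch method). A self-contained NumPy sketch, using the convention above in which R and t map the template points A onto the measured points B:

```python
import numpy as np

def refine_pose(A, B):
    """Least-squares R, t such that B_i ~= R @ A_i + t, including the
    reflection correction of step S5.5. A, B: (N, 3) arrays of
    corresponding 3D points (the patent uses five points, but any
    N >= 3 non-degenerate points work)."""
    mu_a, mu_b = A.mean(axis=0), B.mean(axis=0)
    H = (A - mu_a).T @ (B - mu_b)            # covariance matrix (S5.2)
    U, _, Vt = np.linalg.svd(H)              # H = U Sigma V^T (S5.3)
    R = Vt.T @ U.T                           # R = V U^T (S5.4)
    if np.linalg.det(R) < 0:                 # reflection case (S5.5)
        R = Vt.T @ np.diag([1.0, 1.0, -1.0]) @ U.T
    t = mu_b - R @ mu_a                      # t = -R mu_A + mu_B
    return R, t
```

On noise-free correspondences this recovers the pose exactly; with noisy feature points it returns the least-squares best fit, which is exactly why the patent applies it on top of the initial EPnP estimate.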

In summary, the present invention provides a monocular-vision-based non-cooperative target pose measurement method for a CubeSat. A template library is first built for the target satellite to be measured. A camera acquires real-time images of the target as images to be measured, the most similar template image is selected from the library, and the contour image of the image to be measured is rotated and matched against the contour image of the template image to determine their relative rotation angle. Grayscale and threshold processing extract the edge contour of the target satellite and its feature points, which are combined with the three-dimensional coordinate points in the template library in a pose-solving algorithm to obtain the translation vector and rotation matrix; after optimization and correction these give the required pose of the target satellite relative to the camera.

The above are only preferred embodiments of the present invention and are not intended to limit it; various modifications and variations will occur to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within its protection scope.

Claims (6)

1. A monocular-vision-based non-cooperative target pose measurement method for a CubeSat, characterized by comprising the following steps:
step 1, acquiring images of the target satellite from different angles, the three-dimensional model of the target satellite being known; inputting the three-dimensional coordinates of the target satellite's feature points in the world coordinate system, establishing a feature-point sequence, and building a template library in which the target-satellite images and the feature points correspond one to one; proceeding to step 2;
step 2, acquiring a real-time image of the target satellite with a single camera as the image to be measured, matching the image to be measured against the target-satellite images in the template library, computing the image similarity, calling the target-satellite image with the highest similarity the template image, and proceeding to step 3;
step 3, performing edge detection on the image to be measured and on the template image to obtain their respective contour images, rotating the contour image of the image to be measured while matching it against the contour image of the template image, computing the rotation similarity, determining from it the rotation angle of the image to be measured relative to the template image, and proceeding to step 4;
step 4, applying grayscale processing, a closing operation, and threshold processing to the image to be measured to obtain an image of the target satellite separated from the background, extracting the complete edge contour from this image, obtaining the target satellite's feature points from the contour, determining the correspondence between these feature points and the feature-point sequence in the template library according to the rotation angle of the image to be measured relative to the template image, and proceeding to step 5;
and step 5, obtaining the pose of the target satellite relative to the camera from the correspondence between the target satellite's feature points and the feature-point sequence in the template library, the three-dimensional coordinates of the feature points, and the image feature points, through an optimized EPnP pose-solving algorithm, and optimizing and correcting the result.
2. The monocular-vision-based non-cooperative target pose measurement method for a CubeSat according to claim 1, characterized in that in step 1 the images of the target satellite are acquired from different angles, the three-dimensional model of the target satellite is known, the three-dimensional coordinates of its feature points in the world coordinate system are input, a feature-point sequence is established, and a template library of one-to-one correspondence between the target-satellite images and the feature points is built, specifically as follows:
collecting the three-dimensional coordinates of the target satellite's feature points and marking serial numbers; acquiring images of the target satellite from all angles at the same distance, processing the images, extracting the image feature points, and labeling the top, bottom, left, and right image feature points, in clockwise order, with the serial numbers of their corresponding three-dimensional feature points;
extracting and sequentially numbering the three-dimensional feature points of the target's exterior according to the three-dimensional model of the target satellite; establishing a coordinate system with the deployed solar-panel face of the target satellite as the front and the center of the front of the target-satellite image as the origin, obtaining the three-dimensional coordinates of each feature point, and pairing the serial number of each feature point one to one with its three-dimensional coordinates to obtain the template library.
3. The method for measuring the position and orientation of a cubic star non-cooperative target based on monocular vision according to claim 2, wherein in step 2, a camera is used for acquiring a real-time image of the target star as an image to be measured, the image to be measured is matched with the image of the target star in the template library, the image similarity is calculated, and the image of the target star with the highest similarity to the image to be measured is called as a template image, specifically as follows:
fixing a camera and a target star on a six-degree-of-freedom experimental platform, adjusting the positions of the camera and the target star to ensure that a principal point of the camera and the center of a front image of the target star are in a horizontal line, simultaneously enabling an imaging plane of the camera to be parallel to the front of the target star, performing three-axis rotation and offset on the target star, and acquiring a real-time image as an image to be detected;
performing feature-point matching between the image to be measured and each target star image in the template library one by one, and calculating the image similarity k; the target star image with the maximum similarity k to the image to be measured is the template image corresponding to the image to be measured:
k = P_m / P_c
wherein P_m is the number of feature points in the image to be measured that are matched with the target star image, and P_c is the total number of feature points detected in the image to be measured.
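The similarity measure in this claim reduces to the ratio of matched to detected feature points. A minimal sketch, where the counts P_m and P_c are assumed to come from an external feature matcher (not implemented here):

```python
def image_similarity(p_m: int, p_c: int) -> float:
    """k = P_m / P_c: feature points matched against a template image,
    over all feature points detected in the image to be measured."""
    return p_m / p_c if p_c else 0.0

def best_template(match_counts, p_c):
    """Return (index, k) of the template image with the highest
    similarity k; match_counts[i] is P_m against template i."""
    scores = [image_similarity(m, p_c) for m in match_counts]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores[best]
```

With 50 detected points and per-template match counts [10, 45, 20], the second template wins with k = 0.9.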
4. The method for measuring the position and pose of a cubic star non-cooperative target based on monocular vision according to claim 3, wherein in step 3, the image to be measured and the template image are respectively subjected to edge detection, the contour image of the image to be measured and the contour image of the template image are correspondingly obtained, the contour image of the image to be measured is rotated and simultaneously matched with the contour image of the template image, the rotation similarity is calculated, and the rotation angle of the image to be measured relative to the template image is determined according to the rotation similarity, which specifically comprises the following steps:
performing edge detection on the image to be measured and on the template image with the Canny operator, correspondingly obtaining the contour image of the image to be measured and the contour image of the template image, and segmenting both contour images by a pyramid segmentation method to obtain simplified contour images of the image to be measured and of the template image;
cropping the simplified contour image of the template image to a circle centered on the target star; taking the current orientation of the simplified template contour image as the 0° starting point, rotating the circular image from 0° in 10° increments up to 360°, and matching each rotated template contour image against the contour image of the image to be measured by normalized squared difference to determine the approximate range of the optimal rotation angle; then rotating the template contour image within ±5° of that optimum in 1° steps, again comparing image similarity by normalized squared difference, and taking the rotation angle corresponding to the template contour image with the lowest squared-difference value as the rotation angle.
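The coarse-to-fine angle search described above can be sketched generically; here `score` stands in for the normalized-squared-difference comparison (lower is better) and is supplied by the caller, since the image rotation and matching themselves are outside this sketch:

```python
def coarse_to_fine_angle(score, coarse_step=10, fine_window=5):
    """Two-stage rotation search from the claim: scan 0°..350° in 10°
    steps with a dissimilarity score (lower is better), then refine in
    1° steps within ±5° of the coarse optimum."""
    coarse_best = min(range(0, 360, coarse_step), key=score)
    fine_range = range(coarse_best - fine_window, coarse_best + fine_window + 1)
    return min(fine_range, key=score)
```

The coarse pass evaluates 36 candidate angles and the fine pass 11, instead of 360 evaluations for an exhaustive 1° scan.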
5. The method for measuring the non-cooperative target pose of a cube star based on monocular vision according to claim 4, wherein in step 4, the appearance image of the target star separated from the background is obtained by performing gray processing, closing operation and threshold processing on the image to be measured, the complete edge contour in the appearance image of the target star is extracted, the target star feature points are obtained according to the edge contour, and the corresponding relation between the target star feature points and the feature point sequence in the template library is determined according to the rotation angle of the image to be measured relative to the template image, which is specifically as follows:
firstly, generating a gray-level histogram of the image to be measured, and counting the number of pixels N_I at each gray level I, I = 0, 1, 2, ..., 255;
then calculating the average gray value I_a of the image to be measured:
I_a = (Σ_{I=0}^{255} I·N_I) / (Σ_{I=0}^{255} N_I)
calculating the difference I_e = I_a − T_e between the average gray value I_a of the image to be measured and the ideal gray value T_e of the image to be measured, and subtracting I_e from the gray value of every pixel of the image to be measured to obtain the processed gray values; comparing the processed gray value of each pixel with the threshold range [Thresh_min, Thresh_max] of the ideal image gray value: if the gray value I_j of the j-th pixel is greater than the threshold maximum Thresh_max of the ideal image gray value, then
I_j = Thresh_max
similarly, if the gray value I_j of the j-th pixel is less than the threshold minimum Thresh_min of the ideal image gray value, then
I_j = Thresh_min
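A minimal sketch of the gray-shift and threshold-clipping steps above, operating on a flat list of pixel values; the ideal gray value and threshold bounds are illustrative parameters, not values specified by the patent:

```python
def normalize_gray(pixels, ideal_mean, thresh_min, thresh_max):
    """Shift the image so its mean gray approaches the ideal value T_e,
    then clip every pixel into [Thresh_min, Thresh_max]."""
    hist = [0] * 256                       # gray-level histogram N_I
    for p in pixels:
        hist[p] += 1
    mean = sum(i * n for i, n in enumerate(hist)) / len(pixels)  # I_a
    shift = round(mean - ideal_mean)       # I_e = I_a - T_e
    return [min(max(p - shift, thresh_min), thresh_max) for p in pixels]
```

For example, pixels [100, 150, 200] with an ideal mean of 100 are shifted down by 50 and, with bounds [0, 255], come out as [50, 100, 150].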
6. The method for measuring the position and pose of a cubic star non-cooperative target based on monocular vision according to claim 5, wherein in step 5, the pose information of the target star relative to the camera is obtained, then optimized and corrected, through the optimized EPnP pose-solving algorithm, using the correspondence between the target star feature points and the feature-point sequence in the template library, the three-dimensional coordinates of the target star feature points, and the target star feature points themselves, specifically as follows:
s5.1, defining the state average value μ_A of the 5 feature points:
μ_A = (1/5) Σ_{i=1}^{5} A_i
wherein A_i is the three-dimensional position of the i-th feature point in the template library in the world coordinate system;
calculating the average three-dimensional position μ_B of all the feature points of the image to be measured:
μ_B = (1/5) Σ_{i=1}^{5} B_i
wherein B_i is the three-dimensional position of the i-th feature point in the image to be measured;
s5.2, defining the covariance matrix H:
H = Σ_{i=1}^{5} (B_i − μ_B)(A_i − μ_A)^T
s5.3, performing singular value decomposition on H:
H = UΣV^T
wherein U and V are unitary matrices and Σ is a diagonal matrix;
s5.4, calculating the relative rotation matrix R and the relative translation vector t:
R = VU^T
t = μ_A − R·μ_B
s5.5, if the determinant det(R) = 1, then R is the relative rotation matrix;
if det(R) = −1, then R is a reflection matrix and is corrected as:
R = V·diag(1, 1, −1)·U^T
and the relative translation is then recomputed with the corrected rotation matrix:
t = μ_A − R·μ_B.
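Steps s5.1 through s5.5 are the standard SVD-based rigid alignment of two matched point sets (a Kabsch-style solve). A sketch with NumPy, where A holds the template-library points and B the points recovered from the image to be measured, solving A ≈ R·B + t:

```python
import numpy as np

def solve_pose(A, B):
    """Rigid transform (R, t) between matched 3-D feature sets via SVD:
    A are template-library points, B measured points, A ≈ R @ B + t."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    mu_A, mu_B = A.mean(axis=0), B.mean(axis=0)       # s5.1: centroids
    H = (B - mu_B).T @ (A - mu_A)                     # s5.2: covariance
    U, _, Vt = np.linalg.svd(H)                       # s5.3: H = U Σ V^T
    R = Vt.T @ U.T                                    # s5.4: R = V U^T
    if np.linalg.det(R) < 0:                          # s5.5: reflection fix
        R = Vt.T @ np.diag([1.0, 1.0, -1.0]) @ U.T
    t = mu_A - R @ mu_B                               # t = μ_A − R μ_B
    return R, t
```

Applied to five points rotated 90° about the z-axis and shifted by a known offset, the recovered R and t match the ground truth to machine precision.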
CN202211470026.3A 2022-11-23 2022-11-23 Non-cooperative target pose measurement method based on monocular vision for cube star Active CN115760984B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211470026.3A CN115760984B (en) 2022-11-23 2022-11-23 Non-cooperative target pose measurement method based on monocular vision for cube star


Publications (2)

Publication Number Publication Date
CN115760984A true CN115760984A (en) 2023-03-07
CN115760984B CN115760984B (en) 2024-09-10

Family

ID=85335711



Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005173128A (en) * 2003-12-10 2005-06-30 Hitachi Ltd Contour shape extractor
CN103644918A (en) * 2013-12-02 2014-03-19 中国科学院空间科学与应用研究中心 Method for performing positioning processing on lunar exploration data by satellite
CN103745458A (en) * 2013-12-26 2014-04-23 华中科技大学 A robust method for estimating the rotation axis and mass center of a spatial target based on a binocular optical flow
US20150235380A1 (en) * 2012-11-19 2015-08-20 Ihi Corporation Three-dimensional object recognition device and three-dimensional object recognition method
US20160189381A1 (en) * 2014-10-27 2016-06-30 Digimarc Corporation Signal detection, recognition and tracking with feature vector transforms
CN106558074A (en) * 2015-09-18 2017-04-05 河北工业大学 Coarse-fine combination matching algorithm in assemble of the satellite based on rotational transformation matrix
CN108562274A (en) * 2018-04-20 2018-09-21 南京邮电大学 A kind of noncooperative target pose measuring method based on marker
CN109708649A (en) * 2018-12-07 2019-05-03 中国空间技术研究院 A method and system for determining the attitude of a remote sensing satellite
CN111063021A (en) * 2019-11-21 2020-04-24 西北工业大学 A method and device for establishing a three-dimensional reconstruction model of a space moving target
US20200302247A1 (en) * 2019-03-19 2020-09-24 Ursa Space Systems Inc. Systems and methods for angular feature extraction from satellite imagery
CN111768447A (en) * 2020-07-01 2020-10-13 哈工大机器人(合肥)国际创新研究院 Monocular camera object pose estimation method and system based on template matching
CN112066879A (en) * 2020-09-11 2020-12-11 哈尔滨工业大学 Device and method for pose measurement of air-floating motion simulator based on computer vision
CN114295092A (en) * 2021-12-29 2022-04-08 航天科工智能运筹与信息安全研究院(武汉)有限公司 Satellite radiometer thermal deformation error compensation method based on quaternion scanning imaging model
US20220134639A1 (en) * 2019-06-12 2022-05-05 Vadient Optics, Llc Additive manufacture using composite material arranged within a mechanically robust matrix


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
RONGHUA DU et al.: "A vision-based relative navigation sensor for on-orbit servicing of CubeSats", 2021 7th International Conference on Mechanical Engineering and Automation Science (ICMEAS), 20 December 2021 (2021-12-20) *
ZHANG Xiaojun; ZHANG Minglu; BAI Feng; SUN Lingyu: "Research on target feature point matching for satellite robots", Computer Simulation, no. 05, 15 May 2016 (2016-05-15) *
XIAO Peng; ZHOU Zhifeng: "Research on relative attitude measurement methods for non-contact targets", Computer Measurement & Control, no. 04, 25 April 2019 (2019-04-25) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116681733A (en) * 2023-08-03 2023-09-01 南京航空航天大学 Near-distance real-time pose tracking method for space non-cooperative target
CN116681733B (en) * 2023-08-03 2023-11-07 南京航空航天大学 A short-distance real-time pose tracking method for non-cooperative targets in space

Also Published As

Publication number Publication date
CN115760984B (en) 2024-09-10

Similar Documents

Publication Publication Date Title
CN108052942B (en) A visual image recognition method for aircraft flight attitude
CN111311679B (en) Free floating target pose estimation method based on depth camera
CN109949361A (en) An Attitude Estimation Method for Rotor UAV Based on Monocular Vision Positioning
CN108562274A (en) A kind of noncooperative target pose measuring method based on marker
Petit et al. A robust model-based tracker combining geometrical and color edge information
CN112734844B (en) Monocular 6D pose estimation method based on octahedron
CN113888461A (en) Method, system and equipment for detecting defects of hardware parts based on deep learning
Li et al. Vision-based target detection and positioning approach for underwater robots
CN110222661B (en) Feature extraction method for moving target identification and tracking
CN113313116B (en) Underwater artificial target accurate detection and positioning method based on vision
CN113947724A (en) An automatic measurement method of line icing thickness based on binocular vision
CN115100136A (en) Workpiece category and pose estimation method based on YOLOv4-tiny model
CN111080685A (en) A method and system for 3D reconstruction of aircraft sheet metal parts based on multi-view stereo vision
CN108320310B (en) Image sequence-based space target three-dimensional attitude estimation method
CN116844124A (en) Three-dimensional object detection frame labeling method, three-dimensional object detection frame labeling device, electronic equipment and storage medium
CN111260736B (en) In-orbit real-time calibration method for internal parameters of space camera
CN113295171A (en) Monocular vision-based attitude estimation method for rotating rigid body spacecraft
CN115760984B (en) Non-cooperative target pose measurement method based on monocular vision for cube star
CN111626241A (en) Face detection method and device
CN110440792A (en) Navigation Information Extraction Method Based on Irregularity of Small Celestial Body
CN115049842A (en) Aircraft skin image damage detection and 2D-3D positioning method
CN111735447B (en) Star-sensitive-simulated indoor relative pose measurement system and working method thereof
CN115131433B (en) Non-cooperative target pose processing method and device and electronic equipment
CN113592953B (en) Binocular non-cooperative target pose measurement method based on feature point set
CN109690555B (en) Curvature-based face detector

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant