
CN110782492A - Pose tracking method and device - Google Patents

Pose tracking method and device

Info

Publication number
CN110782492A
CN110782492A (application CN201910950626.1A)
Authority
CN
China
Prior art keywords
pose
freedom
tracking
pixels
tracked object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910950626.1A
Other languages
Chinese (zh)
Other versions
CN110782492B (en)
Inventor
唐创奇
李卓
李宇光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung China Semiconductor Co Ltd
Samsung Electronics Co Ltd
Original Assignee
Samsung China Semiconductor Co Ltd
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung China Semiconductor Co Ltd and Samsung Electronics Co Ltd
Priority to CN201910950626.1A (granted as CN110782492B)
Publication of CN110782492A
Priority to KR1020200114552A (published as KR20210042011A)
Priority to US17/063,909 (published as US11610330B2)
Application granted
Publication of CN110782492B
Legal status: Active, Current
Anticipated expiration

Classifications

    • G06T 7/246 : Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/70 : Determining position or orientation of objects or cameras
    • G06F 3/011 : Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/0346 : Pointing devices displaced or positioned by the user, with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G06T 5/70 : Denoising; Smoothing
    • G06T 7/73 : Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/10004 : Still image; Photographic image
    • G06T 2207/10012 : Stereo images
    • G06T 2207/30204 : Marker

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

A pose tracking method and device are provided. The pose tracking method includes: acquiring an image of a tracked object, wherein a marker that blinks at a specific frequency is provided on the tracked object; acquiring, from the acquired image, pixels whose brightness has changed; and calculating a 6-degree-of-freedom pose of the tracked object based on the acquired pixels. This reduces the dependence of pose tracking on a specific layout of LED markers, reduces the latency of pose tracking, and improves the accuracy and efficiency of pose tracking.

Description

Pose tracking method and device

Technical Field

The present disclosure relates to the field of computer vision, and more specifically to a pose tracking method and apparatus.

Background Art

In recent years, a variety of 6-degree-of-freedom (6-DOF) pose estimation methods have been proposed and widely applied in robotic grasping, virtual reality/augmented reality, and human-computer interaction. Virtual reality and augmented reality place strict demands on system latency: if the system responds sluggishly to head motion, the user experiences dizziness and nausea. Valve's system has a latency of 7-15 milliseconds, and the lowest latency of current commercial virtual reality (VR) tracking products is about 15 milliseconds, which still falls short of a fully immersive experience.

Many current optical tracking systems are based on complementary metal-oxide-semiconductor (CMOS) cameras. However, the latency of such consumer-grade CMOS cameras is generally greater than 16.7 milliseconds (60 FPS). Because of this hardware limitation, these methods cannot display the user's motion input on the screen in time and cannot satisfy the low-latency requirement of VR.

Some schemes also use active light-emitting diode (LED) markers to recover the 6-DOF pose of an object. These methods have limitations, however: either the number of LEDs must be exactly four and they must be coplanar, or the computation is too expensive to be applied in a real-time system. Using only four LEDs harms the accuracy of pose tracking and the robustness of the system, because if one of the LEDs is not detected the pose solution fails. In addition, brute-force search for the correspondence between the 2D and 3D point sets is very time-consuming and cannot be applied when there are many LEDs or in real-time systems.

Summary of the Invention

Exemplary embodiments of the present disclosure provide a pose tracking method and apparatus that reduce the dependence of pose tracking on a specific layout of LED markers, reduce the latency of pose tracking, and improve the accuracy and efficiency of pose tracking.

According to an exemplary embodiment of the present disclosure, a pose tracking method is provided, including: acquiring an image of a tracked object, wherein a marker that blinks at a specific frequency is provided on the tracked object; acquiring, from the acquired image, pixels whose brightness has changed; and calculating a 6-degree-of-freedom pose of the tracked object based on the acquired pixels, thereby reducing the dependence of pose tracking on a specific layout of LED markers, reducing the latency of pose tracking, and improving the accuracy and efficiency of pose tracking.

Optionally, calculating the 6-DOF pose of the tracked object based on the acquired pixels may include: acquiring inertial measurement unit (IMU) data of the tracked object and estimating a 3-degree-of-freedom (3-DOF) attitude of the tracked object based on the acquired IMU data, where the 3-DOF attitude is the rotation about the x, y and z axes in the body coordinate system of the tracked object; and calculating the 6-DOF pose of the tracked object based on the 3-DOF attitude and the acquired pixels, where the 6-DOF pose consists of the translation along the x, y and z axes and the rotation about the x, y and z axes in the body coordinate system of the tracked object, thereby improving the accuracy and efficiency of pose tracking.

Optionally, calculating the 6-DOF pose of the tracked object based on the 3-DOF attitude and the acquired pixels may include: based on the 3-DOF attitude and the acquired pixels, solving the correspondence between the 2D point set and the 3D point set of the markers to obtain matching pairs between the 2D point set and the 3D point set, where the 2D point set contains the pixel coordinates of the markers and the 3D point set contains the coordinates of the markers in the body coordinate system of the tracked object; and calculating the 6-DOF pose of the tracked object based on the matching pairs, thereby improving the accuracy and efficiency of pose tracking.

Optionally, calculating the 6-DOF pose of the tracked object may include: removing, from the matching pairs, pixels whose reprojection error exceeds a preset threshold and computing a 6-DOF pose from the remaining pixels; and minimizing the reprojection error of the computed 6-DOF pose to obtain the 6-DOF pose of the tracked object, thereby improving the accuracy and efficiency of pose tracking.

Optionally, calculating the 6-DOF pose of the tracked object may include: removing, from the matching pairs, pixels whose reprojection error exceeds a preset threshold and computing a 6-DOF pose from the remaining pixels; minimizing the reprojection error of the computed 6-DOF pose; and optimizing the resulting 6-DOF pose according to the 3-DOF attitude to obtain the 6-DOF pose of the tracked object, thereby improving the accuracy and efficiency of pose tracking.

Optionally, after the 6-DOF pose of the tracked object is obtained, the pose tracking method may further include: re-matching, according to the 6-DOF pose of the tracked object, the pixels in the matching pairs whose reprojection error exceeds the preset threshold, for use in subsequent pose tracking.

According to an exemplary embodiment of the present disclosure, a pose tracking apparatus is provided, including: an image acquisition unit configured to acquire an image of a tracked object, where a marker that blinks at a specific frequency is provided on the tracked object; a pixel acquisition unit configured to acquire, from the acquired image, pixels whose brightness has changed; and a pose calculation unit configured to calculate a 6-DOF pose of the tracked object based on the acquired pixels, thereby reducing the dependence of pose tracking on a specific layout of LED markers, reducing the latency of pose tracking, and improving the accuracy and efficiency of pose tracking.

Optionally, the pose calculation unit may be configured to: acquire IMU data of the tracked object and estimate a 3-DOF attitude of the tracked object based on the acquired IMU data, where the 3-DOF attitude is the rotation about the x, y and z axes in the body coordinate system of the tracked object; and calculate the 6-DOF pose of the tracked object based on the 3-DOF attitude and the acquired pixels, where the 6-DOF pose consists of the translation along the x, y and z axes and the rotation about the x, y and z axes in the body coordinate system of the tracked object, thereby improving the accuracy and efficiency of pose tracking.

Optionally, the pose calculation unit may be further configured to: based on the 3-DOF attitude and the acquired pixels, solve the correspondence between the 2D point set and the 3D point set of the markers to obtain matching pairs between the 2D point set and the 3D point set, where the 2D point set contains the pixel coordinates of the markers and the 3D point set contains the coordinates of the markers in the body coordinate system of the tracked object; and calculate the 6-DOF pose of the tracked object based on the matching pairs, thereby improving the accuracy and efficiency of pose tracking.

Optionally, the pose calculation unit may be further configured to: remove, from the matching pairs, pixels whose reprojection error exceeds a preset threshold and compute a 6-DOF pose from the remaining pixels; and minimize the reprojection error of the computed 6-DOF pose to obtain the 6-DOF pose of the tracked object, thereby improving the accuracy and efficiency of pose tracking.

Optionally, the pose calculation unit may be further configured to: remove, from the matching pairs, pixels whose reprojection error exceeds a preset threshold and compute a 6-DOF pose from the remaining pixels; minimize the reprojection error of the computed 6-DOF pose; and fuse the 3-DOF attitude with the 6-DOF pose obtained after the reprojection-error minimization to obtain the 6-DOF pose of the tracked object, thereby improving the accuracy and efficiency of pose tracking.

Optionally, the pose tracking apparatus may further include: a re-matching unit configured to, after the 6-DOF pose of the tracked object is obtained, re-match the pixels whose reprojection error exceeds the preset threshold according to the 6-DOF pose of the tracked object, for use in subsequent pose tracking.

According to an exemplary embodiment of the present disclosure, an electronic device is provided, including: a camera configured to acquire an image of a tracked object and to acquire, from the acquired image, pixels whose brightness has changed, where a marker that blinks at a specific frequency is provided on the tracked object; and a processor configured to calculate a 6-DOF pose of the tracked object based on the pixels acquired by the camera, thereby reducing the dependence of pose tracking on a specific layout of LED markers, reducing the latency of pose tracking, and improving the accuracy and efficiency of pose tracking.

According to an exemplary embodiment of the present disclosure, a computer-readable storage medium is provided on which a computer program is stored; when the computer program is executed by a processor, the pose tracking method according to the exemplary embodiments of the present disclosure is implemented.

According to an exemplary embodiment of the present disclosure, a computing device is provided, including a processor and a memory storing a computer program that, when executed by the processor, implements the pose tracking method according to the exemplary embodiments of the present disclosure.

According to the pose tracking method and apparatus of the exemplary embodiments of the present disclosure, an image of the tracked object is acquired, pixels whose brightness has changed are extracted from the acquired image, and the 6-DOF pose of the tracked object is calculated based on the acquired pixels, which reduces the dependence of pose tracking on a specific layout of LED markers, reduces the latency of pose tracking, and improves the accuracy and efficiency of pose tracking.

Additional aspects and/or advantages of the general concept of the present disclosure will be set forth in part in the description that follows, and in part will be apparent from the description or may be learned by practice of the general concept.

Brief Description of the Drawings

The above and other objects and features of exemplary embodiments of the present disclosure will become clearer from the following description taken in conjunction with the accompanying drawings, in which:

FIG. 1 is a flowchart of a pose tracking method according to an exemplary embodiment of the present disclosure;

FIG. 2 shows 2D active LED marker tracking results for a tracked object;

FIG. 3 is a block diagram of a pose tracking apparatus according to an exemplary embodiment of the present disclosure;

FIG. 4 is a schematic diagram of an electronic device according to an exemplary embodiment of the present disclosure; and

FIG. 5 is a schematic diagram of a computing device according to an exemplary embodiment of the present disclosure.

Detailed Description

Reference will now be made in detail to exemplary embodiments of the present disclosure, examples of which are illustrated in the accompanying drawings, in which like reference numerals refer to like parts throughout. The embodiments are described below with reference to the figures in order to explain the present disclosure.

FIG. 1 is a flowchart of a pose tracking method according to an exemplary embodiment of the present disclosure. The method shown in FIG. 1 applies to a tracked object provided with multiple markers that blink at a specific frequency; such a marker may be, for example, an active LED. In the following description an LED is used as an example of the marker, but it should be understood that the present invention is not limited thereto, and those skilled in the art may adopt other forms of markers as needed.

Referring to FIG. 1, in step S101 an image of the tracked object is acquired.

In an exemplary embodiment of the present disclosure, the image of the tracked object may be acquired by a dynamic vision sensor (DVS) camera. The pose tracking method shown in FIG. 1 may be applied to an electronic device that contains both a camera capable of capturing brightness-change pixels and a host capable of performing the computation, or to a system composed of such a camera and such a host.

In step S102, pixels whose brightness has changed are acquired from the acquired image.

In an exemplary embodiment of the present disclosure, after acquiring the image in step S101 the DVS camera does not transmit the image directly to the host; instead, in step S102 it extracts the pixels whose brightness has changed and transmits only those pixels to the host for pose tracking. This reduces the amount of data transmitted and the amount of data used in the computation, and thus reduces the latency of pose tracking.

Specifically, when a marker (for example, an active LED) blinks, the DVS generates corresponding on and off events. These events can easily be distinguished from other events by the blinking frequency of the LED. The filtered events can be divided into clusters, each cluster representing one LED. A region-growing clustering algorithm and a lightweight voting algorithm are then used to process the filtered events, and the point with the highest density in each cluster is identified as the center of the marker. In addition, a global nearest-neighbor tracking method and a constant-velocity Kalman filter can be used to track multiple markers, reducing the probability of missed and false marker detections.
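As an illustrative sketch only (not taken from the patent), the region-growing clustering and the density voting described above could be realized roughly as follows; the function names, the pixel radii and the numpy-based data layout are assumptions:

```python
import numpy as np

def cluster_events(points, radius=3.0):
    """Group 2D event coordinates into clusters by region growing.
    Assumed parameters; the patent does not give the exact algorithm."""
    points = np.asarray(points, dtype=float)
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        cluster, frontier = [seed], [seed]
        while frontier:
            i = frontier.pop()
            # grow the region: absorb all unvisited events within `radius`
            near = [j for j in list(unvisited)
                    if np.linalg.norm(points[i] - points[j]) <= radius]
            for j in near:
                unvisited.remove(j)
                cluster.append(j)
                frontier.append(j)
        clusters.append(points[cluster])
    return clusters

def marker_center(cluster, radius=1.5):
    """Vote for the densest event: each event's score is its neighbor count
    within `radius`; the highest-scoring event is taken as the LED center."""
    d = np.linalg.norm(cluster[:, None, :] - cluster[None, :, :], axis=-1)
    votes = (d <= radius).sum(axis=1)
    return cluster[np.argmax(votes)]
```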

In step S103, the 6-DOF pose of the tracked object is calculated based on the acquired pixels. Here, the 6-DOF pose consists of the translation along the x, y and z axes and the rotation about the x, y and z axes in the body coordinate system of the tracked object.

In an exemplary embodiment of the present disclosure, only the pixels whose brightness changed because of motion are used to calculate the 6-DOF pose of the tracked object, which reduces the latency of pose tracking.

In an exemplary embodiment of the present disclosure, when calculating the 6-DOF pose of the tracked object based on the acquired pixels, inertial measurement unit (IMU) data of the tracked object may first be acquired and a 3-DOF attitude of the tracked object (that is, the rotation about the x, y and z axes in the body coordinate system of the tracked object) may be estimated from the IMU data; the 6-DOF pose of the tracked object is then calculated based on the 3-DOF attitude and the acquired pixels, which improves the accuracy and efficiency of pose tracking.
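The patent only states that an attitude and heading reference system estimates the 3-DOF attitude from the IMU data. As a rough illustration of one ingredient of such a system, and purely as an assumption, plain gyroscope quaternion integration could look like this (no accelerometer or magnetometer correction is shown):

```python
import numpy as np

def integrate_gyro(q, omega, dt):
    """One step of quaternion attitude integration from gyroscope rates.
    q = [w, x, y, z] is a unit quaternion (body to reference frame),
    omega is the angular rate in rad/s in the body frame, dt is in seconds."""
    wx, wy, wz = omega
    # quaternion derivative: q_dot = 0.5 * Omega(omega) @ q
    Omega = np.array([
        [0.0, -wx, -wy, -wz],
        [wx,  0.0,  wz, -wy],
        [wy, -wz,  0.0,  wx],
        [wz,  wy, -wx,  0.0],
    ])
    q = q + 0.5 * dt * Omega @ q
    return q / np.linalg.norm(q)
```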

In an exemplary embodiment of the present disclosure, when calculating the 6-DOF pose of the tracked object based on the 3-DOF attitude and the acquired pixels, the correspondence between the 2D point set and the 3D point set of the markers may first be solved based on the 3-DOF attitude and the acquired pixels to obtain matching pairs between the 2D point set and the 3D point set, and the 6-DOF pose of the tracked object is then calculated based on the matching pairs. Here, the 2D point set contains the pixel coordinates of each marker, and the 3D point set contains the coordinates of each marker in the body coordinate system of the tracked object.

Specifically, when calculating the 6-DOF pose of the tracked object based on the 3-DOF attitude and the acquired pixels, let p_IA and p_IB be the undistorted image coordinates of two LEDs (LED A and LED B), and let p_A and p_B be the coordinates of LED A and LED B in the body coordinate system of the object. The 3-DOF attitude matrix R = [r_1, r_2, r_3]^T of the tracked object can be estimated from the IMU data by an attitude and heading reference system, where r_1, r_2 and r_3 are the first, second and third row vectors of R. t = [t_x, t_y, t_z]^T is the unknown translation vector, where t_x, t_y and t_z are the displacements along the x, y and z axes. From the pinhole imaging model (treating the undistorted coordinates as normalized image coordinates), the following equations are obtained:

x_A = (r_1 · p_A + t_x) / (r_3 · p_A + t_z),    y_A = (r_2 · p_A + t_y) / (r_3 · p_A + t_z)    (1)

x_B = (r_1 · p_B + t_x) / (r_3 · p_B + t_z),    y_B = (r_2 · p_B + t_y) / (r_3 · p_B + t_z)    (2)

where x_A, y_A and x_B, y_B are the image x and y coordinates of LED A and LED B, respectively. Only t is unknown in these equations: there are four equations and three unknowns, and solving them gives

t = A_z p_IA - R p_A    (3)

where p_IA = [x_A, y_A, 1]^T and A_z = r_3 · p_A + t_z is the depth of LED A along the camera's optical axis; its closed-form value in terms of the known quantities follows by eliminating t from equations (1) and (2).
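For illustration only (this sketch is not part of the original disclosure), equations (1)-(3) can be checked numerically by stacking the four linear equations in t and solving them by least squares; the function names and the synthetic data are assumptions, and the image coordinates are normalized (intrinsics removed):

```python
import numpy as np

def recover_translation(R, pA, pB, pIA, pIB):
    """Recover t from a known rotation R, the LED body coordinates pA, pB and
    their normalized image coordinates pIA=(xA, yA), pIB=(xB, yB), by stacking
    the four linear equations  x*(r3.p + tz) = r1.p + tx  etc. from (1)-(2)."""
    rows, rhs = [], []
    for p, (x, y) in ((pA, pIA), (pB, pIB)):
        r1p, r2p, r3p = R @ p
        rows.append([1.0, 0.0, -x]); rhs.append(x * r3p - r1p)
        rows.append([0.0, 1.0, -y]); rhs.append(y * r3p - r2p)
    t, *_ = np.linalg.lstsq(np.asarray(rows), np.asarray(rhs), rcond=None)
    return t

# quick check with synthetic data
Rz = lambda a: np.array([[np.cos(a), -np.sin(a), 0],
                         [np.sin(a),  np.cos(a), 0],
                         [0.0, 0.0, 1.0]])
R, t_true = Rz(0.3), np.array([0.1, -0.05, 0.8])
pA, pB = np.array([0.02, 0.0, 0.01]), np.array([-0.03, 0.04, 0.0])
proj = lambda p: (R @ p + t_true)[:2] / (R @ p + t_true)[2]
print(recover_translation(R, pA, pB, proj(pA), proj(pB)))  # approximately t_true
```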

Using DVS camera detection and the clustering algorithm, the pixel-coordinate point set O of the LEDs in the image and the known point set L of LED coordinates in the object body coordinate system are obtained. Two points (o, l) are chosen arbitrarily from O and L and paired, and the translation t is computed from equation (3); repeating this yields a list T of candidate translation vectors. Invalid translation vectors (vectors whose elements are too large or whose t_z is negative) can be removed, and approximately equal translation vectors can be merged. For each valid translation vector t_valid in T, the set L_v of LEDs visible under that pose can be determined with the Moller-Trumbore ray-intersection algorithm: if the first intersection of the ray from the camera to an LED with the object is that LED, the LED is visible; otherwise it is occluded under this pose. Determining the visible LED set reduces the amount of computation and the number of mismatches when many LEDs are used. The visible point set L_v is then projected onto the image plane P using the corresponding 6-DOF pose and the camera intrinsics, and the Kuhn-Munkres algorithm is used to find the best match and the matching error between the visible point set L_v and the observed point set O. Traversing the list T of candidate translations, the match with the smallest matching error gives the correct 2D/3D matching pairs.
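A minimal sketch of the candidate evaluation and Kuhn-Munkres matching step, assuming scipy's Hungarian-algorithm implementation stands in for Kuhn-Munkres; the function names and the `project(pose)` callback are assumptions, not the patent's actual interface:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def best_match(projected, observed):
    """Match projected visible LEDs to observed 2D detections with the
    Kuhn-Munkres (Hungarian) algorithm; return (pairs, total matching error)."""
    cost = np.linalg.norm(projected[:, None, :] - observed[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols)), cost[rows, cols].sum()

def pick_best_candidate(candidate_poses, project, observed):
    """Evaluate every candidate pose and keep the one whose optimal
    assignment has the smallest matching error.
    `project(pose)` is assumed to return the projected visible LED points."""
    best = min((((best_match(project(pose), observed)), pose)
                for pose in candidate_poses),
               key=lambda item: item[0][1])
    (pairs, error), pose = best
    return pose, pairs, error
```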

In an exemplary embodiment of the present disclosure, when calculating the 6-DOF pose of the tracked object, the pixels whose reprojection error exceeds a preset threshold may first be removed from the matching pairs and a 6-DOF pose computed from the remaining pixels; a reprojection-error minimization is then performed on the computed pose to obtain the 6-DOF pose of the tracked object, further improving the accuracy and efficiency of pose tracking.

In an exemplary embodiment of the present disclosure, when calculating the 6-DOF pose of the tracked object, the pixels whose reprojection error exceeds a preset threshold may first be removed from the matching pairs and a 6-DOF pose computed from the remaining pixels; a reprojection-error minimization is then performed on the computed pose, after which the resulting 6-DOF pose is optimized according to the 3-DOF attitude to obtain the 6-DOF pose of the tracked object, further improving the accuracy and efficiency of pose tracking.

In an exemplary embodiment of the present disclosure, after the 6-DOF pose of the tracked object is obtained, the pixels in the matching pairs whose reprojection error exceeds the preset threshold may be re-matched according to the 6-DOF pose of the tracked object. In addition, the correspondence between the 2D point set and the 3D point set (or the matching pairs) of newly observed markers (for example, active LEDs) can be updated, further improving the accuracy and efficiency of pose tracking.

Specifically, after the correspondence between the 2D point set and the 3D point set is obtained, the Random Sample Consensus (RANSAC) algorithm may first be used to remove from the matching pairs the points whose reprojection error exceeds the preset threshold, and the Efficient Perspective-n-Point (EPnP) algorithm is then used to solve for the 6-DOF pose. A bundle adjustment (BA) algorithm further refines the coarse pose obtained by EPnP, yielding a more accurate pose. This accurate pose is used to re-match the points in the matching pairs whose reprojection error exceeds the preset threshold and to update the matching relations of newly observed LEDs. Finally, a sensor-fusion algorithm based on an extended Kalman filter fuses the 6-DOF pose obtained above with the 3-DOF attitude from the IMU to obtain a smoother and more consistent 6-DOF pose (that is, the fused pose). The fused 6-DOF pose can also be used to re-match the points whose reprojection error exceeds the preset threshold and to update the matching relations of newly observed LEDs.
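The patent does not name any particular library; as an assumed illustration, the RANSAC outlier removal, EPnP solve and reprojection-error refinement of a single frame could be approximated with OpenCV as below (the windowed bundle adjustment and the EKF fusion with the IMU attitude are not shown):

```python
import cv2
import numpy as np

def solve_pose(pts3d, pts2d, K, dist, reproj_thresh=2.0):
    """RANSAC outlier removal + EPnP, then Levenberg-Marquardt refinement
    minimizing the reprojection error on the inliers."""
    pts3d = np.asarray(pts3d, dtype=np.float64)
    pts2d = np.asarray(pts2d, dtype=np.float64)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts3d, pts2d, K, dist,
        flags=cv2.SOLVEPNP_EPNP,
        reprojectionError=reproj_thresh)
    if not ok:
        return None
    # refine on the inliers only
    rvec, tvec = cv2.solvePnPRefineLM(
        pts3d[inliers[:, 0]], pts2d[inliers[:, 0]], K, dist, rvec, tvec)
    return rvec, tvec, inliers
```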

The pose tracking method according to the exemplary embodiments of the present disclosure thus reduces the dependence of pose tracking on a specific layout of LED markers, reduces the latency of pose tracking, and improves the accuracy and efficiency of pose tracking.

The DVS camera used to obtain the pixels whose brightness changes due to the motion of the tracked object may be, for example, Samsung's third-generation VGA device, which has a resolution of 640×480 and connects to the host over USB 3.0. A DVS camera generates an event for each pixel whose relative light intensity changes. Each event is represented by a tuple <t, x, y, p>, where t is the timestamp of the event (microsecond resolution), x and y are the pixel coordinates of the event, and p ∈ {0, 1} is the polarity of the event: when the LED turns on, an event <t, x, y, 1> is generated, and when the LED turns off, an event <t, x, y, 0> is generated.
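A small illustrative representation of the <t, x, y, p> tuple (the type name and helper are assumptions, not the device SDK):

```python
from typing import NamedTuple

class Event(NamedTuple):
    t: int   # timestamp in microseconds
    x: int   # pixel column
    y: int   # pixel row
    p: int   # polarity: 1 = brightness increase (LED on), 0 = decrease (LED off)

def split_by_polarity(events):
    """Separate an event stream into on-events and off-events."""
    on = [e for e in events if e.p == 1]
    off = [e for e in events if e.p == 0]
    return on, off
```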

The pose tracking method relies on some fixed parameters (the camera intrinsics) and transformation matrices (IMU to handle body, DVS camera to helmet, and so on). In the exemplary embodiments of the present disclosure, the OptiTrack optical motion tracking system is used to calibrate these fixed parameters and transformation matrices; OptiTrack also provides the ground-truth handle pose used to evaluate the accuracy of pose tracking. Once calibration is complete, the OptiTrack system is not needed during the user's actual use.

The OptiTrack setup involves several coordinate systems, such as the camera (C), helmet (H), world (W), IMU (I), handle body (B), handle model (M) and OptiTrack (O) coordinate systems. To simplify the system, the handle body coordinate system is aligned with the handle model coordinate system, and the world coordinate system is aligned with the OptiTrack coordinate system. The 6-DOF pose of the motion handle to be solved can therefore be expressed as the pair (R, ^C P_M), where R is the rotation matrix from the handle model coordinate system to the camera coordinate system and ^C P_M is the origin of the model coordinate system expressed in the camera coordinate system. The intrinsics of the DVS camera and several fixed transformation matrices need to be calibrated in advance.
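Purely for illustration of the frame bookkeeping implied by this calibration chain (not part of the disclosure; frame names appear only in the comments), rigid transforms between these coordinate systems can be composed and inverted as follows:

```python
import numpy as np

def make_T(R, p):
    """Build a 4x4 homogeneous transform from rotation R and translation p."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = p
    return T

def compose(*Ts):
    """Chain transforms, e.g. camera<-model = camera<-helmet @ helmet<-world @ world<-model."""
    out = np.eye(4)
    for T in Ts:
        out = out @ T
    return out

def invert(T):
    """Invert a rigid transform without a general matrix inverse."""
    R, p = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ p
    return Ti
```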

Since the optical characteristics of the DVS are the same as those of an ordinary CMOS camera, the standard pinhole imaging model can be used to determine the camera intrinsics (for example, the focal length, the projection center and the distortion parameters). The difference between a DVS camera and an ordinary camera is that the DVS cannot see anything whose illumination does not change, so a flashing checkerboard can be used to calibrate the DVS.
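Assuming frames have been reconstructed from the flashing-checkerboard events, the standard OpenCV checkerboard calibration could then be applied as sketched below; the board size, square size and function names are assumptions:

```python
import cv2
import numpy as np

def calibrate_from_frames(frames, board=(9, 6), square=0.025):
    """Standard pinhole calibration on grayscale frames reconstructed from
    the flashing-checkerboard events (board and square size are assumed)."""
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
    obj_pts, img_pts = [], []
    for gray in frames:
        found, corners = cv2.findChessboardCorners(gray, board)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, frames[0].shape[::-1], None, None)
    return K, dist, rms
```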

Some of the fixed transformation matrices can be calibrated with OptiTrack spheres. In the 3D model, the origin of the handle model is the center of the large circle at the top of the handle. OptiTrack spheres can be fixed on the circumference of this circle, marking the directions of the model's X, Y and Z axes, so that the model coordinate system of the motion handle can be determined in the OptiTrack optical motion tracking system. Fixing an OptiTrack sphere at the IMU chip and using the IMU readings to determine the X, Y and Z axes likewise determines the IMU coordinate system in the OptiTrack system. Because OptiTrack records the poses of the IMU coordinate system and the model coordinate system in real time, the transformation between the handle model coordinate system and the IMU coordinate system can be computed. The helmet-to-camera transformation matrix can also be determined from LEDs blinking in a specific pattern: the helmet pose and the pattern pose are provided by the OptiTrack system, and the camera pose with respect to the blinking pattern is computed with the EPnP algorithm (the point-set correspondence of the specific pattern is fixed). The two matrices obtained from this calibration are concrete numerical values that are fixed for all subsequent tracking.

The blinking interval of an active LED marker can be determined from on-off-on and off-on-off event sequences. If the blinking interval at a pixel falls within [800 μs, 1200 μs], the events at that pixel can be attributed to LED blinking. A region-growing algorithm and a voting method are then used to determine the center position of each LED, and multiple LEDs are tracked using global nearest-neighbor association and Kalman filtering. FIG. 2 shows the 2D active LED marker tracking results for a tracked object: each continuous line represents the trajectory of one marker (for example, an LED), a small hollow circle marks the start of a trajectory, and a large circle containing a solid dot marks its end.
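As an assumed sketch of the interval test described above (using the Event tuple from the earlier sketch; the bookkeeping details are not from the patent):

```python
def blink_filter(events, lo_us=800, hi_us=1200):
    """Keep only events whose per-pixel on-off-on (or off-on-off) interval
    falls within [lo_us, hi_us], i.e. events plausibly caused by LED blinking."""
    history = {}       # (x, y) -> last three events at that pixel
    led_events = []
    for e in events:
        hist = history.setdefault((e.x, e.y), [])
        hist.append(e)
        if len(hist) > 3:
            hist.pop(0)
        # need on-off-on or off-on-off: three events with alternating polarity
        if len(hist) == 3 and hist[0].p == hist[2].p != hist[1].p:
            interval = hist[2].t - hist[0].t
            if lo_us <= interval <= hi_us:
                led_events.append(e)
    return led_events
```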

In addition, to improve the performance of the BA algorithm, the optimization window of the BA algorithm can be set to 10, and the optimization is performed only once every 4 frames. The matching relations do not need to be updated every time; they are updated only when the ratio of the number of matched LEDs to the number of all observed LEDs falls below a preset value, for example 0.6, which improves the efficiency of the BA stage. After the start-up phase has been initialized, the whole processing pipeline takes only 1.23 milliseconds; adding the 1 millisecond needed to filter the LED blinking events, the total latency is 2.23 milliseconds.

The pose tracking method according to the exemplary embodiments of the present disclosure has been described above with reference to FIGS. 1 and 2. A pose tracking apparatus and its units according to an exemplary embodiment of the present disclosure are described below with reference to FIG. 3.

FIG. 3 is a block diagram of a pose tracking apparatus according to an exemplary embodiment of the present disclosure.

Referring to FIG. 3, the pose tracking apparatus includes an image acquisition unit 31, a pixel acquisition unit 32 and a pose calculation unit 33.

The image acquisition unit 31 is configured to acquire an image of the tracked object, where a marker that blinks at a specific frequency is provided on the tracked object.

The pixel acquisition unit 32 is configured to acquire, from the acquired image, pixels whose brightness has changed.

The pose calculation unit 33 is configured to calculate a 6-DOF pose of the tracked object based on the acquired pixels.

In an exemplary embodiment of the present disclosure, the pose calculation unit 33 may be configured to: acquire IMU data of the tracked object and estimate a 3-DOF attitude of the tracked object based on the acquired IMU data, where the 3-DOF attitude is the rotation about the x, y and z axes in the body coordinate system of the tracked object; and calculate the 6-DOF pose of the tracked object based on the 3-DOF attitude and the acquired pixels, where the 6-DOF pose consists of the translation along the x, y and z axes and the rotation about the x, y and z axes in the body coordinate system of the tracked object.

In an exemplary embodiment of the present disclosure, the pose calculation unit 33 may be further configured to: based on the 3-DOF attitude and the acquired pixels, solve the correspondence between the 2D point set and the 3D point set of the markers to obtain matching pairs between the 2D point set and the 3D point set, where the 2D point set contains the pixel coordinates of each marker and the 3D point set contains the coordinates of each marker in the body coordinate system of the tracked object; and calculate the 6-DOF pose of the tracked object based on the matching pairs.

In an exemplary embodiment of the present disclosure, the pose calculation unit 33 may be further configured to: remove, from the matching pairs, pixels whose reprojection error exceeds a preset threshold and compute a 6-DOF pose from the remaining pixels; and minimize the reprojection error of the computed 6-DOF pose to obtain the 6-DOF pose of the tracked object.

In an exemplary embodiment of the present disclosure, the pose calculation unit 33 may be further configured to: remove, from the matching pairs, pixels whose reprojection error exceeds a preset threshold and compute a 6-DOF pose from the remaining pixels; minimize the reprojection error of the computed 6-DOF pose; and optimize the resulting 6-DOF pose according to the 3-DOF attitude to obtain the 6-DOF pose of the tracked object.

In an exemplary embodiment of the present disclosure, the pose tracking apparatus may further include a re-matching unit configured to, after the 6-DOF pose of the tracked object is obtained, re-match the pixels whose reprojection error exceeds the preset threshold according to the 6-DOF pose of the tracked object.

FIG. 4 is a schematic diagram of an electronic device according to an exemplary embodiment of the present disclosure.

Referring to FIG. 4, the electronic device 4 includes a camera 41 and a processor 42.

The camera 41 is configured to acquire an image of the tracked object and to acquire, from the acquired image, pixels whose brightness has changed, where a marker that blinks at a specific frequency is provided on the tracked object. The processor 42 is configured to calculate a 6-DOF pose of the tracked object based on the pixels acquired by the camera.

In an exemplary embodiment of the present disclosure, the processor 42 may be configured to acquire IMU data of the tracked object and estimate a 3-DOF attitude of the tracked object based on the acquired IMU data, where the 3-DOF attitude is the rotation about the x, y and z axes in the body coordinate system of the tracked object, and to calculate the 6-DOF pose of the tracked object based on the 3-DOF attitude and the acquired pixels, where the 6-DOF pose consists of the translation along the x, y and z axes and the rotation about the x, y and z axes in the body coordinate system of the tracked object.

In an exemplary embodiment of the present disclosure, the processor 42 may be configured to solve, based on the 3-DOF attitude and the acquired pixels, the correspondence between the 2D point set and the 3D point set of the markers to obtain matching pairs between the 2D point set and the 3D point set, where the 2D point set contains the pixel coordinates of each marker and the 3D point set contains the coordinates of each marker in the body coordinate system of the tracked object, and to calculate the 6-DOF pose of the tracked object based on the matching pairs.

In an exemplary embodiment of the present disclosure, the processor 42 may be configured to remove, from the matching pairs, pixels whose reprojection error exceeds a preset threshold, compute a 6-DOF pose from the remaining pixels, and minimize the reprojection error of the computed 6-DOF pose to obtain the 6-DOF pose of the tracked object.

In an exemplary embodiment of the present disclosure, the processor 42 may be configured to remove, from the matching pairs, pixels whose reprojection error exceeds a preset threshold, compute a 6-DOF pose from the remaining pixels, minimize the reprojection error of the computed 6-DOF pose, and optimize the resulting 6-DOF pose according to the 3-DOF attitude to obtain the 6-DOF pose of the tracked object.

In an exemplary embodiment of the present disclosure, the processor 42 may be configured to, after the 6-DOF pose of the tracked object is obtained, re-match the pixels whose reprojection error exceeds the preset threshold according to the 6-DOF pose of the tracked object.

In addition, according to an exemplary embodiment of the present disclosure, a computer-readable storage medium is provided on which a computer program is stored; when the computer program is executed, the pose tracking method according to the exemplary embodiments of the present disclosure is implemented.

As an example, the computer-readable storage medium may carry one or more programs which, when executed, implement the following steps: acquiring an image of the tracked object, where a marker that blinks at a specific frequency is provided on the tracked object; acquiring, from the acquired image, pixels whose brightness has changed; and calculating a 6-DOF pose of the tracked object based on the acquired pixels.

The computer-readable storage medium may be, for example but not limited to, an electrical, magnetic, optical, electromagnetic, infrared or semiconductor system, apparatus or device, or any combination thereof. More specific examples of the computer-readable storage medium include, but are not limited to, an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the embodiments of the present disclosure, the computer-readable storage medium may be any tangible medium that contains or stores a computer program that can be used by or in connection with an instruction execution system, apparatus or device. A computer program embodied on a computer-readable storage medium may be transmitted using any suitable medium, including but not limited to wire, optical cable, RF (radio frequency), or any suitable combination of the foregoing. The computer-readable storage medium may be included in any apparatus, or it may exist separately without being assembled into the apparatus.

The pose tracking apparatus and the electronic device according to the exemplary embodiments of the present disclosure have been described above with reference to FIGS. 3 and 4. A computing device according to an exemplary embodiment of the present disclosure is described next with reference to FIG. 5.

FIG. 5 is a schematic diagram of a computing device according to an exemplary embodiment of the present disclosure.

Referring to FIG. 5, the computing device 5 according to an exemplary embodiment of the present disclosure includes a memory 51 and a processor 52. A computer program is stored in the memory 51 and, when executed by the processor 52, implements the pose tracking method according to the exemplary embodiments of the present disclosure.

As an example, when the computer program is executed by the processor 52, the following steps may be implemented: acquiring an image of the tracked object, where a marker that blinks at a specific frequency is provided on the tracked object; acquiring, from the acquired image, pixels whose brightness has changed; and calculating a 6-DOF pose of the tracked object based on the acquired pixels.

The computing device shown in FIG. 5 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.

The pose tracking method and apparatus according to the exemplary embodiments of the present disclosure have been described above with reference to FIGS. 1 to 5. It should be understood, however, that the pose tracking apparatus and its units shown in FIG. 3 may each be implemented as software, hardware, firmware or any combination thereof that performs a specific function, and that the computing device shown in FIG. 5 is not limited to the components shown above: components may be added or removed as needed, and the above components may also be combined.

According to the pose tracking method and apparatus of the exemplary embodiments of the present disclosure, an image of the tracked object is acquired, pixels whose brightness has changed are extracted from the acquired image, and the 6-DOF pose of the tracked object is calculated based on the acquired pixels, which reduces the dependence of pose tracking on a specific layout of LED markers, reduces the latency of pose tracking, and improves the accuracy and efficiency of pose tracking.

Although the present disclosure has been particularly shown and described with reference to its exemplary embodiments, those skilled in the art will understand that various changes in form and detail may be made therein without departing from the spirit and scope of the present disclosure as defined by the claims.

Claims (10)

1. A pose tracking method, comprising:
acquiring an image of a tracked object, wherein a marker that blinks at a specific frequency is arranged on the tracked object;
acquiring, from the acquired image, pixels whose brightness changes; and
calculating a 6-degree-of-freedom pose of the tracked object based on the acquired pixels.
2. The pose tracking method of claim 1, wherein the step of calculating the 6-degree-of-freedom pose of the tracked object based on the acquired pixels comprises:
acquiring inertial measurement unit data of the tracked object, and estimating a 3-degree-of-freedom attitude of the tracked object based on the acquired inertial measurement unit data, wherein the 3-degree-of-freedom attitude is an orientation rotated about the x, y, and z coordinate axes in a body coordinate system of the tracked object; and
calculating the 6-degree-of-freedom pose of the tracked object based on the 3-degree-of-freedom attitude and the acquired pixels, wherein the 6-degree-of-freedom pose comprises a position along the x, y, and z coordinate axes and an orientation rotated about the x, y, and z coordinate axes in the body coordinate system of the tracked object.
3. The pose tracking method of claim 2, wherein the step of calculating the 6-degree-of-freedom pose of the tracked object based on the 3-degree-of-freedom attitude and the acquired pixels comprises:
solving, based on the 3-degree-of-freedom attitude and the acquired pixels, a correspondence between a 2D point set and a 3D point set of the marker to obtain matching pairs of the 2D point set and the 3D point set, wherein the 2D point set comprises pixel coordinates of the marker and the 3D point set comprises coordinates of the marker in the body coordinate system of the tracked object; and
calculating the 6-degree-of-freedom pose of the tracked object based on the matching pairs.
4. The pose tracking method of claim 3, wherein the step of calculating the 6-degree-of-freedom pose of the tracked object comprises:
removing, from the matching pairs, pixels whose reprojection deviation exceeds a preset deviation threshold, and calculating a 6-degree-of-freedom pose from the remaining pixels; and
performing a reprojection-error minimization on the calculated 6-degree-of-freedom pose to obtain the 6-degree-of-freedom pose of the tracked object.
5. The pose tracking method of claim 3, wherein the step of calculating the 6-degree-of-freedom pose of the tracked object comprises:
removing, from the matching pairs, pixels whose reprojection deviation exceeds a preset deviation threshold, and calculating a 6-degree-of-freedom pose from the remaining pixels;
performing a reprojection-error minimization on the calculated 6-degree-of-freedom pose; and
optimizing, according to the 3-degree-of-freedom attitude, the 6-degree-of-freedom pose obtained after the reprojection-error minimization to obtain the 6-degree-of-freedom pose of the tracked object.
6. The pose tracking method according to claim 4 or 5, further comprising, after obtaining the 6-degree-of-freedom pose of the tracked object:
re-matching, according to the 6-degree-of-freedom pose of the tracked object, the pixels in the matching pairs whose reprojection errors exceed the preset deviation threshold.
7. A pose tracking apparatus, comprising:
an image acquisition unit configured to acquire an image of a tracked object on which a marker that blinks at a specific frequency is provided;
a pixel acquisition unit configured to acquire, from the acquired image, pixels whose brightness changes; and
a pose calculation unit configured to calculate a 6-degree-of-freedom pose of the tracked object based on the acquired pixels.
8. An electronic device, comprising:
a camera configured to acquire an image of a tracked object on which a marker that blinks at a specific frequency is provided, and to acquire, from the acquired image, pixels whose brightness changes; and
a processor configured to calculate a 6-degree-of-freedom pose of the tracked object based on the pixels acquired by the camera.
9. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the pose tracking method of any one of claims 1 to 6.
10. A computing device, comprising:
a processor;
a memory storing a computer program that, when executed by the processor, implements the pose tracking method of any one of claims 1 to 6.
CN201910950626.1A 2019-10-08 2019-10-08 Pose tracking method and device Active CN110782492B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201910950626.1A CN110782492B (en) 2019-10-08 2019-10-08 Pose tracking method and device
KR1020200114552A KR20210042011A (en) 2019-10-08 2020-09-08 Posture tracking method and apparatus performing the same
US17/063,909 US11610330B2 (en) 2019-10-08 2020-10-06 Method and apparatus with pose tracking

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910950626.1A CN110782492B (en) 2019-10-08 2019-10-08 Pose tracking method and device

Publications (2)

Publication Number Publication Date
CN110782492A true CN110782492A (en) 2020-02-11
CN110782492B CN110782492B (en) 2023-03-28

Family

ID=69384884

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910950626.1A Active CN110782492B (en) 2019-10-08 2019-10-08 Pose tracking method and device

Country Status (2)

Country Link
KR (1) KR20210042011A (en)
CN (1) CN110782492B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114549578B (en) * 2021-11-05 2025-01-17 北京小米移动软件有限公司 Target tracking method, device and storage medium

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103930944A (en) * 2011-06-23 2014-07-16 奥布隆工业有限公司 Adaptive tracking system for spatial input devices
CN104769454A (en) * 2012-10-31 2015-07-08 莱卡地球系统公开股份有限公司 Method and device for determining the orientation of an object
CN106068533A (en) * 2013-10-14 2016-11-02 瑞尔D股份有限公司 The control of directional display
US20180308240A1 (en) * 2013-11-18 2018-10-25 Pixmap Method for estimating the speed of movement of a camera
CN103607541A (en) * 2013-12-02 2014-02-26 吴东辉 Method and system for obtaining information by way of camera shooting, camera shooting device and information modulation device
CN105844659A (en) * 2015-01-14 2016-08-10 北京三星通信技术研究有限公司 Moving part tracking method and device
US20160247293A1 (en) * 2015-02-24 2016-08-25 Brain Biosciences, Inc. Medical imaging systems and methods for performing motion-corrected image reconstruction
CN108074262A (en) * 2016-11-15 2018-05-25 卡尔蔡司工业测量技术有限公司 For determining the method and system of the six-degree-of-freedom posture of object in space
CN110036258A (en) * 2016-12-08 2019-07-19 索尼互动娱乐股份有限公司 Information processing unit and information processing method
CN109298629A (en) * 2017-07-24 2019-02-01 来福机器人 For providing the fault-tolerant of robust tracking to realize from non-autonomous position of advocating peace
CN109474817A (en) * 2017-09-06 2019-03-15 原相科技股份有限公司 Optical sensing device, method and optical detection module
WO2019066476A1 (en) * 2017-09-28 2019-04-04 Samsung Electronics Co., Ltd. Camera pose and plane estimation using active markers and a dynamic vision sensor
CN110120099A (en) * 2018-02-06 2019-08-13 广东虚拟现实科技有限公司 Localization method, device, recognition and tracking system and computer-readable medium
CN108596980A (en) * 2018-03-29 2018-09-28 中国人民解放军63920部队 Circular target vision positioning precision assessment method, device, storage medium and processing equipment
CN108648215A (en) * 2018-06-22 2018-10-12 南京邮电大学 SLAM motion blur posture tracking algorithms based on IMU

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PAVEL A. SAVKIN et al.: "Outside-in monocular IR camera based HMD pose estimation via geometric optimization", The 23rd ACM Symposium on Virtual Reality Software and Technology *
PAN JINGSHENG et al.: "Low-light-level CIS for day-and-night vision", Infrared Technology *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111462179A (en) * 2020-03-26 2020-07-28 北京百度网讯科技有限公司 Three-dimensional object tracking method and device and electronic equipment
CN111462179B (en) * 2020-03-26 2023-06-27 北京百度网讯科技有限公司 Three-dimensional object tracking method and device and electronic equipment
CN111949123A (en) * 2020-07-01 2020-11-17 青岛小鸟看看科技有限公司 Hybrid tracking method and device for multi-sensor handle controller
US12008173B2 (en) 2020-07-01 2024-06-11 Qingdao Pico Technology Co., Ltd. Multi-sensor handle controller hybrid tracking method and device
WO2022002132A1 (en) * 2020-07-01 2022-01-06 青岛小鸟看看科技有限公司 Multi-sensor handle controller hybrid tracking method and device
CN111949123B (en) * 2020-07-01 2023-08-08 青岛小鸟看看科技有限公司 Multi-sensor handle controller hybrid tracking method and device
CN112306271B (en) * 2020-10-30 2022-11-25 歌尔光学科技有限公司 Focus calibration method, device and related equipment for handle controller
CN112306271A (en) * 2020-10-30 2021-02-02 歌尔光学科技有限公司 A focus calibration method, device and related equipment of a handle controller
CN112991556A (en) * 2021-05-12 2021-06-18 航天宏图信息技术股份有限公司 AR data display method and device, electronic equipment and storage medium
CN112991556B (en) * 2021-05-12 2022-05-27 航天宏图信息技术股份有限公司 AR data display method and device, electronic equipment and storage medium
CN113370217B (en) * 2021-06-29 2023-06-16 华南理工大学 Object gesture recognition and grabbing intelligent robot method based on deep learning
CN113370217A (en) * 2021-06-29 2021-09-10 华南理工大学 Method for recognizing and grabbing object posture based on deep learning for intelligent robot
TWI812369B (en) * 2021-07-28 2023-08-11 宏達國際電子股份有限公司 Control method, tracking system and non-transitory computer-readable storage medium
CN114170308A (en) * 2021-11-18 2022-03-11 上海鱼微阿科技有限公司 All-in-one machine pose true value calculating method and device, electronic equipment and storage medium
CN114565669A (en) * 2021-12-14 2022-05-31 华人运通(上海)自动驾驶科技有限公司 A field-side multi-camera fusion localization method

Also Published As

Publication number Publication date
CN110782492B (en) 2023-03-28
KR20210042011A (en) 2021-04-16

Similar Documents

Publication Publication Date Title
CN110782492B (en) Pose tracking method and device
JP7057454B2 (en) Improved camera calibration system, target, and process
US10818092B2 (en) Robust optical disambiguation and tracking of two or more hand-held controllers with passive optical and inertial tracking
CN112527102B (en) Head-mounted all-in-one machine system and 6DoF tracking method and device thereof
CN109313497B (en) Modular extension of inertial controller for six-degree-of-freedom mixed reality input
Labbé et al. Single-view robot pose and joint angle estimation via render & compare
US9207773B1 (en) Two-dimensional method and system enabling three-dimensional user interaction with a device
US20140168261A1 (en) Direct interaction system mixed reality environments
JP2018522348A (en) Method and system for estimating the three-dimensional posture of a sensor
CN108022264B (en) Method and equipment for determining camera pose
CN108028871A (en) The more object augmented realities of unmarked multi-user in mobile equipment
JP2018511098A (en) Mixed reality system
CN109961523B (en) Method, device, system, equipment and storage medium for updating virtual target
CN110120099A (en) Localization method, device, recognition and tracking system and computer-readable medium
CN108257177B (en) Positioning system and method based on space identification
CN110119190A (en) Localization method, device, recognition and tracking system and computer-readable medium
JP6129363B2 (en) Interactive system, remote control and operation method thereof
WO2024169384A1 (en) Gaze estimation method and apparatus, and readable storage medium and electronic device
WO2024061238A1 (en) Method for estimating pose of handle, and virtual display device
WO2024144261A1 (en) Method and electronic device for extended reality
CN114004880A (en) A real-time localization method of point cloud and strong reflective target for binocular camera
CN111862170A (en) Optical motion capture system and method
CN111752386B (en) Space positioning method, system and head-mounted equipment
CN108803861B (en) Interaction method, equipment and system
JP7288792B2 (en) Information processing device and device information derivation method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant