
CN114638890A - Method, apparatus and storage medium for predicting rigid body mark point position - Google Patents

Method, apparatus and storage medium for predicting rigid body mark point position

Info

Publication number
CN114638890A
Authority
CN
China
Prior art keywords
frame
position data
mark point
predicted
mark
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111676355.9A
Other languages
Chinese (zh)
Inventor
黄少光
许秋子
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Realis Multimedia Technology Co Ltd
Original Assignee
Shenzhen Realis Multimedia Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Realis Multimedia Technology Co Ltd
Priority to CN202111676355.9A
Publication of CN114638890A
Current legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06T 7/74 - Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30204 - Marker

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for predicting the positions of rigid-body marker points. The method acquires a tracking-mark data set and a position data set of the marker points on the human limb in each frame. When the limb carries more than 3 marker points and at least 2 of them can be tracked correctly, and, for limb P_s, the i-th-frame position data p_{s,0}^i of the 0th marker point is trackable/visible, the i-th-frame position data p_{s,1}^i of the 1st marker point is to be predicted/invisible, and the i-th-frame position data p_{s,2}^i of the 2nd marker point is trackable/visible, the method judges whether the marker points satisfy preset conditions. If the conditions are not satisfied, prediction of the i-th-frame position data of the to-be-predicted/invisible marker point fails; if they are satisfied, the i-th-frame position data of the 1st to-be-predicted/invisible marker point is predicted from the position data of the 0th and 2nd trackable/visible marker points and a scale factor, so that the lost position data is successfully predicted.

Description

Method, apparatus and storage medium for predicting rigid body mark point position
Technical Field
The invention relates to the technical field of motion capture, in particular to a method, equipment and a storage medium for predicting positions of rigid body mark points.
Background
Optical motion capture systems can be divided into marker-based systems and markerless systems; marker-based systems are widely adopted because of their strong real-time performance, high precision, good safety and other characteristics. In current optical motion capture systems, human motion capture is generally performed by tracking and computing body parts through rigid-body marker points worn on the human body, from which information such as the human motion is deduced; the accuracy of marker point tracking and positioning is therefore critical.
However, the raw marker points acquired by the system form a scattered point cloud with no fixed order among the points, and markers are lost or noisy because of occlusion and sliding of the human body during motion. Real-time matching of marker points during capture is a key technology of optical human motion capture and directly determines its accuracy. Owing to various factors in the capture environment, marker points are often occluded or easily lost during violent motion, causing tracking or matching of the marker points to fail. A method is therefore provided to cope with occluded or lost marker points and improve the marker tracking success rate.
Disclosure of Invention
In order to solve this technical problem, the present application provides a method for predicting the position of a rigid body marker point, so as to avoid failures in tracking marker point positions caused by occlusion or loss of rigid body marker points under multi-rigid-body, multi-person motion capture.
According to a first aspect, there is provided in an embodiment a method of predicting rigid body marker point positions, comprising the steps of:
S1, acquiring a tracking-mark data set {t_{s,j}^i} and a position data set {p_{s,j}^i} of the marker points on the human limb in each frame, wherein i is the frame number, s is the limb part, and j indexes the marker points;
S2, when the limb carries more than 3 marker points and at least 2 marker points can be tracked correctly: if, for limb P_s, the i-th-frame position data p_{s,0}^i of the 0th marker point is trackable/visible, the i-th-frame position data p_{s,1}^i of the 1st marker point is to be predicted/invisible, and the i-th-frame position data p_{s,2}^i of the 2nd marker point is trackable/visible, judging whether the marker points satisfy a first condition; if so, proceeding to step S3, and if not, prediction of the i-th-frame position data of the to-be-predicted/invisible marker point fails;
S3, calculating, for the 0th and 2nd trackable/visible marker points, the distance each point moves between frame i-2 and frame i-1, and judging whether these distances satisfy a second condition; if so, proceeding to step S4, and if not, prediction of the i-th-frame position data of the to-be-predicted/invisible marker point fails;
S4, calculating the scale factor of the trackable/visible marker points between frame i and frame i-1, and judging whether the scale factor lies within a threshold range; if so, proceeding to step S5, and if not, prediction of the i-th-frame position data of the to-be-predicted/invisible marker point fails;
and S5, predicting the position data of the 1st to-be-predicted/invisible marker point from the 0th trackable/visible marker point position data and from the 2nd trackable/visible marker point position data, respectively, together with the scale factor.
The first condition includes: for the 0th and 2nd trackable/visible marker points, the tracking-mark data at frame i-2, t_{s,0}^{i-2} and t_{s,2}^{i-2}, and the tracking-mark data at frame i-1, t_{s,0}^{i-1} and t_{s,2}^{i-1}, must all be true; and, for the 1st to-be-predicted/invisible marker point, the tracking-mark data at frame i-1, t_{s,1}^{i-1}, must be true.
The second condition includes: the distance moved by the 0th trackable/visible marker point between frame i-2 and frame i-1 and the distance moved by the 2nd trackable/visible marker point between frame i-2 and frame i-1 must not both be smaller than a set threshold d.
The judging whether the scale factor is within the threshold range includes: calculating the scale factor c of the trackable/visible marker points between frame i and frame i-1, and requiring it to lie within a set threshold range.
The step of predicting the position data of the 1st to-be-predicted/invisible marker point from the 0th and 2nd trackable/visible marker point position data and the scale factor comprises: combining the rotation transformation matrix R between the vectors formed by the 0th and 2nd trackable/visible marker points at frame i-2 and frame i-1, the corresponding marker position data, and the scale factor c, so that the position data of the 1st to-be-predicted/invisible marker point can be predicted.
The rotation transformation matrix R is determined by a rotation angle and a rotation axis computed from the vectors, with the rotation axis normalized so that its unit vector is n = (nx, ny, nz). The vectors may be formed through the 0th trackable/visible marker point or through the 2nd trackable/visible marker point. The length of any vector formed between the marker points cannot be smaller than a preset threshold.
The predicted i-th-frame position data of the 1st to-be-predicted/invisible marker point is obtained by combining the value calculated from the 0th trackable/visible marker point with the value calculated from the 2nd trackable/visible marker point.
according to a second aspect, an embodiment provides an apparatus for predicting rigid body marker point locations, comprising: a memory having instructions stored therein and at least one processor, the memory and the at least one processor interconnected by a line; the at least one processor invokes the instructions in the memory to cause the apparatus to predict rigid body marker point locations to perform the method of the first aspect.
According to a third aspect, an embodiment provides a computer readable storage medium comprising a program executable by a processor to implement the method of the first aspect described above.
The beneficial effects of this application are as follows: according to the method for predicting rigid-body marker point positions of the above embodiments, a tracking-mark data set and a position data set of the marker points on the human limb are acquired for each frame. When the limb carries more than 3 marker points and at least 2 marker points can be tracked correctly, and, for limb P_s, the i-th-frame position data p_{s,0}^i of the 0th marker point and p_{s,2}^i of the 2nd marker point are trackable/visible while the i-th-frame position data p_{s,1}^i of the 1st marker point is to be predicted/invisible, the method judges whether the marker points satisfy preset conditions. If the conditions are not satisfied, prediction of the i-th-frame position data of the to-be-predicted/invisible marker point fails; if they are satisfied, the i-th-frame position data of the 1st to-be-predicted/invisible marker point is predicted from the position data of the 0th and 2nd trackable/visible marker points and the scale factor, so that the lost position data is successfully predicted.
Drawings
FIG. 1 is a schematic representation of a rigid body structure;
FIG. 2 is a flow chart of a method of predicting rigid body marker point locations;
FIG. 3 is a schematic diagram of an apparatus for predicting rigid body marker point positions.
Detailed Description
The present invention will be described in further detail below with reference to the detailed description and the accompanying drawings, wherein like elements in different embodiments are given like reference numerals. In the following description, numerous details are set forth to provide a better understanding of the present application. However, those skilled in the art will readily recognize that some of these features may be omitted, or replaced by other elements, materials or methods, in different instances. In some instances, certain operations related to the present application are not shown or described in the specification, in order to avoid obscuring the core of the present application with excessive description; a detailed description of such operations is unnecessary for those skilled in the art, who can fully understand them from the description in the specification and from general knowledge in the art.
Furthermore, the features, operations, or characteristics described in the specification may be combined in any suitable manner to form various embodiments. Likewise, the steps or actions in the method descriptions may be swapped or reordered in ways apparent to those skilled in the art. Therefore, the various orders given in the specification and drawings serve only to describe particular embodiments and do not imply a required order, unless it is otherwise stated that a certain order must be followed.
Ordinal terms such as "first" and "second" are used herein only to distinguish the objects described and do not carry any sequential or technical meaning. Unless otherwise indicated, the terms "connected" and "coupled" as used in this application include both direct and indirect connections (couplings).
The inventive concept of the present application is as follows: an optical motion capture system tracks positions by means of a plurality of rigid bodies of fixed shape, and problems such as similarity between rigid bodies and occlusion of rigid bodies should be avoided as far as possible. The method further screens the marker points arranged on the rigid bodies so as to select suitable marker points that can improve the precision of the motion capture system; subsequent matching calculations can then be carried out on the screened marker points to identify the corresponding rigid bodies.
Example one:
referring to fig. 1, the present application discloses a structural diagram of a rigid body, which includes: the rigid body base 110, the supporting rods 120, the marking points 130 and the like are provided with at least 3 marking points, so that the accuracy of motion capture calculation can be ensured, the number of the marking points is the same as that of the supporting rods, and in order to facilitate the motion capture system to distinguish different marking points when calculating the coordinates of the marking points, the distances (such as M1, M2 and M3) between any two marking points are different as much as possible, that is, similar triangles are avoided as much as possible, so that the motion capture system can effectively identify the rigid body.
Example two:
referring to fig. 2, the present embodiment discloses a method for predicting rigid body mark point position information based on understanding the rigid body structure shown in fig. 1, and the claimed method includes steps S210-S250, which will be separately described below.
Step S210, acquiring a tracking-mark data set {t_{s,j}^i} and a position data set {p_{s,j}^i} of the marker points on the human limb in each frame, wherein i is the frame number and s is the limb part. Here p_{s,j}^i denotes the i-th-frame position data of the j-th marker point on the human limb, and t_{s,j}^i denotes the i-th-frame tracking-mark data of that marker point. The tracking-mark data t_{s,j}^i takes only two values, true and false: true indicates that the marker point is tracked successfully, and false indicates that tracking of the marker point fails, that is, the marker point is lost.
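As an illustrative sketch only (the patent does not specify any data layout), the per-frame tracking-mark data and position data of step S210 could be held as parallel arrays; the class name LimbFrame and its field names are hypothetical and are reused in the later sketches:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class LimbFrame:
    """Marker data of one limb part s in one frame i (step S210)."""
    positions: np.ndarray  # shape (J, 3): p_{s,j}^i, i-th-frame position of marker j
    tracked: np.ndarray    # shape (J,), bool: t_{s,j}^i, True = tracked, False = lost

# frames[i] holds the LimbFrame of frame i for one limb part (hypothetical container)
frames: list[LimbFrame] = []
```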
Step S220, when the human limb carries more than 3 marker points and at least 2 marker points can be tracked correctly: if, for limb P_s, the i-th-frame position data p_{s,0}^i of the 0th marker point is trackable/visible, the i-th-frame position data p_{s,1}^i of the 1st marker point is to be predicted/invisible, and the i-th-frame position data p_{s,2}^i of the 2nd marker point is trackable/visible, judge whether the marker points satisfy the first condition; if so, proceed to step S230, and if not, prediction of the i-th-frame position data of the to-be-predicted/invisible marker point fails.
Specifically, the first condition includes: for the 0th and 2nd trackable/visible marker points, the tracking-mark data at frame i-2, t_{s,0}^{i-2} and t_{s,2}^{i-2}, and the tracking-mark data at frame i-1, t_{s,0}^{i-1} and t_{s,2}^{i-1}, must all be true, that is, the 0th and 2nd trackable/visible marker points are tracked successfully in frame i-2 and frame i-1. In addition, for the 1st to-be-predicted/invisible marker point, the tracking-mark data at frame i-1, t_{s,1}^{i-1}, must be true, that is, the 1st marker point, whose tracking fails in frame i, was tracked successfully in frame i-1 and has the corresponding position data p_{s,1}^{i-1}. If these conditions are met, the flow proceeds to step S230 to continue the prediction.
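A minimal sketch (illustrative, not the patent's code) of the first-condition check of step S220, using the hypothetical LimbFrame container introduced after step S210; markers 0 and 2 are the trackable/visible ones and marker 1 is the one to be predicted:

```python
def first_condition(frames, i: int) -> bool:
    """First condition of step S220: markers 0 and 2 tracked in frames i-2 and i-1,
    and marker 1 tracked in frame i-1."""
    prev2, prev1 = frames[i - 2], frames[i - 1]
    return (bool(prev2.tracked[0]) and bool(prev2.tracked[2])
            and bool(prev1.tracked[0]) and bool(prev1.tracked[2])
            and bool(prev1.tracked[1]))
```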
Step S230, calculating, for the 0th and 2nd trackable/visible marker points, the distance each point moves between frame i-2 and frame i-1, and judging whether these distances satisfy the second condition; if so, proceed to step S240, and if not, prediction of the i-th-frame position data of the to-be-predicted/invisible marker point fails.
Specifically, the second condition includes: the distance moved by the 2nd trackable/visible marker point between frame i-2 and frame i-1 and the distance moved by the 0th trackable/visible marker point between frame i-2 and frame i-1 must not both be smaller than a set threshold d, which is typically the marker diameter.
In other words, for the 2 trackable/visible marker points, the distances moved between frame i-2 and frame i-1 must not both be too small at the same time. This rules out extreme situations, for example a marker point rotating around two stationary trackable marker points, in which the algorithm has a high probability of failing to predict the marker point position. If the second condition is not met, the prediction calculation fails; otherwise, the flow proceeds to step S240 to continue the prediction.
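A sketch of the second-condition check of step S230 under the same hypothetical data layout; as stated above, the threshold d would typically be the marker diameter:

```python
import numpy as np

def second_condition(frames, i: int, d: float) -> bool:
    """Second condition of step S230: the 0th and 2nd trackable markers must not
    BOTH move less than d between frame i-2 and frame i-1."""
    prev2, prev1 = frames[i - 2], frames[i - 1]
    move0 = np.linalg.norm(prev1.positions[0] - prev2.positions[0])
    move2 = np.linalg.norm(prev1.positions[2] - prev2.positions[2])
    return not (move0 < d and move2 < d)
```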
Step S240, calculating the scale factor of the trackable/visible marker points between frame i and frame i-1, and judging whether the scale factor lies within a threshold range; if so, proceed to step S250, and if not, prediction of the i-th-frame position data of the to-be-predicted/invisible marker point fails.
The scale factor c is computed from the position data of the 0th and 2nd trackable/visible marker points in frame i and frame i-1. The quantity in the denominator of the calculation must not be too small, for example not less than 0.02, and the scale factor c computed from it must likewise lie within a preset threshold range, otherwise the calculation ends. If c lies within the preset threshold range, the flow proceeds to step S250 to continue the prediction.
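The scale-factor formula itself appears only as an image in the original publication. The sketch below therefore assumes one natural reading: c is the ratio of the distance between the 0th and 2nd trackable markers in frame i to that distance in frame i-1, with the denominator required to be at least 0.02, as the passage above suggests; treat the exact definition as an assumption:

```python
import numpy as np

def scale_factor(frames, i: int, min_denom: float = 0.02):
    """Assumed scale factor for step S240: ratio of the 0-2 marker distance in
    frame i to that in frame i-1. Returns None if the denominator is too small."""
    cur, prev1 = frames[i], frames[i - 1]
    num = np.linalg.norm(cur.positions[2] - cur.positions[0])
    den = np.linalg.norm(prev1.positions[2] - prev1.positions[0])
    if den < min_denom:  # denominator must not be too small (e.g. not less than 0.02)
        return None
    return num / den

# The caller would additionally require the returned c to lie within a preset
# threshold range before proceeding to step S250.
```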
Step S250, predicting the position data of the 1st to-be-predicted/invisible marker point from the 0th trackable/visible marker point position data and from the 2nd trackable/visible marker point position data, respectively, together with the scale factor.
Specifically, this includes: combining the rotation transformation matrix R between the vectors formed by the 0th and 2nd trackable/visible marker points at frame i-2 and frame i-1, the corresponding marker position data, and the scale factor c, so that the position data of the 1st to-be-predicted/invisible marker point can be predicted.
The rotation transformation matrix R is determined by a rotation angle and a rotation axis computed from the vectors, with the rotation axis normalized so that its unit vector is n = (nx, ny, nz). Different vectors v1 and v2 can be formed through the 0th trackable/visible marker point and through the 2nd trackable/visible marker point, respectively, so that two rotation transformation matrices R and R' are obtained. It should be noted that the length of any vector used in the calculation must not be too small; if it is smaller than a preset threshold, the prediction calculation is considered to have failed.
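The explicit expression for R is likewise given only as an image in the original publication. The sketch below shows a standard axis-angle (Rodrigues) construction of a rotation matrix that turns one vector onto the direction of another, which matches the quantities named above (rotation angle, rotation axis, unit vector n = (nx, ny, nz)); treat it as an illustrative assumption rather than the patent's exact formula:

```python
import numpy as np

def rotation_between(v1: np.ndarray, v2: np.ndarray, min_len: float = 1e-6) -> np.ndarray:
    """Rotation matrix that rotates v1 onto the direction of v2 (axis-angle form).

    Raises ValueError if either vector is shorter than min_len, mirroring the rule
    that vectors used in the calculation must not be too small.
    """
    n1, n2 = np.linalg.norm(v1), np.linalg.norm(v2)
    if n1 < min_len or n2 < min_len:
        raise ValueError("vector too short for a reliable rotation estimate")
    u1, u2 = v1 / n1, v2 / n2
    axis = np.cross(u1, u2)                      # rotation axis (before normalisation)
    sin_t = np.linalg.norm(axis)                 # sin of the rotation angle
    cos_t = float(np.clip(np.dot(u1, u2), -1.0, 1.0))
    if sin_t < min_len:                          # vectors (anti)parallel: axis undefined
        if cos_t > 0:
            return np.eye(3)                     # already aligned
        # 180-degree rotation about any axis perpendicular to u1
        perp = np.cross(u1, np.array([1.0, 0.0, 0.0]))
        if np.linalg.norm(perp) < min_len:
            perp = np.cross(u1, np.array([0.0, 1.0, 0.0]))
        n = perp / np.linalg.norm(perp)
        return 2.0 * np.outer(n, n) - np.eye(3)
    n = axis / sin_t                             # unit vector n = (nx, ny, nz)
    K = np.array([[0.0, -n[2], n[1]],
                  [n[2], 0.0, -n[0]],
                  [-n[1], n[0], 0.0]])           # skew-symmetric cross-product matrix
    return np.eye(3) + sin_t * K + (1.0 - cos_t) * (K @ K)   # Rodrigues' formula
```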
Using the 0th trackable/visible marker point and the rotation transformation matrix R, one value of the missing position data of the 1st marker point in frame i is calculated. Likewise, using the 2nd trackable/visible marker point and the rotation transformation matrix R', another value of the missing position data of the 1st marker point in frame i is calculated. The i-th-frame position data of the 1st to-be-predicted/invisible marker point obtained by the final prediction combines these two values.
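Putting the pieces together, a sketch of how the frame-i position of the missing 1st marker might be reconstructed from each trackable marker and then combined. The specific choices here (rotating and scaling the frame-(i-1) offset of marker 1 relative to each trackable marker, estimating the rotation from the 0-to-2 vector between frames i-1 and i, and averaging the two candidates) are illustrative assumptions, since the patent's exact formulas are reproduced only as images:

```python
import numpy as np

def predict_missing_marker(frames, i: int, c: float) -> np.ndarray:
    """Assumed realisation of step S250: reconstruct marker 1 in frame i from
    markers 0 and 2, reusing rotation_between() from the earlier sketch."""
    prev1, cur = frames[i - 1], frames[i]
    # rotation estimated from the vector between the two trackable markers
    # (here between frame i-1 and frame i; the text also mentions frames i-2 and i-1)
    R = rotation_between(prev1.positions[2] - prev1.positions[0],
                         cur.positions[2] - cur.positions[0])
    # candidate from the 0th trackable marker: rotated, scaled frame-(i-1) offset
    from_m0 = cur.positions[0] + c * (R @ (prev1.positions[1] - prev1.positions[0]))
    # candidate from the 2nd trackable marker (a separate R' could be used here)
    from_m2 = cur.positions[2] + c * (R @ (prev1.positions[1] - prev1.positions[2]))
    return 0.5 * (from_m0 + from_m2)             # combine the two candidate values
```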
in one embodiment, to further determine
Figure RE-GDA0003647298140000075
If the predicted position data satisfies a third condition, the success of the prediction calculation can be further determined, wherein the third condition includes: can be provided with
Figure RE-GDA0003647298140000076
Corresponding target value is
Figure RE-GDA0003647298140000077
Namely, the target value of the position data of the mark point to be predicted/invisible in the 1 st ith frame can be calculated by combining the motion speed of the mark point in the (i-1) th frame and the (i-2) th frame:
Figure RE-GDA0003647298140000078
then
Figure RE-GDA0003647298140000079
It needs to be less than a set threshold dt, otherwise the predictive computation fails.
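A sketch of this optional third-condition check: the target value is taken here as a constant-velocity extrapolation of marker 1 from frames i-2 and i-1, which is one reading of "combining the motion speed of the marker point in frame i-1 and frame i-2"; the exact target formula is not reproduced in the text, so treat this as an assumption:

```python
import numpy as np

def third_condition(frames, i: int, predicted: np.ndarray, dt: float) -> bool:
    """Assumed third condition: the predicted frame-i position of marker 1 must lie
    within dt of a constant-velocity extrapolation from frames i-2 and i-1."""
    prev2, prev1 = frames[i - 2], frames[i - 1]
    velocity = prev1.positions[1] - prev2.positions[1]   # per-frame displacement
    target = prev1.positions[1] + velocity               # extrapolated target value
    return bool(np.linalg.norm(predicted - target) < dt)
```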
If the prediction calculation does not fail in any of steps S220 to S250, the prediction calculation for the to-be-predicted marker point is successful. Besides the i-th-frame position data of the 1st to-be-predicted/invisible marker point discussed above, the position data of other marker points on the limb that cannot be tracked can be predicted with reference to the same method.
Example three:
referring to fig. 3 of the drawings, a drawing,
further, the present application also describes in detail the apparatus for predicting rigid body mark point positions in this embodiment from the perspective of hardware processing.
Fig. 3 is a schematic structural diagram of an apparatus for predicting rigid body mark point positions according to this embodiment, where the apparatus 500 may have relatively large differences due to different configurations or performances, and may include one or more processors (CPUs) 510 (e.g., one or more processors) and a memory 520, and one or more storage media 530 (e.g., one or more mass storage devices) for storing applications 533 or data 532. Memory 520 and storage media 530 may be, among other things, transient or persistent storage. The program stored on the storage medium 530 may include one or more modules (not shown), each of which may include a sequence of instructions for operating on the device 500. Further, the processor 510 may be configured to communicate with the storage medium 530 to execute a series of instruction operations in the storage medium 530 on the device 500.
The device 500 may also include one or more power supplies 540, one or more wired or wireless network interfaces 550, one or more input-output interfaces 560, and/or one or more operating systems 531, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like. Those skilled in the art will appreciate that the configuration of the apparatus 500 shown in fig. 3 does not constitute a limitation of the apparatus for predicting rigid body marker point locations provided herein, and may include more or fewer components than shown, or some components in combination, or a different arrangement of components.
The present application also provides a computer-readable storage medium, which may be a non-volatile computer-readable storage medium, and which may also be a volatile computer-readable storage medium, having stored therein instructions, which, when executed on a computer, cause the computer to perform the steps of the above-mentioned method for predicting rigid body marker point positions.
Those skilled in the art will appreciate that all or part of the functions of the methods in the above embodiments may be implemented by hardware, or may be implemented by a computer program. When all or part of the functions of the above embodiments are implemented by a computer program, the program may be stored in a computer-readable storage medium, and the storage medium may include: a read only memory, a random access memory, a magnetic disk, an optical disk, a hard disk, etc., and the program is executed by a computer to realize the above functions. For example, the program may be stored in a memory of the device, and when the program in the memory is executed by the processor, all or part of the functions described above may be implemented. In addition, when all or part of the functions in the above embodiments are implemented by a computer program, the program may be stored in a storage medium such as a server, another computer, a magnetic disk, an optical disk, a flash disk, or a portable hard disk, and may be downloaded or copied to a memory of a local device, or may be version-updated in a system of the local device, and when the program in the memory is executed by a processor, all or part of the functions in the above embodiments may be implemented.
The present invention has been described in terms of specific examples, which are provided to aid understanding of the invention and are not intended to be limiting. Numerous simple deductions, modifications or substitutions may also be made by those skilled in the art in light of the present teachings.

Claims (10)

1. A method for predicting the position of a rigid body marker point, characterized by comprising the following steps:
S1, acquiring a tracking-mark data set {t_{s,j}^i} and a position data set {p_{s,j}^i} of the marker points on the human limb in each frame, wherein i is the frame number, s is the limb part, and j indexes the marker points;
S2, when the limb carries more than 3 marker points and at least 2 marker points can be tracked correctly: if, for limb P_s, the i-th-frame position data p_{s,0}^i of the 0th marker point is trackable/visible, the i-th-frame position data p_{s,1}^i of the 1st marker point is to be predicted/invisible, and the i-th-frame position data p_{s,2}^i of the 2nd marker point is trackable/visible, judging whether the marker points satisfy a first condition; if so, proceeding to step S3, and if not, prediction of the i-th-frame position data of the to-be-predicted/invisible marker point fails;
S3, calculating, for the 0th and 2nd trackable/visible marker points, the distance each point moves between frame i-2 and frame i-1, and judging whether these distances satisfy a second condition; if so, proceeding to step S4, and if not, prediction of the i-th-frame position data of the to-be-predicted/invisible marker point fails;
S4, calculating the scale factor of the trackable/visible marker points between frame i and frame i-1, and judging whether the scale factor lies within a threshold range; if so, proceeding to step S5, and if not, prediction of the i-th-frame position data of the to-be-predicted/invisible marker point fails;
and S5, obtaining the position data of the 1st to-be-predicted/invisible marker point by prediction from the 0th trackable/visible marker point position data and from the 2nd trackable/visible marker point position data, respectively, together with the scale factor.
2. The method of predicting rigid body marker point positions of claim 1, wherein the first condition comprises: for the 0th and 2nd trackable/visible marker points, the tracking-mark data at frame i-2, t_{s,0}^{i-2} and t_{s,2}^{i-2}, and the tracking-mark data at frame i-1, t_{s,0}^{i-1} and t_{s,2}^{i-1}, must all be true; and, for the 1st to-be-predicted/invisible marker point, the tracking-mark data at frame i-1, t_{s,1}^{i-1}, must be true.
3. The method of predicting rigid body marker point positions of claim 1, wherein the second condition comprises: the distance moved by the 0th trackable/visible marker point between frame i-2 and frame i-1 and the distance moved by the 2nd trackable/visible marker point between frame i-2 and frame i-1 must not both be smaller than a set threshold d.
4. The method of predicting rigid body marker point positions as recited in claim 1, wherein said determining whether the scale factor is within a threshold comprises: calculating the scale factor c of the trackable/visible marker points between frame i and frame i-1, and requiring it to lie within a set threshold range.
5. The method of predicting rigid body marker point positions as described in claim 4, wherein said predicting the position data of the 1st to-be-predicted/invisible marker point based on the 0th and 2nd trackable/visible marker point position data and the scale factor comprises: combining the rotation transformation matrix R between the vectors formed by the 0th and 2nd trackable/visible marker points at frame i-2 and frame i-1, the corresponding marker position data, and the scale factor c, so that the position data of the 1st to-be-predicted/invisible marker point can be predicted.
6. The method of predicting rigid body marker point positions as defined in claim 5, wherein the rotation transformation matrix R is determined by a rotation angle and a rotation axis computed from the vectors, with the rotation axis normalized so that its unit vector is n = (nx, ny, nz), and wherein the vectors may be formed through the 0th trackable/visible marker point or through the 2nd trackable/visible marker point.
7. The method of predicting rigid body marker point positions according to claim 6, wherein the length of any vector formed between said marker points cannot be less than a preset threshold.
8. The method for predicting positions of rigid body marker points according to claim 7, wherein the predicted i-th-frame position data of the 1st to-be-predicted/invisible marker point is obtained by combining the value calculated from the 0th trackable/visible marker point with the value calculated from the 2nd trackable/visible marker point.
9. An apparatus for predicting rigid body marker point positions, comprising: a memory having instructions stored therein and at least one processor, the memory and the at least one processor being interconnected by a line; the at least one processor invokes the instructions in the memory to cause the apparatus for predicting rigid body marker point positions to perform the method of any of claims 1-8.
10. A computer-readable storage medium, characterized by comprising a program executable by a processor to implement the method of any one of claims 1-8.
CN202111676355.9A 2021-12-31 2021-12-31 Method, apparatus and storage medium for predicting rigid body mark point position Pending CN114638890A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111676355.9A 2021-12-31 2021-12-31 Method, apparatus and storage medium for predicting rigid body mark point position

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111676355.9A 2021-12-31 2021-12-31 Method, apparatus and storage medium for predicting rigid body mark point position

Publications (1)

Publication Number Publication Date
CN114638890A true CN114638890A (en) 2022-06-17

Family

ID=81945720

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111676355.9A Method, apparatus and storage medium for predicting rigid body mark point position 2021-12-31 2021-12-31

Country Status (1)

Country Link
CN (1) CN114638890A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101803340B1 (en) * 2016-08-25 2017-12-29 서울대학교산학협력단 Visual odometry system and method
CN110689584A (en) * 2019-09-30 2020-01-14 深圳市瑞立视多媒体科技有限公司 Active rigid body pose positioning method in multi-camera environment and related equipment
CN111462089A (en) * 2020-04-01 2020-07-28 深圳市瑞立视多媒体科技有限公司 Virtual scene precision testing method based on optical dynamic capture system and related equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
吕青 et al., "Human limb motion tracking based on block-wise change detection" (基于分块变化检测的人体肢体运动跟踪), Journal of Dalian Nationalities University, no. 05, 15 September 2007, pages 45-48 *
徐兴贵, "Research on key technologies for long-range imaging and recognition of near-ground extended targets" (近地面扩展目标远距成像识别关键技术研究), China Doctoral Dissertations Full-text Database, no. 07, 15 July 2020, pages 34-122 *

Similar Documents

Publication Publication Date Title
US11402509B2 (en) Method and system for use in performing localisation
CN112215955A (en) Rigid body mark point screening method, device, system, equipment and storage medium
CN115965657B (en) Target tracking method, electronic device, storage medium and vehicle
US20180173631A1 (en) Prefetch mechanisms with non-equal magnitude stride
US11403779B2 (en) Methods, apparatuses, systems, and storage media for loading visual localization maps
CN110969145A (en) Remote sensing image matching optimization method and device, electronic equipment and storage medium
CN110688873A (en) Multi-target tracking method and face recognition method
CN110555352B (en) Interest point identification method, device, server and storage medium
CN111832634A (en) Foreign matter detection method, system, terminal device and storage medium
CN114638890A (en) Method, apparatus and storage medium for predicting rigid body mark point position
CN114463383A (en) Method, equipment and storage medium for predicting rigid body mark point position
US10346716B2 (en) Fast joint template machining
CN117333686A (en) Target positioning method, device, equipment and medium
CN113112412A (en) Generation method and device of vertical correction matrix and computer readable storage medium
WO2008024353A2 (en) Finding blob-like structures using diverging gradient field response
CN114998755A (en) Method and device for matching landmarks in remote sensing image
KR102680209B1 (en) Apparatus for controlling posture of satellite antenna and method of operation thereof
CN118247591B (en) Dynamic mask eliminating method based on multiple geometric constraints
CN115587504B (en) Space target collision early warning method and device, electronic equipment and medium
CN111650621B (en) Method and device for calculating and detecting static drift precision, equipment and storage medium
KR102718674B1 (en) Apparatus for coordinate transformation for satellite antenna and method of operation thereof
CN114593751B (en) External parameter calibration method, device, medium and electronic equipment
CN118521807B (en) Mark point pairing method, medium and equipment
US20240087146A1 (en) Training apparatus, control method, and non-transitory computer-readable storagemedium
CN118982567A (en) Method for registering coal mine three-dimensional point cloud and removing fixed facilities in warehouse environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination