CN114463383A - Method, equipment and storage medium for predicting rigid body mark point position (Google Patents)
- Publication number: CN114463383A (application CN202111679549.4A)
- Authority: CN (China)
- Prior art keywords: frame, position data, predicted, mark point
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/248 - Analysis of motion using feature-based methods involving reference images or patches
- G06T7/74 - Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
- G06T7/77 - Determining position or orientation of objects or cameras using statistical methods
- G06T2207/30204 - Indexing scheme for image analysis or image enhancement; subject of image: marker
Abstract
The invention discloses a method for predicting rigid body marker point positions. The method acquires a tracking flag data set and a position data set of the marker points on each frame of a human body limb. When the limb carries more than 3 marker points and at least 2 marker points can be correctly tracked, if the i-th frame position data of the 0th marker point of limb Ps is trackable/visible, the i-th frame position data of the 1st marker point is to be predicted/invisible, and the i-th frame position data of the 2nd marker point is trackable/visible, the method judges whether the marker points meet preset conditions; if not, prediction of the i-th frame position data of the to-be-predicted/invisible marker point fails; if so, the i-th frame position data of the 1st to-be-predicted/invisible marker point is predicted from the 0th and 2nd trackable/visible marker point position data and a scale factor, so that the lost position data is successfully predicted.
Description
Technical Field
The invention relates to the technical field of motion capture, in particular to a method, equipment and a storage medium for predicting positions of rigid body mark points.
Background
Optical motion capture systems can be divided into marker-based and markerless systems, and marker-based systems are widely adopted owing to their strong real-time performance, high precision and high safety. In current optical motion capture systems, human motion capture is generally performed by tracking and computing body parts through rigid body marker points worn on the human body, from which information such as human motion is derived; the accuracy of marker point tracking and positioning is therefore critical.
However, the raw marker points acquired by the system form a scattered point cloud with no fixed order among them, and markers are lost or noisy because the human body occludes them or they slide during motion. Real-time matching of marker points during capture is a key technology of optical human motion capture and directly determines capture accuracy. Owing to various factors in the capture environment, marker points are often occluded or easily lost during violent motion, causing marker tracking or matching to fail; a method is therefore needed to cope with occluded or lost marker points and improve the marker tracking success rate.
Disclosure of Invention
In order to solve the above technical problems, the present application provides a method for predicting rigid body marker point positions, so as to avoid failures in tracking marker point positions caused by rigid body marker points being occluded or lost in multi-rigid-body, multi-person motion capture.
According to a first aspect, an embodiment provides a method of predicting rigid body marker point positions, comprising the steps of:
S1, acquiring a tracking flag data set and a position data set of the marker points on each frame of the human body limb, where i is the frame index and s is the limb part;
S2, when the limb carries more than 3 marker points and at least 2 marker points can be correctly tracked, if the i-th frame position data of the 0th marker point of limb Ps is trackable/visible, the i-th frame position data of the 1st marker point is to be predicted/invisible, and the i-th frame position data of the 2nd marker point is trackable/visible, judging whether the marker points meet a first condition; if so, entering step S3, and if not, prediction of the i-th frame position data of the to-be-predicted/invisible marker point fails;
S3, calculating, for the 0th and 2nd trackable/visible marker points, the distance each moves between frame i-2 and frame i-1, and judging whether the distances meet a second condition; if so, entering step S4, and if not, prediction of the i-th frame position data of the to-be-predicted/invisible marker point fails;
S4, calculating the scale factor between frame i and frame i-1 for the trackable/visible marker points, and judging whether the scale factor is within a threshold range; if so, entering step S5, and if not, prediction of the i-th frame position data of the to-be-predicted/invisible marker point fails;
S5, predicting the position data of the 1st to-be-predicted/invisible marker point from the 0th and 2nd trackable/visible marker point position data and the scale factor; if the predicted position data meets a third condition, the prediction succeeds.
The first condition includes:
the tracking flags of the 0th and 2nd trackable/visible marker points at frame i-2 and at frame i-1 must all be true, and the tracking flag of the 1st to-be-predicted/invisible marker point at frame i-1 must be true.
The second condition includes:
the distance moved by the 2nd trackable/visible marker point between frame i-2 and frame i-1 and the distance moved by the 0th trackable/visible marker point between frame i-2 and frame i-1 must not both be smaller than a set threshold d.
The judging whether the scale factor is within the threshold range includes: calculating the scale factor c from the trackable/visible marker point positions at frame i and frame i-1 and checking that c lies within a preset threshold range.
The predicting the position data of the 1st to-be-predicted/invisible marker point from the 0th and 2nd trackable/visible marker point position data, respectively, and the scale factor comprises the following steps:
combining the rotation transformation matrix R between the vectors formed by the 0th and 2nd trackable/visible marker points at frame i-2 and frame i-1, their position data, and the scale factor c, the position data of the 1st to-be-predicted/invisible marker point can be predicted,
where the rotation angle and the rotation axis are determined from the two vectors, the rotation axis being normalized to the unit rotation-axis vector n = (nx, ny, nz), and the vectors may be formed through the 0th trackable/visible marker point or through the 2nd trackable/visible marker point.
The third condition includes: a target value of the i-th frame position data of the 1st to-be-predicted/invisible marker point can be calculated by combining the motion velocity of the marker point over frames i-1 and i-2, and the distance between the predicted position data and this target value must be less than a set threshold dt.
According to a second aspect, an embodiment provides an apparatus for predicting rigid body marker point positions, comprising: a memory having instructions stored therein and at least one processor, the memory and the at least one processor being interconnected by a line; the at least one processor invokes the instructions in the memory to cause the apparatus for predicting rigid body marker point positions to perform the method of the first aspect.
According to a third aspect, an embodiment provides a computer readable storage medium comprising a program executable by a processor to implement the method of the first aspect described above.
The beneficial effect of this application is:
According to the method for predicting rigid body marker point positions of the above embodiments, a tracking flag data set and a position data set of the marker points on each frame of the human body limb are acquired. If the limb carries more than 3 marker points and at least 2 marker points can be correctly tracked, and the i-th frame position data of the 0th and 2nd marker points of limb Ps is trackable/visible while the i-th frame position data of the 1st marker point is to be predicted/invisible, the method judges whether the marker points meet preset conditions; if not, prediction of the i-th frame position data of the to-be-predicted/invisible marker point fails; if so, the i-th frame position data of the 1st to-be-predicted/invisible marker point is predicted from the 0th and 2nd trackable/visible marker point position data and the scale factor, so that the lost position data is successfully predicted.
Drawings
FIG. 1 is a schematic representation of a rigid body structure;
FIG. 2 is a flow chart of a method of predicting rigid body marker point locations;
FIG. 3 is a schematic diagram of an apparatus for predicting rigid body marker point positions.
Detailed Description
The present invention will be described in further detail with reference to the following detailed description and accompanying drawings. Wherein like elements in different embodiments are numbered with like associated elements. In the following description, numerous details are set forth in order to provide a better understanding of the present application. However, those skilled in the art will readily recognize that some of the features may be omitted or replaced with other elements, materials, methods in different instances. In some instances, certain operations related to the present application have not been shown or described in detail in order to avoid obscuring the core of the present application from excessive description, and it is not necessary for those skilled in the art to describe these operations in detail, so that they may be fully understood from the description in the specification and the general knowledge in the art.
Furthermore, the features, operations, or characteristics described in the specification may be combined in any suitable manner to form various embodiments. Also, the various steps or actions in the method descriptions may be transposed or transposed in order, as will be apparent to one of ordinary skill in the art. Thus, the various sequences in the specification and drawings are for the purpose of describing certain embodiments only and are not intended to imply a required sequence unless otherwise indicated where such sequence must be followed.
The numbering of components herein, such as "first" and "second", is used only to distinguish the described objects and does not carry any sequential or technical meaning. As used in this application, "connected" and "coupled", unless otherwise specified, include both direct and indirect connections (couplings).
The inventive concept of the present application is as follows: an optical motion capture system tracks the positions of multiple rigid bodies of fixed shape, and problems such as similarity between rigid bodies and occlusion of rigid bodies should be avoided as far as possible. The method further screens the marker points arranged on the rigid bodies to select suitable marker points that improve the accuracy of the motion capture system, and subsequent matching calculations can be performed on the screened marker points to identify the corresponding rigid bodies.
Example one:
Referring to FIG. 1, the present application discloses a rigid body structure comprising a rigid body base 110, support rods 120 and marker points 130. At least 3 marker points are provided to ensure the accuracy of motion capture calculations, and the number of marker points equals the number of support rods. So that the motion capture system can distinguish different marker points when computing their coordinates, the distances between any two marker points (such as M1, M2 and M3) should differ from each other as much as possible, that is, similar triangles should be avoided, ensuring that the system can effectively identify the rigid body.
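For illustration, a minimal Python sketch of the distinct-pairwise-distance check described above; the function name and tolerance value are assumptions, not taken from the patent:

```python
import itertools
import math

def distances_are_distinct(markers, tol=0.005):
    """Check that all pairwise distances between marker positions differ by more than tol,
    so the motion capture system can tell the markers apart (no near-similar triangles)."""
    dists = [math.dist(a, b) for a, b in itertools.combinations(markers, 2)]
    return all(abs(x - y) > tol for x, y in itertools.combinations(dists, 2))

# usage: distances_are_distinct([(0.0, 0.0, 0.0), (0.10, 0.0, 0.0), (0.10, 0.17, 0.0)])
```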
Example two:
Referring to FIG. 2, based on the rigid body structure shown in FIG. 1, this embodiment discloses a method for predicting rigid body marker point position information. The claimed method includes steps S210-S250, which are described below.
Step S210, acquiring a tracking flag data set and a position data set of the marker points on each frame of the human body limb, where i is the frame index and s is the limb part. The position data set records the i-th frame position data of each marker point on the limb, and the tracking flag data set records the i-th frame tracking flag of each marker point; a tracking flag has only two values, true and false, where true means the marker point is successfully tracked and false means tracking of the marker point failed, that is, the marker point is lost.
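As a concrete illustration of these data sets, a minimal Python sketch of per-frame marker records; the class name, field names and container layout are assumptions introduced here, not notation from the patent:

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class MarkerFrame:
    """Per-frame record of one marker point on a limb."""
    position: Optional[Vec3]  # i-th frame position data; None when the marker is lost
    tracked: bool             # tracking flag: True = successfully tracked, False = tracking failed

# frames[i][k] -> MarkerFrame of the k-th marker point of the limb at frame i
frames: Dict[int, Dict[int, MarkerFrame]] = {}
```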
Step S220, when the human body limb carries more than 3 marker points and at least 2 marker points can be correctly tracked, if the i-th frame position data of the 0th marker point of limb Ps is trackable/visible, the i-th frame position data of the 1st marker point is to be predicted/invisible, and the i-th frame position data of the 2nd marker point is trackable/visible, judging whether the marker points meet a first condition; if so, entering step S230, and if not, prediction of the i-th frame position data of the to-be-predicted/invisible marker point fails.
Specifically, the first condition includes:
the tracking flags of the 0th and 2nd trackable/visible marker points at frame i-2 and at frame i-1 must all be true, that is, the 0th and 2nd trackable/visible marker points were successfully tracked at frames i-2 and i-1; and for the 1st to-be-predicted/invisible marker point, the tracking flag at frame i-1 must be true, that is, the 1st marker point, whose tracking failed at frame i, was successfully tracked at frame i-1 so that its position data at frame i-1 is available. If these conditions are met, the flow proceeds to step S230 to continue the prediction.
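Continuing the data layout assumed above, a sketch of the first-condition check (marker indices 0 and 2 are the trackable/visible points, index 1 is the to-be-predicted point):

```python
def first_condition(frames, i):
    """First condition: markers 0 and 2 are tracked at frames i-2 and i-1,
    and the to-be-predicted marker 1 is tracked at frame i-1."""
    prev2, prev1 = frames[i - 2], frames[i - 1]
    anchors_ok = all(prev2[k].tracked and prev1[k].tracked for k in (0, 2))
    return anchors_ok and prev1[1].tracked
```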
Step S230, calculating, for the 0th and 2nd trackable/visible marker points, the distance each moves between frame i-2 and frame i-1, and judging whether the distances meet a second condition; if so, entering step S240, and if not, prediction of the i-th frame position data of the to-be-predicted/invisible marker point fails.
Specifically, the second condition includes:
the distance moved by the 2nd trackable/visible marker point between frame i-2 and frame i-1 and the distance moved by the 0th trackable/visible marker point between frame i-2 and frame i-1 must not both be smaller than a set threshold d, which is typically the marker diameter.
For the 2 trackable/visible marker points, these frame-to-frame distances must not both be too small, to guard against extreme situations: for example, when the to-be-predicted marker point rotates around two stationary trackable marker points, the probability that the position-prediction algorithm fails is high. If the second condition is not met, the prediction calculation fails; otherwise, the flow proceeds to step S240 to continue the prediction.
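A sketch of the second-condition check under the same assumed data layout; the reading "the two distances must not both fall below d" follows the explanation in the preceding paragraph:

```python
import math

def second_condition(frames, i, d):
    """Second condition: the displacements of markers 0 and 2 between frames i-2 and i-1
    must not both be smaller than the threshold d (typically the marker diameter)."""
    d0 = math.dist(frames[i - 2][0].position, frames[i - 1][0].position)
    d2 = math.dist(frames[i - 2][2].position, frames[i - 1][2].position)
    return not (d0 < d and d2 < d)
```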
Step S240, calculating the scale factor between frame i and frame i-1 for the trackable/visible marker points, and judging whether the scale factor is within a threshold range; if so, entering step S250, and if not, prediction of the i-th frame position data of the to-be-predicted/invisible marker point fails.
The scale factor c is calculated from the trackable/visible marker point positions at frame i and frame i-1, where the length used in the calculation must not be too small, for example not less than 0.02, and the scale factor c calculated from it must lie within a preset threshold range; otherwise the calculation ends. If c is within the preset threshold range, the flow proceeds to step S250 to continue the prediction.
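A hedged sketch of the scale-factor calculation: the formula shown (ratio of the 0th-to-2nd marker distance at frame i to that at frame i-1, with the 0.02 lower bound applied to the denominator) is an assumption consistent with the surrounding description, not a formula quoted from the patent:

```python
import math

def scale_factor(frames, i, min_len=0.02):
    """Scale factor c between frame i and frame i-1 of the trackable markers.
    Assumed form: ratio of the 0th-to-2nd marker distance at frame i to that at frame i-1;
    returns None (prediction fails) when the frame i-1 distance is too small."""
    len_i = math.dist(frames[i][0].position, frames[i][2].position)
    len_prev = math.dist(frames[i - 1][0].position, frames[i - 1][2].position)
    if len_prev < min_len:
        return None
    return len_i / len_prev

# usage: c = scale_factor(frames, i); continue only if c is not None and c_min <= c <= c_max
```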
Step S250, predicting the position data of the 1st to-be-predicted/invisible marker point from the position data of the 0th and 2nd trackable/visible marker points, respectively, and the scale factor; if the predicted position data meets a third condition, the prediction succeeds.
Predicting the position data of the 1st to-be-predicted/invisible marker point from the 0th and 2nd trackable/visible marker point position data, respectively, and the scale factor specifically comprises the following steps:
combining the rotation transformation matrix R between the vectors formed by the 0th and 2nd trackable/visible marker points at frame i-2 and frame i-1, their position data, and the scale factor c, the position data of the 1st to-be-predicted/invisible marker point can be predicted,
where the rotation angle and the rotation axis are determined from the two vectors, the rotation axis being normalized to the unit rotation-axis vector n = (nx, ny, nz). Different vectors v1 and v2 can be obtained through the 0th and the 2nd trackable/visible marker points, respectively, so two rotation transformation matrices R and R' are obtained. It should be noted that no vector used in the calculation may be too short; otherwise the prediction calculation is considered to have failed.
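A generic sketch of building the rotation transformation matrix between two vectors from the rotation angle and the unit rotation axis n = (nx, ny, nz) via the Rodrigues formula; which pair of frames supplies the vectors is left to the caller:

```python
import math

def rotation_between(u, v, eps=1e-8):
    """3x3 rotation matrix (nested lists) that rotates vector u onto vector v,
    built from the rotation angle and unit rotation axis via the Rodrigues formula.
    Returns None when either vector is too short or the vectors are antiparallel."""
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    if nu < eps or nv < eps:
        return None                      # vector too short: prediction is considered failed
    u = [x / nu for x in u]
    v = [x / nv for x in v]
    axis = [u[1] * v[2] - u[2] * v[1],   # cross product u x v
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]
    s = math.sqrt(sum(a * a for a in axis))   # sin(theta)
    c = sum(a * b for a, b in zip(u, v))      # cos(theta)
    if s < eps:
        return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]] if c > 0 else None
    n = [a / s for a in axis]                 # unit rotation axis n = (nx, ny, nz)
    K = [[0.0, -n[2], n[1]], [n[2], 0.0, -n[0]], [-n[1], n[0], 0.0]]
    K2 = [[sum(K[r][m] * K[m][col] for m in range(3)) for col in range(3)] for r in range(3)]
    I = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    # Rodrigues: R = I + sin(theta) * K + (1 - cos(theta)) * K^2
    return [[I[r][col] + s * K[r][col] + (1.0 - c) * K2[r][col] for col in range(3)] for r in range(3)]
```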
Through the 0th marker point, one value of the missing i-th frame position data of the 1st marker point is obtained by calculation; likewise, through the 2nd marker point, the other value of the missing i-th frame position data of the 1st marker point is calculated; the finally predicted i-th frame position data of the 1st to-be-predicted/invisible marker point is then obtained from these two values.
In one embodiment, to further verify the predicted position data, a third condition is also set, which comprises: a target value corresponding to the predicted position data can be calculated by combining the motion velocity of the marker point over frames i-1 and i-2; the distance between the predicted position data and this target value must then be less than a set threshold dt, otherwise the prediction calculation fails.
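Putting the steps together, a hedged end-to-end sketch of the prediction and the third-condition check; averaging the two candidate values and using a constant-velocity target are assumptions where the description leaves the exact formulas unspecified:

```python
import math

def vadd(a, b): return [x + y for x, y in zip(a, b)]
def vsub(a, b): return [x - y for x, y in zip(a, b)]
def vscale(a, k): return [x * k for x in a]
def matvec(M, v): return [sum(M[r][j] * v[j] for j in range(3)) for r in range(3)]

def predict_marker1(frames, i, c, R, R2, dt):
    """Predict the i-th frame position of marker 1 from markers 0 and 2.
    R and R2 are the rotation matrices obtained through marker 0 and marker 2 respectively;
    averaging the two candidates and the constant-velocity target are assumptions."""
    m0_i, m2_i = frames[i][0].position, frames[i][2].position
    m0_p, m1_p, m2_p = (frames[i - 1][k].position for k in (0, 1, 2))
    # candidate via marker 0: rotate and scale the frame i-1 offset (marker 0 -> marker 1)
    cand0 = vadd(m0_i, vscale(matvec(R, vsub(m1_p, m0_p)), c))
    # candidate via marker 2: rotate and scale the frame i-1 offset (marker 2 -> marker 1)
    cand2 = vadd(m2_i, vscale(matvec(R2, vsub(m1_p, m2_p)), c))
    pred = vscale(vadd(cand0, cand2), 0.5)
    # third condition: stay close to a velocity-extrapolated target (assumed form)
    if frames[i - 2][1].tracked:
        target = vadd(m1_p, vsub(m1_p, frames[i - 2][1].position))
        if math.dist(pred, target) >= dt:
            return None  # prediction fails
    return pred
```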
If the prediction calculation does not fail in any of steps S220 to S250, the prediction for the to-be-predicted marker point succeeds. Besides the i-th frame position data of the 1st to-be-predicted/invisible marker point discussed above, the position data of other marker points on the limb that cannot be tracked can be predicted with reference to this method.
Example three:
Referring to FIG. 3, the present application further describes in detail, from the perspective of hardware processing, the apparatus for predicting rigid body marker point positions in this embodiment.
FIG. 3 is a schematic structural diagram of an apparatus for predicting rigid body marker point positions according to this embodiment. The apparatus 500 may vary considerably depending on configuration or performance and may include one or more processors (CPUs) 510 and a memory 520, as well as one or more storage media 530 (e.g., one or more mass storage devices) storing applications 533 or data 532. The memory 520 and the storage media 530 may be transient or persistent storage. A program stored on the storage medium 530 may include one or more modules (not shown), each of which may include a series of instruction operations for the apparatus 500. Further, the processor 510 may be configured to communicate with the storage medium 530 to execute the series of instruction operations in the storage medium 530 on the apparatus 500.
The device 500 may also include one or more power supplies 540, one or more wired or wireless network interfaces 550, one or more input-output interfaces 560, and/or one or more operating systems 531, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like. Those skilled in the art will appreciate that the configuration of the apparatus 500 shown in fig. 3 does not constitute a limitation of the apparatus for predicting rigid body marker point locations provided herein, and may include more or fewer components than shown, or some components in combination, or a different arrangement of components.
The present application also provides a computer-readable storage medium, which may be a non-volatile computer-readable storage medium, and which may also be a volatile computer-readable storage medium, having stored therein instructions, which, when executed on a computer, cause the computer to perform the steps of the above-mentioned method for predicting rigid body marker point positions.
Those skilled in the art will appreciate that all or part of the functions of the various methods in the above embodiments may be implemented by hardware, or may be implemented by computer programs. When all or part of the functions of the above embodiments are implemented by a computer program, the program may be stored in a computer-readable storage medium, and the storage medium may include: a read only memory, a random access memory, a magnetic disk, an optical disk, a hard disk, etc., and the program is executed by a computer to realize the above functions. For example, the program may be stored in a memory of the device, and when the program in the memory is executed by the processor, all or part of the functions described above may be implemented. In addition, when all or part of the functions in the above embodiments are implemented by a computer program, the program may be stored in a storage medium such as a server, another computer, a magnetic disk, an optical disk, a flash disk, or a removable hard disk, and may be downloaded or copied to a memory of a local device, or may be version-updated in a system of the local device, and when the program in the memory is executed by a processor, all or part of the functions in the above embodiments may be implemented.
The present invention has been described in terms of specific examples, which are provided to aid understanding of the invention and are not intended to be limiting. For a person skilled in the art to which the invention pertains, several simple deductions, modifications or substitutions may be made according to the idea of the invention.
Claims (10)
1. A method of predicting rigid body marker point locations, comprising the steps of:
S1, acquiring a tracking flag data set and a position data set of the marker points on each frame of the human body limb, where i is the frame index and s is the limb part;
S2, when the limb carries more than 3 marker points and at least 2 marker points can be correctly tracked, if the i-th frame position data of the 0th marker point of limb Ps is trackable/visible, the i-th frame position data of the 1st marker point is to be predicted/invisible, and the i-th frame position data of the 2nd marker point is trackable/visible, judging whether the marker points meet a first condition; if so, entering step S3, and if not, prediction of the i-th frame position data of the to-be-predicted/invisible marker point fails;
S3, calculating, for the 0th and 2nd trackable/visible marker points, the distance each moves between frame i-2 and frame i-1, and judging whether the distances meet a second condition; if so, entering step S4, and if not, prediction of the i-th frame position data of the to-be-predicted/invisible marker point fails;
S4, calculating the scale factor between frame i and frame i-1 for the trackable/visible marker points, and judging whether the scale factor is within a threshold range; if so, entering step S5, and if not, prediction of the i-th frame position data of the to-be-predicted/invisible marker point fails;
S5, predicting the position data of the 1st to-be-predicted/invisible marker point from the 0th and 2nd trackable/visible marker point position data and the scale factor; if the predicted position data meets a third condition, the prediction succeeds.
2. The method of predicting rigid body marker point locations of claim 1, wherein the first condition comprises:
3. The method of predicting rigid body marker point locations of claim 1, wherein said second condition comprises:
5. A method of predicting rigid body marker point positions as described in claim 4, wherein said predicting position data for a 1 st to-be-predicted/invisible marker point based on 0 th and 2 nd trackable/visible marker point position data, respectively, and a scale factor comprises:
8. The method of predicting rigid body marker point locations of claim 7, wherein the third condition comprises: a target value of the i-th frame position data of the 1st to-be-predicted/invisible marker point can be calculated by combining the motion velocity of the marker point over frames i-1 and i-2, and the distance between the predicted position data and this target value must be less than a set threshold dt.
9. An apparatus for predicting rigid body marker point locations, comprising: a memory having instructions stored therein and at least one processor, the memory and the at least one processor interconnected by a line; the at least one processor invokes the instructions in the memory to cause the apparatus to predict rigid body marker point locations to perform the method of any of claims 1-8.
10. A computer-readable storage medium, characterized by comprising a program executable by a processor to implement the method of any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111679549.4A (CN114463383A) | 2021-12-31 | 2021-12-31 | Method, equipment and storage medium for predicting rigid body mark point position
Publications (1)
Publication Number | Publication Date |
---|---|
CN114463383A | 2022-05-10
Family
ID=81408623
Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202111679549.4A | Method, equipment and storage medium for predicting rigid body mark point position | 2021-12-31 | 2021-12-31
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114463383A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200105000A1 (en) * | 2018-09-28 | 2020-04-02 | Glo Big Boss Ltd. | Systems and methods for real-time rigid body motion prediction |
CN110689584A (en) * | 2019-09-30 | 2020-01-14 | 深圳市瑞立视多媒体科技有限公司 | Active rigid body pose positioning method in multi-camera environment and related equipment |
CN111462089A (en) * | 2020-04-01 | 2020-07-28 | 深圳市瑞立视多媒体科技有限公司 | Virtual scene precision testing method based on optical dynamic capture system and related equipment |
CN111462187A (en) * | 2020-04-09 | 2020-07-28 | 成都大学 | Non-rigid target tracking method based on multi-feature fusion |
CN113592898A (en) * | 2021-05-13 | 2021-11-02 | 黑龙江省科学院智能制造研究所 | Method for reconstructing missing mark in motion capture |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |