CN111191536A - Motion capture system and method based on 5G communication technology - Google Patents
- Publication number: CN111191536A
- Application number: CN201911316328.3A
- Authority: CN (China)
- Prior art keywords: motion, model, motion capture, data, acquisition module
- Prior art date: 2019-12-19
- Legal status: Pending (the legal status is an assumption, not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06V40/20: Recognition of biometric, human-related or animal-related patterns in image or video data; movements or behaviour, e.g. gesture recognition
- G06F3/011: Input arrangements for interaction between user and computer; arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06T13/40: 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
Abstract
The invention discloses a motion capture system and method based on 5G communication technology, relating to the technical field of motion capture. A motion acquisition module is worn on the body and joint points of the subject and collects the subject's limb movements as motion data; a 5G communication module receives the data collected by the motion acquisition module, aggregates it, and sends it to a model workstation; the workstation creates a three-dimensional character model and drives it with the limb movements captured by the motion acquisition module. Beneficial effects: the system removes the wired connections required by traditional motion capture systems, lowers the requirements on capture scenes and venues, increases the performer's freedom of movement so that performances are no longer hindered by tangled cables, reduces technical cost, facilitates consumer-level adoption, and captures a more comprehensive range of motions.
Description
Technical Field
The invention relates to the technical field of motion capture, and in particular to a motion capture system and method based on 5G communication technology.
Background
3D animation is currently a popular field in film and television, enabling special effects that were previously impossible, and 3D characters have become nearly ubiquitous. Traditional motion capture systems, however, are generally operated in a fixed room: because they rely on USB data cable links, they can neither transmit data remotely nor be used wirelessly, and the wired connection limits the actors' range of motion as well as the capture scenes and venues. This is unfavourable to reducing technical cost, to consumer-level adoption, and to capturing a comprehensive range of motion types.
Disclosure of Invention
To solve at least one of the technical problems in the prior art, the invention provides a motion capture system and method based on 5G communication technology, in which a motion acquisition module is worn on the body and joint points of the subject and collects the subject's limb movements as motion data; a 5G communication module receives the data collected by the motion acquisition module, aggregates it, and sends it to a model workstation; and the workstation creates a three-dimensional character model and drives it with the limb movements captured by the motion acquisition module.
The first aspect of the technical scheme adopted by the invention is a motion capture system based on 5G communication technology, comprising: a motion acquisition module, to be worn on the body and joint points of the subject, for collecting the subject's limb movements as motion data; a 5G communication module for receiving the data collected by the motion acquisition module, aggregating it, and sending it to a model workstation; and the model workstation, for creating a three-dimensional character model and driving it with the limb movements captured by the motion acquisition module.
Beneficial effects: the system removes the wired connections required by traditional motion capture systems, lowers the requirements on capture scenes and venues, increases the performer's freedom of movement so that performances are no longer hindered by tangled cables, reduces technical cost, facilitates consumer-level adoption, and captures a more comprehensive range of motions.
According to the first aspect of the invention, the motion acquisition module comprises an inertial sensing unit for acquiring data on the subject's joint posture.
According to the first aspect of the invention, the motion acquisition module further comprises: a pressure sensing unit for measuring the pressure applied at each joint of the subject; and a rotation sensing unit for measuring the bending angles of the subject's joints and fingers.
According to the first aspect of the invention, the model workstation further comprises: a data processing module for receiving the information sent by the 5G communication module and processing it to obtain processed motion data; and a skeleton model establishing module for establishing a skeleton model and applying the motion data processed by the data processing module.
According to the first aspect of the invention, the model workstation further comprises: a data calculation unit for performing real-time attitude solving on a computer to drive all joints of the human body model in real time; and a rotation parameter calculating unit for calculating the rotation matrices between bones based on the attitude data acquired by the motion acquisition module.
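To make the rotation parameter calculating unit concrete, the sketch below shows one possible way, not prescribed by the patent, to turn the quaternion attitudes reported by two inertial sensors into the rotation matrix of a child bone relative to its parent bone; the (w, x, y, z) quaternion convention and the use of NumPy are assumptions.

```python
# Minimal sketch: relative rotation matrix between a parent bone and a child
# bone from the global attitudes reported by their inertial sensors.
import numpy as np

def quat_to_matrix(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def relative_rotation(parent_quat, child_quat):
    """Rotation matrix of the child bone expressed in the parent bone's frame."""
    r_parent = quat_to_matrix(np.asarray(parent_quat, dtype=float))
    r_child = quat_to_matrix(np.asarray(child_quat, dtype=float))
    return r_parent.T @ r_child  # R_parent^-1 * R_child

# Example: upper-arm and forearm sensors reporting global attitudes;
# the child is rotated about Y by roughly 45 degrees relative to the parent.
print(relative_rotation([1, 0, 0, 0], [0.9239, 0, 0.3827, 0]))
```

A full implementation would additionally calibrate each sensor's mounting offset relative to its bone, which the sketch omits.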
The second aspect of the technical scheme adopted by the invention is a motion capture method based on 5G communication technology, comprising the following steps: S10, the subject wears the motion acquisition module, which is installed on the subject's body and joint points, and the subject's limb movements are collected as motion data; S20, the 5G communication module receives the data collected by the motion acquisition module, aggregates it, and sends it to the model workstation; and S30, the model workstation creates a three-dimensional character model and drives it with the limb movements captured by the motion acquisition module.
Beneficial effects: the method removes the wired connections required by traditional motion capture systems, lowers the requirements on capture scenes and venues, increases the performer's freedom of movement so that performances are no longer hindered by tangled cables, reduces technical cost, facilitates consumer-level adoption, and captures a more comprehensive range of motions.
According to the second aspect of the invention, the limb movements are data on the subject's joint posture collected by the inertial sensing unit.
According to the second aspect of the invention, S10 further comprises: S11, measuring the pressure applied at each joint of the subject; and S12, measuring the bending angles of the subject's joints and fingers.
According to the second aspect of the invention, S30 further comprises: S31, receiving the information sent by the 5G communication module and processing it to obtain processed motion data; and S32, establishing a skeleton model and applying the collected motion data.
According to the second aspect of the invention, S30 further comprises: an attitude solving step, in which real-time attitude solving is performed on a computer to drive all joints of the human body model in real time; and a rotation parameter calculating step, in which the rotation matrices between bones are calculated based on the attitude data acquired by the motion acquisition module.
Drawings
The invention is further described below with reference to the accompanying drawings and embodiments.
FIG. 1 is a schematic diagram of a system architecture according to a preferred embodiment of the present invention;
FIG. 2 is a schematic flow diagram of a method according to a preferred embodiment of the present invention;
FIG. 3 is a schematic diagram of motion capture according to a preferred embodiment of the present invention;
FIG. 4 is a schematic diagram of an inertial sensor according to a preferred embodiment of the invention.
Detailed Description
Reference will now be made in detail to the preferred embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout.
In the description of the invention, it should be understood that references to orientation or positional relationships such as upper, lower, front, rear, left and right are based on the orientations or positional relationships shown in the drawings, are used only for convenience and simplicity of description, and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they should therefore not be construed as limiting the invention.
It should be noted that, unless otherwise specified, when a feature is referred to as being "fixed" or "connected" to another feature, it may be directly fixed or connected to the other feature or indirectly fixed or connected to the other feature. Furthermore, the descriptions of upper, lower, left, right, etc. used in the present disclosure are only relative to the mutual positional relationship of the constituent parts of the present disclosure in the drawings. As used in this disclosure, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Furthermore, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art. The terminology used in the description herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any combination of one or more of the associated listed items.
It will be understood that, although the terms first, second, third, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element of the same type from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present disclosure. The use of any and all examples, or exemplary language ("e.g.," such as "or the like") provided herein, is intended merely to better illuminate embodiments of the invention and does not pose a limitation on the scope of the invention unless otherwise claimed.
Referring to FIG. 1, which is a schematic diagram of the system structure according to a preferred embodiment of the invention, the system includes:
a motion acquisition module, to be worn on the body and joint points of the subject, for collecting the subject's limb movements as motion data;
a 5G communication module for receiving the data collected by the motion acquisition module, aggregating the data, and sending the data to the model workstation; and
the model workstation, for creating a three-dimensional character model and driving the three-dimensional character model with the limb movements captured by the motion acquisition module. A minimal end-to-end sketch of this data path follows.
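The sketch below simulates the acquisition and forwarding steps just listed; the 5G radio link itself is outside the code, so a plain UDP socket stands in for the "aggregate and send to the workstation" role, and the frame layout, endpoint address, and sensor values are all invented for the example.

```python
# Hypothetical data path: poll the worn sensors, batch the frames, and forward
# them to the model workstation. Transport details are assumptions, not the
# patent's protocol.
import json
import socket
import time

WORKSTATION_ADDR = ("127.0.0.1", 9500)  # hypothetical model-workstation endpoint

def read_acquisition_module():
    """Pretend to poll every worn sensor node once and return one frame."""
    return {
        "timestamp": time.time(),
        "joints": {
            "left_elbow": {"quat": [1.0, 0.0, 0.0, 0.0], "bend_deg": 12.0},
            "right_knee": {"quat": [0.97, 0.26, 0.0, 0.0], "bend_deg": 30.0},
        },
    }

def communication_module(frames, sock):
    """Summarize the collected frames and forward them to the model workstation."""
    payload = json.dumps({"frames": frames}).encode("utf-8")
    sock.sendto(payload, WORKSTATION_ADDR)

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    frames = [read_acquisition_module() for _ in range(3)]  # gather a small batch
    communication_module(frames, sock)
    sock.close()
```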
The motion acquisition module comprises:
an inertial sensing unit for acquiring data on the subject's joint posture.
The motion acquisition module further comprises:
a pressure sensing unit for measuring the pressure applied at each joint of the subject; and
a rotation sensing unit for measuring the bending angles of the subject's joints and fingers.
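For illustration, one per-joint sample combining the three sensing units above could be organized as follows; the field names and units are assumptions for this sketch, since the patent does not define a data format.

```python
# A hypothetical per-joint capture sample: inertial attitude, inter-joint
# pressure, and bend angle, matching the three sensing units described above.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class JointSample:
    joint_id: str                                      # e.g. "left_wrist"
    attitude_quat: Tuple[float, float, float, float]   # inertial sensing unit, (w, x, y, z)
    pressure_kpa: float                                 # pressure sensing unit, pressure applied at the joint
    bend_angle_deg: float                               # rotation sensing unit, joint/finger flexion

sample = JointSample("left_index_finger", (1.0, 0.0, 0.0, 0.0), 4.2, 35.0)
```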
The model workstation further comprises:
a data processing module for receiving the information sent by the 5G communication module and processing it to obtain processed motion data; and
a skeleton model establishing module for establishing a skeleton model and applying the motion data processed by the data processing module.
The model workstation further comprises:
a data calculation unit for performing real-time attitude solving on a computer to drive all joints of the human body model in real time; and
a rotation parameter calculating unit for calculating the rotation matrices between bones based on the attitude data acquired by the motion acquisition module.
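The patent does not state which algorithm the data calculation unit uses for real-time attitude solving. One common choice for inertial units is a complementary filter that blends integrated gyroscope rates with accelerometer-derived inclination; it is sketched below for roll and pitch only, as an assumption rather than the patent's method.

```python
# One complementary-filter update step: integrate gyro rates, then correct
# the slow drift using the accelerometer's gravity direction.
import math

def complementary_filter(roll, pitch, gyro, accel, dt, alpha=0.98):
    gx, gy, _ = gyro                    # angular rates in rad/s (x, y, z)
    ax, ay, az = accel                  # specific force in m/s^2
    # Integrate gyro rates
    roll += gx * dt
    pitch += gy * dt
    # Inclination from the accelerometer (valid when the sensor is near static)
    accel_roll = math.atan2(ay, az)
    accel_pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    # Blend: trust the gyro short-term, the accelerometer long-term
    roll = alpha * roll + (1 - alpha) * accel_roll
    pitch = alpha * pitch + (1 - alpha) * accel_pitch
    return roll, pitch

r, p = 0.0, 0.0
r, p = complementary_filter(r, p, gyro=(0.01, 0.0, 0.0), accel=(0.0, 0.0, 9.81), dt=0.01)
```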
Referring to FIG. 2, which is a flow chart of the method according to a preferred embodiment of the invention, the method includes:
S10, the subject wears the motion acquisition module, which is installed on the subject's body and joint points, and the subject's limb movements are collected as motion data;
S20, the 5G communication module receives the data collected by the motion acquisition module, aggregates the data, and sends the data to the model workstation; and
S30, the model workstation creates a three-dimensional character model and drives the three-dimensional character model with the limb movements captured by the motion acquisition module.
The limb movements are data on the subject's joint posture collected by the inertial sensing unit.
S10 further comprises:
S11, measuring the pressure applied at each joint of the subject; and
S12, measuring the bending angles of the subject's joints and fingers.
S30 further comprises:
S31, receiving the information sent by the 5G communication module and processing it to obtain processed motion data; and
S32, establishing a skeleton model and applying the collected motion data, for example as in the sketch below.
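One way S32 could apply the collected motion data is plain forward kinematics: each joint's local rotation (derived from the sensor attitudes) is accumulated down a parent/child chain to place the skeleton. The tiny three-joint chain and offsets below are invented for illustration and are not the patent's skeleton definition.

```python
# Forward kinematics over a minimal joint chain driven by per-joint rotations.
import numpy as np

SKELETON = {            # joint -> (parent, local offset from parent, in metres)
    "hips":  (None,    np.array([0.0, 1.0, 0.0])),
    "spine": ("hips",  np.array([0.0, 0.3, 0.0])),
    "neck":  ("spine", np.array([0.0, 0.3, 0.0])),
}

def world_positions(local_rotations):
    """Accumulate rotations down the chain and return each joint's world position."""
    world_rot, world_pos = {}, {}
    for joint, (parent, offset) in SKELETON.items():
        if parent is None:
            world_rot[joint] = local_rotations[joint]
            world_pos[joint] = offset
        else:
            world_rot[joint] = world_rot[parent] @ local_rotations[joint]
            world_pos[joint] = world_pos[parent] + world_rot[parent] @ offset
    return world_pos

identity = np.eye(3)
print(world_positions({"hips": identity, "spine": identity, "neck": identity}))
```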
S30 further comprises:
an attitude solving step, in which real-time attitude solving is performed on a computer to drive all joints of the human body model in real time; and
a rotation parameter calculating step, in which the rotation matrices between bones are calculated based on the attitude data acquired by the motion acquisition module.
Referring to FIG. 3, which is a schematic diagram of motion capture according to a preferred embodiment of the invention:
the actor wears the motion acquisition module, which consists of straps and inertial sensors arranged at the joints.
Referring to FIG. 4, which is a schematic diagram of an inertial sensor according to a preferred embodiment of the invention:
the signal processing center of the motion acquisition module consists of an indicator light, a host and an antenna, and the strap interfaces connect the individual straps to hold the sensors in place.
It should be recognized that embodiments of the present invention can be realized and implemented by computer hardware, a combination of hardware and software, or by computer instructions stored in a non-transitory computer readable memory. The methods may be implemented in a computer program using standard programming techniques, including a non-transitory computer-readable storage medium configured with the computer program, where the storage medium so configured causes a computer to operate in a specific and predefined manner, according to the methods and figures described in the detailed description. Each program may be implemented in a high level procedural or object oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language. Furthermore, the program can be run on a programmed application specific integrated circuit for this purpose.
Further, the operations of processes described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The processes described herein (or variations and/or combinations thereof) may be performed under the control of one or more computer systems configured with executable instructions, and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) collectively executed on one or more processors, by hardware, or combinations thereof. The computer program includes a plurality of instructions executable by one or more processors.
Further, the method may be implemented in any type of computing platform operatively connected to a suitable interface, including but not limited to a personal computer, mini computer, mainframe, workstation, networked or distributed computing environment, separate or integrated computer platform, or in communication with a charged particle tool or other imaging device, and the like. Aspects of the invention may be embodied in machine-readable code stored on a non-transitory storage medium or device, whether removable or integrated into a computing platform, such as a hard disk, optically read and/or write storage medium, RAM, ROM, or the like, such that it may be read by a programmable computer, which when read by the storage medium or device, is operative to configure and operate the computer to perform the procedures described herein. Further, the machine-readable code, or portions thereof, may be transmitted over a wired or wireless network. The invention described herein includes these and other different types of non-transitory computer-readable storage media when such media include instructions or programs that implement the steps described above in conjunction with a microprocessor or other data processor. The invention also includes the computer itself when programmed according to the methods and techniques described herein.
A computer program can be applied to input data to perform the functions described herein to transform the input data to generate output data that is stored to non-volatile memory. The output information may also be applied to one or more output devices, such as a display. In a preferred embodiment of the invention, the transformed data represents physical and tangible objects, including particular visual depictions of physical and tangible objects produced on a display.
The above is only a preferred embodiment of the present invention, and the invention is not limited to this embodiment. Any modification, equivalent substitution or improvement made within the spirit and principle of the invention, as long as it achieves the technical effects of the invention by the same means, falls within the protection scope of the invention. The technical solution and/or its implementation may also be modified and varied in other ways within the protection scope of the invention.
Claims (10)
1. A motion capture system based on 5G communication technology, comprising:
a motion acquisition module, to be worn on the body and joint points of a subject, for collecting the subject's limb movements as motion data;
a 5G communication module for receiving the data collected by the motion acquisition module, aggregating the data, and sending the data to a model workstation; and
the model workstation, for creating a three-dimensional character model and driving the three-dimensional character model with the limb movements captured by the motion acquisition module.
2. The motion capture system based on 5G communication technology according to claim 1, wherein the motion acquisition module comprises:
an inertial sensing unit for acquiring data on the subject's joint posture.
3. The motion capture system based on 5G communication technology according to claim 1, wherein the motion acquisition module further comprises:
a pressure sensing unit for measuring the pressure applied at each joint of the subject; and
a rotation sensing unit for measuring the bending angles of the subject's joints and fingers.
4. The motion capture system based on 5G communication technology according to claim 1, wherein the model workstation further comprises:
a data processing module for receiving the information sent by the 5G communication module and processing it to obtain processed motion data; and
a skeleton model establishing module for establishing a skeleton model and applying the motion data processed by the data processing module.
5. The motion capture system based on 5G communication technology according to claim 1, wherein the model workstation further comprises:
a data calculation unit for performing real-time attitude solving on a computer to drive all joints of the human body model in real time; and
a rotation parameter calculating unit for calculating the rotation matrices between bones based on the attitude data acquired by the motion acquisition module.
6. A motion capture method based on 5G communication technology, characterized by comprising the following steps:
S10, the subject wears the motion acquisition module, which is installed on the subject's body and joint points, and the subject's limb movements are collected as motion data;
S20, the 5G communication module receives the data collected by the motion acquisition module, aggregates the data, and sends the data to the model workstation; and
S30, the model workstation creates a three-dimensional character model and drives the three-dimensional character model with the limb movements captured by the motion acquisition module.
7. The motion capture method based on 5G communication technology according to claim 6, wherein the limb movements are data on the subject's joint posture collected by an inertial sensing unit.
8. The motion capture method based on 5G communication technology according to claim 6, wherein S10 further comprises:
S11, measuring the pressure applied at each joint of the subject; and
S12, measuring the bending angles of the subject's joints and fingers.
9. The motion capture method based on 5G communication technology according to claim 6, wherein S30 further comprises:
S31, receiving the information sent by the 5G communication module and processing it to obtain processed motion data; and
S32, establishing a skeleton model and applying the collected motion data.
10. The motion capture method based on 5G communication technology according to claim 6, wherein S30 further comprises:
an attitude solving step of performing real-time attitude solving on a computer to drive all joints of the human body model in real time; and
a rotation parameter calculating step of calculating the rotation matrices between bones based on the attitude data acquired by the motion acquisition module.
Priority Applications (1)
- CN201911316328.3A (CN111191536A): priority date 2019-12-19, filing date 2019-12-19, Motion capture system and method based on 5G communication technology
Publications (1)
- CN111191536A: published 2020-05-22
Family ID: 70707365
Family Applications (1)
- CN201911316328.3A: filed 2019-12-19, priority date 2019-12-19, status Pending
Country Status (1)
- CN: CN111191536A (en)
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107274464A (en) * | 2017-05-31 | 2017-10-20 | 珠海金山网络游戏科技有限公司 | A kind of methods, devices and systems of real-time, interactive 3D animations |
CN109241909A (en) * | 2018-09-06 | 2019-01-18 | 闫维新 | A kind of long-range dance movement capture evaluating system based on intelligent terminal |
CN109785415A (en) * | 2018-12-18 | 2019-05-21 | 武汉西山艺创文化有限公司 | A kind of movement acquisition system and its method based on ectoskeleton technology |
TWI666571B (en) * | 2018-01-25 | 2019-07-21 | 首羿國際股份有限公司 | Motion capture system for virtual reality environment |
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- RJ01: Rejection of invention patent application after publication (application publication date: 2020-05-22)