CN113505637A - Real-time virtual anchor motion capture method and system for live streaming - Google Patents
- Publication number
- CN113505637A (application CN202110587213.9A)
- Authority
- CN
- China
- Legal status: Pending
Landscapes
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a real-time virtual anchor motion capture method and system for live streaming. First budget information is obtained according to basic information of a user, and a first MARKER point quantity analysis result is obtained based on first virtual character information; the first result is screened to obtain a second MARKER point quantity analysis result. The motion capture importance of the first virtual character information is analyzed, the second MARKER point quantity analysis result and the first analysis result are input into a sampling point intelligent distribution model, and a first MARKER point distribution scheme is obtained and executed. A real-time feedback result of the first MARKER point distribution scheme is obtained, and a characteristic motion change coordinate is constructed based on the real-time feedback result to realize real-time motion capture of the first user. This solves the technical problem in the prior art that, during real-time virtual motion capture, the capture design cannot be performed intelligently according to the user's actual requirements, which leaves a gap between the achieved motion capture effect and the user's ideal effect.
Description
Technical Field
The invention relates to the field of live motion capture, and in particular to a real-time virtual anchor motion capture method and system for live streaming.
Background
A live virtual anchor works as follows: the streamer selects a virtual character image, and customized feature capture picks up the streamer's appearance, motion characteristics, and the like in real time. The captured features drive a synthesized virtual character that is broadcast live, responding in real time to the streamer's speech and actions, interacting with the live audience, and realistically simulating and reproducing the complex expressions and actions of a real person, so as to achieve the ideal expected live broadcast effect.
However, in implementing the technical solution of the invention in the embodiments of the present application, the inventors found that the above technology has at least the following technical problem:
in prior art real-time virtual motion capture, the capture design cannot be performed intelligently according to the user's actual requirements, leaving a gap between the motion capture effect achieved and the user's ideal effect.
Disclosure of Invention
The embodiments of the present application provide a real-time virtual anchor motion capture method and system for live streaming. They solve the technical problem in the prior art that real-time virtual motion capture cannot be designed intelligently according to a user's actual requirements, which leaves a gap between the motion capture effect and the user's ideal effect. The solution intelligently adapts the motion capture design to the user's information and requirements, guarantees the capture effect while keeping the user's cost under control, and thereby improves the user's live broadcast experience.
In view of the foregoing problems, embodiments of the present application provide a real-time virtual anchor motion capture method and system for live streaming.
In a first aspect, the present application provides a real-time virtual anchor motion capture method for a live stream. The method is applied to a motion capture analysis processing system that is communicatively connected to a first light receiving device, and includes: obtaining basic information of a first user; obtaining first budget information according to the basic information, where the first budget information is the first user's total live broadcast cost budget; obtaining first virtual character information of the first user, and performing MARKER point quantity analysis based on the first virtual character information to obtain a first MARKER point quantity analysis result; obtaining information of a first device of the first user according to the first budget information; obtaining a first analysis instruction, and screening the MARKER point quantity according to the first analysis instruction, based on the information of the first device, to obtain a second MARKER point quantity analysis result; obtaining a second analysis instruction, and performing motion capture importance analysis on the first virtual character information according to the second analysis instruction to obtain a first analysis result; inputting the second MARKER point quantity analysis result and the first analysis result into a sampling point intelligent distribution model to obtain a first MARKER point distribution scheme; obtaining a first execution instruction, and executing the first MARKER point distribution scheme on the first user according to the first execution instruction; obtaining a real-time feedback result of the first MARKER point distribution scheme through the first light receiving device; and constructing a characteristic action change coordinate based on the real-time feedback result, and realizing real-time motion capture of the first user according to the characteristic action change coordinate.
In another aspect, the present application further provides a real-time virtual anchor motion capture system for live streaming, the system comprising: a first obtaining unit, configured to obtain basic information of a first user; a second obtaining unit, configured to obtain first budget information according to the basic information, where the first budget information is the first user's total live broadcast cost budget; a third obtaining unit, configured to obtain first virtual character information of the first user and perform MARKER point quantity analysis based on it to obtain a first MARKER point quantity analysis result; a fourth obtaining unit, configured to obtain information of the first device of the first user according to the first budget information; a fifth obtaining unit, configured to obtain a first analysis instruction and screen the MARKER point quantity according to the first analysis instruction, based on the information of the first device, to obtain a second MARKER point quantity analysis result; a sixth obtaining unit, configured to obtain a second analysis instruction and perform motion capture importance analysis on the first virtual character information according to it to obtain a first analysis result; a first input unit, configured to input the second MARKER point quantity analysis result and the first analysis result into a sampling point intelligent distribution model to obtain a first MARKER point distribution scheme; a seventh obtaining unit, configured to obtain a first execution instruction and execute the first MARKER point distribution scheme on the first user according to it; an eighth obtaining unit, configured to obtain a real-time feedback result of the first MARKER point distribution scheme through a first light receiving device; and a first implementation unit, configured to construct a characteristic action change coordinate based on the real-time feedback result and realize real-time motion capture of the first user according to the characteristic action change coordinate.
In a third aspect, the present application provides a real-time virtual anchor motion capture system for live streaming, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the steps of the method of the first aspect when executing the program.
One or more technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
basic information of a first user is obtained, from which the first user's first budget information is derived. MARKER point quantity analysis is performed on the first virtual character information selected by the first user, yielding a first MARKER point quantity analysis result. Information of the first user's first device is obtained through the budget information, and under a first analysis instruction the MARKER point quantity is screened against that device, selecting a quantity adapted to the first user. A second analysis instruction then drives a motion capture importance analysis of the first virtual character, producing a first analysis result. The first analysis result and the screened MARKER point quantity are input into a sampling point intelligent distribution model to obtain a first MARKER point distribution scheme, and a first execution instruction executes that scheme on the first user. The real-time feedback result of the scheme is obtained through the first light receiving device, a characteristic action change coordinate is established from the feedback, and real-time motion capture of the first user is realized through that coordinate. In this way, the motion capture design is adapted intelligently to the user's information and requirements, the capture effect is guaranteed while the user's cost is controlled, and the technical effect of improving the user's live broadcast experience is achieved.
The foregoing description is only an overview of the technical solutions of the present application. To make the technical means of the application clearer, and to make the above and other objects, features, and advantages more readily understandable, a detailed description of the application follows.
Drawings
Fig. 1 is a schematic flowchart illustrating a real-time virtual anchor motion capture method for a live stream according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a real-time virtual anchor motion capture system for a live stream according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an exemplary electronic device according to an embodiment of the present application.
Description of reference numerals: a first obtaining unit 11, a second obtaining unit 12, a third obtaining unit 13, a fourth obtaining unit 14, a fifth obtaining unit 15, a sixth obtaining unit 16, a first input unit 17, a seventh obtaining unit 18, an eighth obtaining unit 19, a first implementing unit 20, a bus 300, a receiver 301, a processor 302, a transmitter 303, a memory 304, and a bus interface 305.
Detailed Description
The embodiments of the present application provide a real-time virtual anchor motion capture method and system for live streaming, solving the technical problem that, in prior art real-time virtual motion capture, the capture design cannot be performed intelligently according to the user's actual requirements, leaving the motion capture effect inconsistent with the user's ideal effect. The solution intelligently adapts the motion capture design to the user's information and requirements, guarantees the capture effect while controlling the user's cost, and improves the user's live broadcast experience. Example embodiments according to the present application will be described in detail below with reference to the accompanying drawings. The described embodiments are merely some, not all, embodiments of the present application, and the present application is not limited to the example embodiments described herein.
Summary of the application
A live virtual anchor works as follows: the streamer selects a virtual character image, customized feature capture picks up the streamer's appearance, motion characteristics, and the like in real time, and the captured features drive a synthesized virtual character that is broadcast live, responding in real time to the streamer's speech and actions, interacting with the live audience, and realistically simulating and reproducing the complex expressions and actions of a real person, so as to achieve the ideal expected live broadcast effect. In prior art real-time virtual motion capture, however, the capture design cannot be performed intelligently according to the user's actual requirements, leaving the motion capture effect inconsistent with the user's ideal effect.
In view of the above technical problems, the technical solution provided by the present application has the following general idea:
the embodiment of the application provides a real-time virtual anchor motion capture method for a live stream, which is applied to a motion capture analysis processing system, wherein the system is in communication connection with a first light receiving device, and the method comprises the following steps: acquiring basic information of a first user; obtaining first budget information according to the basic information, wherein the first budget information is a total live broadcast cost budget of the first user; obtaining first virtual character information of the first user, and performing MARKER point quantity analysis based on the first virtual character information to obtain a first MARKER point quantity analysis result; obtaining information of a first device of the first user according to the first budget information; obtaining a first analysis instruction, and performing quantity screening on the number of MARKER points according to the information of the first analysis instruction based on the first equipment to obtain a second MARKER point quantity analysis result; obtaining a second analysis instruction, and performing motion capture importance degree analysis on the first virtual character information according to the second analysis instruction to obtain a first analysis result; inputting the second MARKER point quantity analysis result and the first analysis result into a sampling point intelligent distribution model to obtain a first MARKER point distribution scheme; obtaining a first execution instruction, and executing the first MARKER point distribution scheme on the first user according to the first execution instruction; obtaining a real-time feedback result of the first MARKER point distribution scheme through the first light receiving device; and constructing a characteristic action change coordinate based on the real-time feedback result, and realizing real-time action capture of the first user according to the characteristic action change 
coordinate.
Having thus described the general principles of the present application, various non-limiting embodiments thereof will now be described in detail with reference to the accompanying drawings.
Example one
As shown in fig. 1, an embodiment of the present application provides a real-time virtual anchor motion capture method for a live stream, where the method is applied to a motion capture analysis processing system, the system is communicatively connected to a first light receiving device, and the method includes:
step S100: acquiring basic information of a first user;
specifically, the motion capture analysis processing system is a system for positioning, observing and processing a mark point acquired in real time, and performs coordinate processing after collecting feedback light of a MARKER point to determine a three-dimensional space coordinate system, and determines the position, direction and motion trajectory of a target point in space based on the three-dimensional coordinate system to realize real-time tracking of a target. The motion capture analysis processing system is in communication connection with the first light receiving device. Obtaining basic information of the first user, wherein the basic information includes but is not limited to financial information, personal information, budget invested, live broadcast equipment and the like of the first user.
Step S200: obtaining first budget information according to the basic information, wherein the first budget information is a total live broadcast cost budget of the first user;
step S300: obtaining first virtual character information of the first user, and performing MARKER point quantity analysis based on the first virtual character information to obtain a first MARKER point quantity analysis result;
specifically, budget information of a device of the first user preparing to perform live virtual character broadcasting is obtained based on the basic information of the first user, the device includes but is not limited to the computer of the first user, the device for real-time motion capture and processing, and the like, and based on the basic information of the first user, the information of the virtual character of the first user to be simulated is obtained, i.e., the first virtual character information, obtains MARKER point distribution information of the first virtual character information based on the first virtual character information, the MARKER point is a characteristic point for capturing characteristics of the user, the characteristic point is used for communicating the user and the virtual character, and controlling the action of the virtual character through the characteristic action of the user, and obtaining the quantity analysis result of the MARKER points of the first virtual character based on the information such as the complexity, the characteristic points and the like of the virtual character.
Step S400: obtaining information of a first device of the first user according to the first budget information;
step S500: obtaining a first analysis instruction, and performing quantity screening on the number of MARKER points according to the information of the first analysis instruction based on the first equipment to obtain a second MARKER point quantity analysis result;
specifically, the device information of the first user is obtained according to the budget information of the first user, the number of MARKER points supported by the device of the first user is analyzed based on the performance of the device information of the first user, that is, a MARKER point screening instruction is obtained according to the device processing capability of the first user, the analysis result of the number of the first MARKER points is further screened based on the instruction, and a second MARKER point number analysis result adapted to the device of the first user is obtained.
Step S600: obtaining a second analysis instruction, and performing motion capture importance degree analysis on the first virtual character information according to the second analysis instruction to obtain a first analysis result;
step S700: inputting the second MARKER point quantity analysis result and the first analysis result into a sampling point intelligent distribution model to obtain a first MARKER point distribution scheme;
specifically, after the virtual character of the first user is selected, the virtual character selected by the first user is analyzed according to the second analysis instruction, the importance degrees of the mark points of the first virtual character are sorted according to the importance degree of the mark points of the first virtual character, namely, the importance degrees are sorted to obtain a first analysis result, and the first analysis result and the second MARKER point number analysis result are input into a sampling point intelligent distribution model to obtain a distribution scheme of the MARKER points of the first virtual character adapted to the first user. Furthermore, the sampling point intelligent distribution model is obtained through training of a large amount of training data, supervised learning is carried out on the expression results of the quantity and distribution of MARKER points based on a large amount of same models, and the obtained model capable of analyzing and matching the MARKER points more accurately and rapidly is obtained. And a more accurate and suitable first MAREKR distribution scheme can be obtained based on the intelligent distribution model of the sampling points.
Step S800: obtaining a first execution instruction, and executing the first MARKER point distribution scheme on the first user according to the first execution instruction;
step S900: obtaining a real-time feedback result of the first MARKER point distribution scheme through the first light receiving device;
specifically, the first execution instruction is configured to execute the first MARKER point distribution scheme on the first user, implement construction of the motion capture environment of the first user based on the first MARKER point distribution scheme, and receive, through the first light receiving device, a real-time feedback result of the MARKER point of the first user according to the construction result.
Step S1000: and constructing a characteristic action change coordinate based on the real-time feedback result, and realizing real-time action capture of the first user according to the characteristic action change coordinate.
Specifically, the real-time feedback results are received continuously and integrated into the characteristic action change coordinate of the first user, and the first user's real-time actions are captured based on that coordinate. The motion capture design is thus adapted intelligently to the user's information and requirements, the capture effect is guaranteed while the user's cost is controlled, and the technical effect of improving the user's live broadcast experience is achieved.
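The "change coordinate" driving the virtual character can be understood as per-marker displacement from a calibrated reference pose. A minimal sketch under that reading; the function and pose names are hypothetical:

```python
def change_coordinates(reference_pose: dict, live_pose: dict) -> dict:
    """Per-marker displacement of the live feedback from the reference
    pose. These deltas, recomputed every frame, are what the capture
    system maps onto the virtual character's corresponding points."""
    return {marker: tuple(l - r for r, l in zip(reference_pose[marker],
                                                live_pose[marker]))
            for marker in reference_pose}
```

For instance, a wrist marker calibrated at the origin that is observed at `(1, 2, 3)` produces the delta `(1, 2, 3)`, which moves the character's wrist by the same amount.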
Further, the embodiment of the present application further includes:
step S1110: obtaining first MARKER point information through the first MARKER point distribution scheme;
step S1120: acquiring first image information through the first image acquisition device, wherein the first image information is image information containing the first MARKER point information;
step S1130: acquiring second image information through the second image acquisition device, wherein the second image information is image information containing the first MARKER point information, and the acquisition angles of the first image information and the second image information are different;
step S1140: obtaining the compensation coordinate of the first MARKER point according to the first image information and the second image information;
step S1150: obtaining a first anomaly analysis instruction, and performing anomaly analysis on the coordinate change of a first MARKER point in the characteristic action change coordinate according to the first anomaly analysis instruction to obtain a first anomaly analysis result;
step S1160: and adjusting the characteristic action change coordinate based on the first abnormal analysis result and the compensation coordinate to obtain a second characteristic action change coordinate, and realizing real-time action capture of the first user according to the second characteristic action change coordinate.
Specifically, to further ensure the accuracy of the motion capture, the system further includes a communicatively connected first image acquisition device and second image acquisition device, which capture images from different angles. An image acquisition device captures images in real time and can also track the changes of one or more feature points, depending on how the feature points are set. A first MARKER point in the first MARKER point distribution scheme is obtained; first image information containing the first MARKER point is obtained through the first image acquisition device, and second image information of the first MARKER point is obtained through the second image acquisition device, the first image and the second image having different acquisition angles. Based on the position information of the first MARKER point in the first image and the second image, its coordinate is obtained in the same coordinate system as the characteristic action change coordinate, forming the compensation coordinate. The coordinate change values within the characteristic action change coordinate are then analyzed for anomalies according to the first anomaly analysis instruction, producing an anomaly analysis result for the coordinate change of the first MARKER point. The characteristic action change coordinate is adjusted based on the anomaly analysis result and the compensation coordinate, a second characteristic action change coordinate is obtained, and the first user's actions are captured in real time based on it, achieving the technical effect of more accurate motion capture of the first user.
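Recovering a 3-D compensation coordinate from two views is classically done by intersecting the two viewing rays; when measurement noise keeps them from meeting exactly, the midpoint of their closest approach is a common choice. This is a generic two-view sketch, not the patent's disclosed procedure, and it assumes calibrated ray origins and directions are already available:

```python
def triangulate_midpoint(o1, d1, o2, d2):
    """Compensation coordinate of a MARKER point: the midpoint of the
    closest approach between two viewing rays, each given by an
    origin o and a direction d in the shared coordinate system."""
    def dot(u, v): return sum(a * b for a, b in zip(u, v))
    def sub(u, v): return tuple(a - b for a, b in zip(u, v))
    def add(u, v): return tuple(a + b for a, b in zip(u, v))
    def mul(u, k): return tuple(a * k for a in u)

    r = sub(o1, o2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b          # zero only for parallel rays
    t = (b * e - c * d) / denom    # parameter along ray 1
    s = (a * e - b * d) / denom    # parameter along ray 2
    p1 = add(o1, mul(d1, t))       # closest point on ray 1
    p2 = add(o2, mul(d2, s))       # closest point on ray 2
    return mul(add(p1, p2), 0.5)
```

Two rays fired from `(0,0,0)` and `(2,0,0)` toward a marker at `(1,1,0)` intersect exactly, so the midpoint coincides with the marker's true position.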
Further, the embodiment of the present application further includes:
step S1161: constructing a MARKER point coordinate change database according to the image sets acquired by the first image acquisition device and the second image acquisition device;
step S1162: performing anomaly comparison analysis on the characteristic action change coordinate against the MARKER point coordinate change database according to the first anomaly analysis instruction;
step S1163: constructing a coordinate abnormal fluctuation curve based on time nodes according to the abnormal comparison analysis result;
step S1164: obtaining a coordinate abnormal fluctuation index changing along with a time node based on the coordinate abnormal fluctuation curve;
step S1165: acquiring a first preset coordinate abnormal fluctuation index threshold value;
step S1166: judging whether the coordinate abnormal fluctuation index meets the first preset coordinate abnormal fluctuation index threshold value or not;
step S1167: and when the coordinate abnormal fluctuation index meets the first preset coordinate abnormal fluctuation index threshold, the characteristic action change coordinate is not adjusted.
Specifically, the first image acquisition device and the second image acquisition device acquire images of all MARKER points in the first MARKER point distribution scheme in real time, and a database of MARKER point coordinate changes, as seen by the image acquisition devices, is constructed from the real-time image set. According to the first anomaly analysis instruction, the coordinate characteristics in this database are compared against the characteristic action change coordinate for anomalies. From the comparison result, a real-time coordinate abnormal fluctuation curve is constructed, i.e. a curve of each MARKER point's coordinate difference value over time, and a coordinate abnormal fluctuation index is obtained from the drawn curve; the index reflects well the size of the coordinate fluctuation at each time node. A first preset coordinate abnormal fluctuation index threshold is obtained based on big data, and the real-time coordinate abnormal fluctuation index is compared against it. When the index meets the first preset threshold, the difference between the coordinates acquired by the image acquisition devices and the characteristic action change coordinate is within an acceptable range; the characteristic action change coordinate is taken as the standard and is not adjusted.
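One plausible concrete form of the fluctuation index is the mean per-marker distance between the two coordinate sources at a time node, compared against the preset threshold. The patent gives no formula, so the definition below is an assumption made for illustration:

```python
import math

def fluctuation_index(db_coords: dict, action_coords: dict) -> float:
    """Coordinate abnormal fluctuation index at one time node
    (illustrative): mean Euclidean distance between the image-database
    coordinate and the characteristic action change coordinate of
    each MARKER point."""
    dists = [math.dist(db_coords[m], action_coords[m]) for m in db_coords]
    return sum(dists) / len(dists)

def needs_adjustment(index: float, threshold: float) -> bool:
    """The characteristic action change coordinate is left unadjusted
    while the index stays within the preset threshold."""
    return index > threshold
```

A single marker whose two coordinate sources disagree by a 3-4-0 offset yields an index of 5.0; with a threshold of 6.0 that node is within the acceptable range and no adjustment is triggered.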
Further, the step S1166 of determining whether the coordinate abnormal fluctuation index meets the first preset coordinate abnormal fluctuation index threshold further includes:
step S11661: when the coordinate fluctuation index does not meet the first preset coordinate abnormal fluctuation index threshold value, obtaining a third analysis instruction;
step S11662: performing coordinate analysis on the time node which does not meet the first preset coordinate abnormal fluctuation index threshold value according to the third analysis instruction to obtain a first analysis result;
step S11663: judging whether the real-time feedback result is interfered by an obstacle or not according to the first analysis result;
step S11664: and when the real-time feedback result is that the coordinate fluctuation index caused by the interference of the obstacle is abnormal, adjusting the characteristic action change coordinate according to the compensation coordinate to obtain a second characteristic action change coordinate.
Specifically, the coordinate information acquired by the image acquisition devices is less precise but relatively stable. When the coordinate fluctuation index does not meet the first preset coordinate abnormal fluctuation index threshold, a third analysis instruction is obtained. According to the third analysis instruction, coordinate analysis is performed on the coordinate points that do not meet the threshold to determine the cause of the coordinate abnormality, and the real-time coordinate position information is used to judge whether obstacle interference occurred while the coordinates were being obtained. When the real-time coordinate abnormality of the characteristic action change coordinates is judged to be caused by obstacle interference, the characteristic action change coordinates are adjusted in real time according to the compensation coordinates obtained by the image acquisition devices at that moment, yielding the second characteristic action change coordinates. Using the coordinates acquired by the image acquisition devices for real-time judgment and adjustment effectively guarantees the real-time accuracy of the characteristic action change coordinates, allows the action characteristics to be adjusted promptly, and achieves the technical effect of improving the accuracy of motion capture.
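A minimal sketch of the obstacle-compensation step above (all names hypothetical; the occlusion flags and compensation coordinates are assumed to come from the third-analysis and two-camera steps):

```python
def adjust_with_compensation(feature_coords, compensation_coords, occluded):
    """For each time node flagged as obstacle-interfered, replace the
    characteristic action change coordinate with the compensation
    coordinate derived from the two camera angles; other nodes are kept."""
    return [comp if blocked else feat
            for feat, comp, blocked in zip(feature_coords, compensation_coords, occluded)]

feature = [(0.0, 1.0), (9.9, 9.9), (0.2, 1.1)]       # node 1 corrupted by an obstacle
compensation = [(0.0, 1.0), (0.1, 1.0), (0.2, 1.1)]  # from the second camera angle
second = adjust_with_compensation(feature, compensation, occluded=[False, True, False])
print(second[1])  # (0.1, 1.0) -- the second characteristic action change coordinate
```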
Further, the obtaining of the first virtual character information of the first user and the performing of MARKER point quantity analysis based on the first virtual character information to obtain a first MARKER point quantity analysis result, step S300 of the embodiment of the present application, further includes:
step S310: obtaining the first virtual character information;
step S320: obtaining a first evaluation instruction, and performing expression richness evaluation on the first virtual character information based on the first evaluation instruction to obtain a first expression richness evaluation result;
step S330: carrying out quantity analysis on the MARKER point quantities based on the first expression richness evaluation result to obtain expression richness information under different MARKER point quantities;
step S340: obtaining a first MARKER point quantity analysis result, wherein the first MARKER point quantity analysis result comprises the different MARKER point quantities and the expression richness information list under each corresponding quantity.
Specifically, a first evaluation instruction is obtained from the virtual character information selected by the first user, and the expression richness is evaluated on that basis. Expression richness refers to the fineness of the expressions the first virtual character can convey: the more MARKER points there are and the denser their distribution, the more exquisite the expression simulation and the richer the expression. The richness of the expressions the first virtual character can convey is evaluated from the first virtual character information to obtain a first expression richness evaluation result, and from the MARKER point quantities corresponding to this result, the expression richness evaluation results for different MARKER point quantities, as well as for different distributions with the same MARKER point quantity, are obtained. From these results, a list of expression richness under different MARKER point quantities and an information list of expression richness under the same MARKER point quantity are obtained; these lists constitute the first MARKER point quantity analysis result.
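The relation "more and denser MARKER points → richer expression" could be tabulated as in this illustrative sketch; the diminishing-returns curve and the `saturation` parameter are assumptions, not the patent's method:

```python
import math

def richness_table(marker_counts, saturation=60.0):
    """Illustrative mapping from MARKER point quantity to an expression
    richness score in [0, 1): richness grows with the number of points
    but saturates, standing in for the patent's (unspecified) evaluation."""
    return [(n, round(1.0 - math.exp(-n / saturation), 3))
            for n in sorted(marker_counts)]

# First MARKER point quantity analysis result: quantities paired with richness
table = richness_table([20, 40, 60, 80])
print(table)
```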
Further, step S500 in the embodiment of the present application further includes:
step S510: acquiring performance fluctuation information of the first device according to the information of the first device;
step S520: obtaining first working environment information of the first device;
step S530: estimating the stable processing capability of the first device based on the first working environment information and the performance fluctuation information to obtain a first estimation result;
step S540: and performing quantity screening on the number of MARKER points based on the first estimation result to obtain a second MARKER point quantity analysis result.
Specifically, the live-broadcast device information of the first user is obtained, and the performance information of the first device is obtained from it. The performance of the first device is evaluated, and its performance fluctuation information is obtained from the evaluation result. The live-broadcast environment of the first user is obtained, and the working environment of the first device is evaluated on that basis. The stable processing capability of the first device is then estimated from the first working environment information and the performance fluctuation information to obtain a first estimation result. Based on the first estimation result, the MARKER point quantities are screened to retain those within the processing capability estimated for the first device; the screening result is the second MARKER point quantity analysis result.
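The derating-and-screening step can be sketched as follows, with assumed names and an assumed percentage-based derating rule:

```python
def stable_capacity(peak_markers, fluctuation_pct, environment_pct):
    """Conservative estimate of how many MARKER points the device can
    track steadily: derate peak capacity by the observed performance
    fluctuation and the working-environment score (both in percent).
    The multiplicative rule is an assumption for illustration."""
    return peak_markers * (100 - fluctuation_pct) * environment_pct // 10000

def screen_counts(candidates, capacity):
    """Second MARKER point quantity analysis result: the candidate
    quantities the device's stable processing capability can sustain."""
    return [n for n in candidates if n <= capacity]

cap = stable_capacity(peak_markers=100, fluctuation_pct=20, environment_pct=90)
print(cap, screen_counts([40, 60, 80, 100], cap))  # 72 [40, 60]
```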
Further, step S700 of the embodiment of the present application, inputting the second MARKER point quantity analysis result and the first analysis result into the sampling point intelligent distribution model to obtain the first MARKER point distribution scheme, further includes:
step S710: constructing an intelligent sampling point distribution model, wherein the intelligent sampling point distribution model is obtained by training multiple groups of training data, and each group of the multiple groups of training data comprises the second MARKER point quantity analysis result, the first analysis result and identification information for identifying a MARKER point distribution scheme;
step S720: obtaining a first output result of the sampling point intelligent distribution model, wherein the first output result comprises the first MARKER point distribution scheme.
Specifically, the sampling point intelligent distribution model is a neural network model in machine learning that can continuously learn and adjust; it is a highly complex nonlinear dynamic learning system. In brief, it is a mathematical model: after the sampling point intelligent distribution model has been trained to a convergence state on a large amount of training data, it can analyze the input data to obtain the MARKER point distribution scheme.
Furthermore, the training process includes a supervised learning process. Each group of supervised data includes the second MARKER point quantity analysis result, the first analysis result, and identification information identifying the MARKER point distribution scheme. The second MARKER point quantity analysis result and the first analysis result are input into the neural network model, and supervised learning is performed on the sampling point intelligent distribution model against the identification information, so that the model's output data is consistent with the supervised data. The neural network model continuously self-corrects and adjusts until its output is consistent with the identification information; supervised learning on that group of data then ends and the next group begins. When the neural network model reaches a convergence state, the supervised learning process ends. Through supervised learning, the model processes the input information more accurately and produces a more accurate and reasonable MARKER point distribution scheme.
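A toy stand-in for the supervised training loop described above — a minimal softmax classifier trained until its outputs agree with the identification information. The features, labels, learning rate, and network shape are all illustrative assumptions, not the patent's model:

```python
import numpy as np

# Each sample: (MARKER point quantity, action-support importance) -> scheme id
X = np.array([[30, 0.2], [30, 0.8], [80, 0.2], [80, 0.9]], dtype=float)
y = np.array([0, 1, 1, 2])  # identification info for three assumed schemes

Xn = (X - X.mean(axis=0)) / X.std(axis=0)  # normalize the two features
rng = np.random.default_rng(0)
W = rng.normal(scale=0.01, size=(2, 3))
b = np.zeros(3)

for _ in range(5000):  # self-correct until output matches the supervision
    logits = Xn @ W + b
    if (logits.argmax(axis=1) == y).all():
        break  # output consistent with the identification info: converged
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    grad = (p - np.eye(3)[y]) / len(y)   # softmax cross-entropy gradient
    W -= 0.5 * Xn.T @ grad
    b -= 0.5 * grad.sum(axis=0)

print((Xn @ W + b).argmax(axis=1).tolist())  # [0, 1, 1, 2]
```

The break condition mirrors the patent's stopping rule: each group of supervised data is trained until the model's output is consistent with the identification information.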
To sum up, the method and the system for capturing the action of the real-time virtual anchor of the live streaming provided by the embodiment of the application have the following technical effects:
1. Basic information of a first user is obtained, and first budget information of the first user is obtained from it. MARKER point quantity analysis is performed based on the first virtual character information selected by the first user to obtain a first MARKER point quantity analysis result. Information of the first device of the first user is obtained from the first budget information, and a first analysis instruction screens the MARKER point quantities based on the first device, selecting a MARKER point quantity adapted to the first user. A second analysis instruction is obtained, and the importance degree of action support for the first virtual character is analyzed according to it to obtain a first analysis result. The first analysis result and the screened MARKER point quantity are input into the sampling point intelligent distribution model to obtain a first MARKER point distribution scheme. A first execution instruction is obtained, and the first MARKER point distribution scheme is executed for the first user based on it. A real-time feedback result of the first MARKER point scheme is obtained through the first light receiving device, characteristic action change coordinates are constructed from the real-time feedback result, and real-time motion capture of the first user is realized through the characteristic action change coordinates. This achieves the purpose of intelligently performing adaptive motion capture design according to the user's information and requirements, guarantees the motion capture effect while controlling the user's cost, and achieves the technical effect of improving the user's live-broadcast experience.
2. The method for adjusting the characteristic action change coordinate based on the abnormal analysis result and the compensation coordinate is adopted to obtain a second characteristic action change coordinate, and the first user is captured in real time based on the second characteristic action change coordinate, so that the technical effect of more accurately capturing the action of the first user is achieved.
3. The method of real-time coordinate judgment and adjustment of the coordinates acquired by the image acquisition device is adopted, so that the real-time accuracy of the characteristic action change coordinates is effectively guaranteed, real-time and timely adjustment of the action characteristics can be performed, and the technical effect of improving the accuracy of action capture is achieved.
Example two
Based on the same inventive concept as the real-time virtual anchor motion capture method for the live stream in the foregoing embodiment, the present invention further provides a real-time virtual anchor motion capture system for the live stream, as shown in fig. 2, the system includes:
a first obtaining unit 11, where the first obtaining unit 11 is configured to obtain basic information of a first user;
a second obtaining unit 12, where the second obtaining unit 12 is configured to obtain first budget information according to the basic information, where the first budget information is a total live cost budget of the first user;
a third obtaining unit 13, where the third obtaining unit 13 is configured to obtain first virtual character information of the first user, and perform MARKER point number analysis based on the first virtual character information to obtain a first MARKER point number analysis result;
a fourth obtaining unit 14, where the fourth obtaining unit 14 is configured to obtain information of the first device of the first user according to the first budget information;
a fifth obtaining unit 15, where the fifth obtaining unit 15 is configured to obtain a first analysis instruction, and perform quantity screening on MARKER point quantities according to the first analysis instruction based on the information of the first device to obtain a second MARKER point quantity analysis result;
a sixth obtaining unit 16, where the sixth obtaining unit 16 is configured to obtain a second analysis instruction, and perform motion capture importance degree analysis on the first virtual character information according to the second analysis instruction to obtain a first analysis result;
the first input unit 17 is configured to input the second MARKER point quantity analysis result and the first analysis result into a sampling point intelligent distribution model to obtain a first MARKER point distribution scheme;
a seventh obtaining unit 18, said seventh obtaining unit 18 being configured to obtain a first execution instruction, according to which said first MARKER point distribution scheme is executed for said first user;
an eighth obtaining unit 19, where the eighth obtaining unit 19 is configured to obtain a real-time feedback result of the first MARKER point distribution scheme through a first light receiving device;
a first implementing unit 20, where the first implementing unit 20 is configured to construct a feature motion change coordinate based on the real-time feedback result, and implement real-time motion capture of the first user according to the feature motion change coordinate.
Further, the system further comprises:
a ninth obtaining unit for obtaining first MARKER point information by the first MARKER point distribution scheme;
a tenth obtaining unit, configured to obtain first image information through the first image acquisition device, where the first image information is image information that includes the first MARKER point information;
an eleventh obtaining unit, configured to obtain second image information by the second image capturing device, where the second image information is image information including the first MARKER point information, and capturing angles of the first image information and the second image information are different;
a twelfth obtaining unit, configured to obtain the compensation coordinates of the first MARKER point according to the first image information and the second image information;
a thirteenth obtaining unit, configured to obtain a first anomaly analysis instruction, perform anomaly analysis on the coordinate change of the first MARKER point in the feature action change coordinate according to the first anomaly analysis instruction, and obtain a first anomaly analysis result;
and the second implementation unit is used for adjusting the characteristic action change coordinate based on the first abnormal analysis result and the compensation coordinate to obtain a second characteristic action change coordinate, and implementing real-time action capture of the first user according to the second characteristic action change coordinate.
Further, the system further comprises:
the first construction unit is used for constructing a MARKER point coordinate change database according to the image sets acquired by the first image acquisition device and the second image acquisition device;
the first comparison unit is used for carrying out abnormal comparison analysis on the characteristic action change coordinate through the MARKER point coordinate database according to the first abnormal analysis instruction;
the second construction unit is used for constructing a coordinate abnormal fluctuation curve based on the time node according to the abnormal comparison analysis result;
a fourteenth obtaining unit configured to obtain a coordinate abnormal fluctuation index that varies with a time node based on the coordinate abnormal fluctuation curve;
a fifteenth obtaining unit, configured to obtain a first preset coordinate abnormal fluctuation index threshold;
the first judgment unit is used for judging whether the coordinate abnormal fluctuation index meets the first preset coordinate abnormal fluctuation index threshold value or not;
and the sixteenth obtaining unit is used for not adjusting the characteristic action change coordinate when the coordinate abnormal fluctuation index meets the first preset coordinate abnormal fluctuation index threshold value.
Further, the system further comprises:
a seventeenth obtaining unit, configured to obtain a third analysis instruction when the coordinate fluctuation index does not satisfy the first preset coordinate abnormal fluctuation index threshold;
an eighteenth obtaining unit, configured to perform coordinate analysis on the time node that does not meet the first preset coordinate abnormal fluctuation index threshold according to the third analysis instruction, and obtain a first analysis result;
the second judging unit is used for judging whether the real-time feedback result is interfered by an obstacle according to the first analysis result;
a nineteenth obtaining unit, configured to, when the real-time feedback result is that a coordinate fluctuation index caused by obstacle interference is abnormal, adjust the characteristic motion change coordinate according to the compensation coordinate, and obtain the second characteristic motion change coordinate.
Further, the system further comprises:
a twentieth obtaining unit configured to obtain the first virtual character information;
a twenty-first obtaining unit, configured to obtain a first evaluation instruction, perform expression richness evaluation on the first virtual character information based on the first evaluation instruction, and obtain a first expression richness evaluation result;
a twenty-second obtaining unit, configured to perform MARKER point quantity analysis based on the first expression richness evaluation result, and obtain expression richness information under different MARKER point quantities;
a twenty-third obtaining unit, configured to obtain the first MARKER point quantity analysis result, where the first MARKER point quantity analysis result includes the different MARKER point quantities and the expression richness information list under each corresponding quantity.
Further, the system further comprises:
a twenty-fourth obtaining unit configured to obtain performance fluctuation information of the first device according to the information of the first device;
a twenty-fifth obtaining unit, configured to obtain first working environment information of the first device;
a twenty-sixth obtaining unit, configured to estimate, based on the first operating environment information and the performance fluctuation information, a stable processing capability of the first device, and obtain a first estimation result;
a twenty-seventh obtaining unit, configured to perform quantity screening on the number of MARKER points based on the first estimation result, and obtain a second MARKER point quantity analysis result.
Further, the system further comprises:
the third construction unit is used for constructing an intelligent sampling point distribution model, wherein the intelligent sampling point distribution model is obtained by training multiple groups of training data, and each group of the multiple groups of training data comprises the second MARKER point quantity analysis result, the first analysis result and identification information for identifying a MARKER point distribution scheme;
a twenty-eighth obtaining unit, configured to obtain a first output result of the sampling point intelligent distribution model, where the first output result includes the first MARKER point distribution scheme.
Various modifications and specific examples of the real-time virtual anchor motion capture method for a live stream in the first embodiment of fig. 1 are also applicable to the real-time virtual anchor motion capture system for a live stream of the present embodiment. From the foregoing detailed description of the method, those skilled in the art can clearly know how the system of this embodiment is implemented, so for brevity of the description, details are not repeated here.
Exemplary electronic device
The electronic device of the embodiment of the present application is described below with reference to fig. 3.
Fig. 3 illustrates a schematic structural diagram of an electronic device according to an embodiment of the present application.
Based on the inventive concept of the real-time virtual anchor motion capture method for a live stream in the foregoing embodiments, the present invention further provides an electronic device on which a computer program is stored, which, when executed by a processor, implements the steps of any one of the foregoing real-time virtual anchor motion capture methods for a live stream.
In fig. 3, a bus architecture is represented by bus 300. Bus 300 may include any number of interconnected buses and bridges, linking together various circuits including one or more processors, represented by processor 302, and memory, represented by memory 304. The bus 300 may also link together various other circuits such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore are not described further herein. A bus interface 305 provides an interface between the bus 300 and the receiver 301 and transmitter 303. The receiver 301 and the transmitter 303 may be the same element, i.e., a transceiver, providing a means for communicating with various other systems over a transmission medium.
The processor 302 is responsible for managing the bus 300 and general processing, and the memory 304 may be used for storing data used by the processor 302 in performing operations.
The embodiment of the invention provides a real-time virtual anchor motion capture method for a live stream, which is applied to a motion capture analysis processing system in communication connection with a first light receiving device. The method comprises: acquiring basic information of a first user; obtaining first budget information according to the basic information, wherein the first budget information is a total live broadcast cost budget of the first user; obtaining first virtual character information of the first user, and performing MARKER point quantity analysis based on the first virtual character information to obtain a first MARKER point quantity analysis result; obtaining information of a first device of the first user according to the first budget information; obtaining a first analysis instruction, and performing quantity screening on the MARKER point quantities according to the first analysis instruction based on the information of the first device to obtain a second MARKER point quantity analysis result; obtaining a second analysis instruction, and performing motion capture importance degree analysis on the first virtual character information according to the second analysis instruction to obtain a first analysis result; inputting the second MARKER point quantity analysis result and the first analysis result into a sampling point intelligent distribution model to obtain a first MARKER point distribution scheme; obtaining a first execution instruction, and executing the first MARKER point distribution scheme on the first user according to the first execution instruction; obtaining a real-time feedback result of the first MARKER point distribution scheme through the first light receiving device; and constructing characteristic action change coordinates based on the real-time feedback result, and realizing real-time motion capture of the first user according to the characteristic action change coordinates. This solves the technical problem in the prior art that, during real-time virtual motion capture, the motion capture design cannot be carried out intelligently according to the user's actual requirements, so the motion capture effect differs from the user's ideal effect. It achieves intelligent, adaptive motion capture design according to the user's information and requirements, guarantees the motion capture effect while controlling the user's cost, and attains the technical effect of improving the user's live broadcast experience.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create a system for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction system which implements the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.

While preferred embodiments of the present invention have been described, additional variations and modifications to those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (9)
1. A real-time virtual anchor motion capture method for live streaming, wherein the method is applied to a motion capture analysis processing system, the system is connected with a first light receiving device in communication, and the method comprises:
acquiring basic information of a first user;
obtaining first budget information according to the basic information, wherein the first budget information is a total live broadcast cost budget of the first user;
obtaining first virtual character information of the first user, and performing MARKER point quantity analysis based on the first virtual character information to obtain a first MARKER point quantity analysis result;
obtaining information of a first device of the first user according to the first budget information;
obtaining a first analysis instruction, and performing quantity screening on the MARKER point quantities according to the first analysis instruction based on the information of the first device to obtain a second MARKER point quantity analysis result;
obtaining a second analysis instruction, and performing motion capture importance degree analysis on the first virtual character information according to the second analysis instruction to obtain a first analysis result;
inputting the second MARKER point quantity analysis result and the first analysis result into a sampling point intelligent distribution model to obtain a first MARKER point distribution scheme;
obtaining a first execution instruction, and executing the first MARKER point distribution scheme on the first user according to the first execution instruction;
obtaining a real-time feedback result of the first MARKER point distribution scheme through the first light receiving device;
and constructing a characteristic action change coordinate based on the real-time feedback result, and realizing real-time action capture of the first user according to the characteristic action change coordinate.
2. The method of claim 1, wherein the system is communicatively coupled to a first image acquisition device, a second image acquisition device, the method further comprising:
obtaining first MARKER point information through the first MARKER point distribution scheme;
acquiring first image information through the first image acquisition device, wherein the first image information is image information containing the first MARKER point information;
acquiring second image information through the second image acquisition device, wherein the second image information is image information containing the first MARKER point information, and the acquisition angles of the first image information and the second image information are different;
obtaining the compensation coordinate of the first MARKER point according to the first image information and the second image information;
obtaining a first anomaly analysis instruction, and performing anomaly analysis on the coordinate change of a first MARKER point in the characteristic action change coordinate according to the first anomaly analysis instruction to obtain a first anomaly analysis result;
and adjusting the characteristic action change coordinate based on the first anomaly analysis result and the compensation coordinate to obtain a second characteristic action change coordinate, and realizing real-time motion capture of the first user according to the second characteristic action change coordinate.
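Claim 2's compensation coordinate comes from two image acquisition devices observing the MARKER point from different angles. The minimal sketch below assumes each camera has already been resolved to its own 3D estimate of the marker and fuses the two by averaging, falling back to a single view when the other is occluded; the fusion rule is an assumption, not the patent's method.

```python
def compensation_coordinate(view_a, view_b):
    """Fuse two per-camera 3D estimates of the same MARKER point.

    When both cameras see the marker, return the midpoint of the two
    estimates; when only one sees it, fall back to that single view.
    Returns None when the marker is lost in both views.
    """
    if view_a is not None and view_b is not None:
        return tuple((a + b) / 2.0 for a, b in zip(view_a, view_b))
    return view_a if view_a is not None else view_b

# Both cameras see the marker: midpoint of the two estimates.
fused = compensation_coordinate((1.0, 2.0, 3.0), (1.2, 2.0, 2.8))
# Camera A occluded: camera B's estimate stands in as the compensation.
occluded = compensation_coordinate(None, (1.1, 2.0, 2.9))
```

In a real pipeline the per-camera estimates would come from calibrated triangulation; averaging is the simplest stand-in that still shows why two angles let one view compensate for the other.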
3. The method of claim 2, wherein the method further comprises:
constructing a MARKER point coordinate change database according to the image sets acquired by the first image acquisition device and the second image acquisition device;
performing anomaly comparison analysis on the characteristic action change coordinate through the MARKER point coordinate change database according to the first anomaly analysis instruction;
constructing a coordinate abnormal fluctuation curve based on time nodes according to the anomaly comparison analysis result;
obtaining a coordinate abnormal fluctuation index changing along with a time node based on the coordinate abnormal fluctuation curve;
acquiring a first preset coordinate abnormal fluctuation index threshold value;
judging whether the coordinate abnormal fluctuation index meets the first preset coordinate abnormal fluctuation index threshold value or not;
and when the coordinate abnormal fluctuation index meets the first preset coordinate abnormal fluctuation index threshold, the characteristic action change coordinate is not adjusted.
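Claim 3's coordinate abnormal fluctuation index can be illustrated as a per-time-node jump magnitude compared against the preset threshold: nodes within the threshold leave the characteristic action change coordinate unadjusted. The index definition and threshold value below are assumptions for illustration.

```python
import math

def fluctuation_index(series):
    """Coordinate abnormal fluctuation index per time node, modelled
    here as the jump magnitude of one marker coordinate between
    consecutive time nodes."""
    return [math.dist(p, q) for p, q in zip(series, series[1:])]

# Assumed value of the claim's "first preset coordinate abnormal
# fluctuation index threshold".
THRESHOLD = 0.5

# One marker's coordinate at four consecutive time nodes; the third
# node contains an implausible jump.
series = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (2.0, 0.0, 0.0), (2.1, 0.0, 0.0)]
index = fluctuation_index(series)
# Time nodes whose index exceeds the threshold need further analysis;
# the rest leave the characteristic action change coordinate untouched.
suspect_nodes = [t for t, v in enumerate(index, start=1) if v > THRESHOLD]
```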
4. The method of claim 3, wherein the determining whether the coordinate anomaly fluctuation index meets the first preset coordinate anomaly fluctuation index threshold value further comprises:
when the coordinate abnormal fluctuation index does not meet the first preset coordinate abnormal fluctuation index threshold value, obtaining a third analysis instruction;
performing coordinate analysis on the time node which does not meet the first preset coordinate abnormal fluctuation index threshold value according to the third analysis instruction to obtain a third analysis result;
judging whether the real-time feedback result is interfered with by an obstacle according to the third analysis result;
and when the abnormal coordinate fluctuation index is caused by obstacle interference with the real-time feedback result, adjusting the characteristic action change coordinate according to the compensation coordinate to obtain a second characteristic action change coordinate.
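Claim 4 attributes an abnormal fluctuation to obstacle interference and then substitutes the compensation coordinate. The sketch below uses a hypothetical heuristic as the interference test: a marker whose jump dwarfs its neighbours' jumps is treated as blocked rather than genuinely fast-moving. Both the heuristic and its ratio are assumptions, not the patent's rule.

```python
def obstacle_interference(jump, neighbour_jumps, ratio=10.0):
    """Heuristic occlusion test (an assumption): an isolated jump far
    larger than the surrounding markers' jumps suggests the light
    receiver's feedback for this marker was blocked by an obstacle."""
    typical = max(neighbour_jumps) if neighbour_jumps else 0.0
    return jump > ratio * max(typical, 1e-6)

def adjust(raw_coord, comp_coord, interfered):
    """Replace the raw coordinate with the two-camera compensation
    coordinate when the anomaly is attributed to obstacle interference;
    otherwise keep the raw coordinate."""
    return comp_coord if interfered else raw_coord

# One marker jumps 2.0 units while its neighbours barely move.
blocked = obstacle_interference(2.0, [0.05, 0.08, 0.06])
second = adjust((9.9, 0.0, 0.0), (1.0, 0.5, 0.2), blocked)
```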
5. The method of claim 1, wherein the obtaining first virtual character information of the first user, performing MARKER point quantity analysis based on the first virtual character information, and obtaining a first MARKER point quantity analysis result further comprises:
obtaining the first virtual character information;
obtaining a first evaluation instruction, and performing expression richness evaluation on the first virtual character information based on the first evaluation instruction to obtain a first expression richness evaluation result;
carrying out supported MARKER point quantity analysis based on the first expression richness evaluation result to obtain expression richness information under different MARKER point quantities;
and obtaining a first MARKER point quantity analysis result, wherein the first MARKER point quantity analysis result comprises a list of different MARKER point quantities and the expression richness information under each quantity.
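Claim 5's output is a list pairing each candidate MARKER point quantity with the expression richness it supports. A toy illustration, where the richness function (diminishing returns per added marker) is a stand-in assumption:

```python
def richness_by_marker_count(counts, richness_fn):
    """Build the claim's list pairing each candidate MARKER point
    quantity with the expression richness it supports.  richness_fn is
    assumed to come from the expression richness evaluation of the
    first virtual character."""
    return [(n, richness_fn(n)) for n in sorted(counts)]

# Hypothetical richness model: diminishing returns as markers are added.
table = richness_by_marker_count([8, 16, 32], lambda n: 1 - 2 ** (-n / 8))
```

Downstream, this table is exactly what the sampling point intelligent distribution model consumes to trade marker count against expressive fidelity.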
6. The method of claim 1, wherein the method further comprises:
obtaining performance fluctuation information of the first device according to the information of the first device;
obtaining first working environment information of the first device;
estimating the stable processing capacity of the first device based on the first working environment information and the performance fluctuation information to obtain a first estimation result;
and performing quantity screening on MARKER point quantities based on the first estimation result to obtain a second MARKER point quantity analysis result.
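Claim 6 screens MARKER point quantities against the first device's stable processing capacity. The discount formula and all names below are illustrative assumptions: peak capacity is reduced by the measured performance fluctuation and a working-environment factor, then infeasible quantities are dropped.

```python
def stable_capacity(peak_capacity, fluctuation_ratio, env_factor):
    """Estimate the sustainable marker load: discount the device's peak
    tracking capacity by its performance fluctuation and a working
    environment factor (formula is an assumption for illustration)."""
    return int(peak_capacity * (1 - fluctuation_ratio) * env_factor)

def screen_marker_counts(candidates, capacity):
    """Quantity screening: keep only the MARKER point quantities the
    first device can track without dropping frames."""
    return [n for n in candidates if n <= capacity]

# Device peaks at 64 markers, fluctuates 25%, hot studio costs ~10%.
cap = stable_capacity(64, 0.25, 0.9)
second_result = screen_marker_counts([8, 16, 32, 64], cap)
```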
7. The method of claim 1, wherein the inputting the second MARKER point quantity analysis result and the first analysis result into a sampling point intelligent distribution model to obtain a first MARKER point distribution scheme further comprises:
constructing a sampling point intelligent distribution model, wherein the sampling point intelligent distribution model is obtained by training multiple groups of training data, and each group of the multiple groups of training data comprises the second MARKER point quantity analysis result, the first analysis result and identification information identifying a MARKER point distribution scheme;
obtaining a first output result of the sampling point intelligent distribution model, wherein the first output result comprises the first MARKER point distribution scheme.
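Claim 7 trains the sampling point intelligent distribution model on grouped records of (quantity analysis result, importance analysis result, scheme identifier). As a minimal stand-in, the sketch below memorizes the training groups and returns the scheme of the nearest stored record; the records, the distance rule, and the scheme identifiers are all assumptions, not the patent's model.

```python
def train(records):
    """records: list of ((marker_count, importance_score), scheme_id)
    tuples, standing in for the claim's groups of training data."""
    return list(records)

def predict(model, count, importance):
    """Return the distribution scheme of the closest training record
    (L1 distance over the two features, an illustrative choice)."""
    def dist(rec):
        (c, imp), _ = rec
        return abs(c - count) + abs(imp - importance)
    return min(model, key=dist)[1]

# Hypothetical training groups and scheme identifiers.
model = train([((16, 0.2), "scheme_sparse"),
               ((32, 0.6), "scheme_balanced"),
               ((64, 0.9), "scheme_dense")])
scheme = predict(model, 30, 0.55)
```

A production model would be a learned classifier over many such groups; nearest-record lookup is the smallest structure that still maps the two analysis results to a first output scheme.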
8. A real-time virtual anchor motion capture system for live streaming, wherein the system comprises:
a first obtaining unit, configured to obtain basic information of a first user;
a second obtaining unit, configured to obtain first budget information according to the basic information, where the first budget information is a total live cost budget of the first user;
a third obtaining unit, configured to obtain first virtual character information of the first user, perform MARKER point quantity analysis based on the first virtual character information, and obtain a first MARKER point quantity analysis result;
a fourth obtaining unit, configured to obtain information of the first device of the first user according to the first budget information;
a fifth obtaining unit, configured to obtain a first analysis instruction, perform quantity screening on MARKER point quantities according to the first analysis instruction based on the information of the first device, and obtain a second MARKER point quantity analysis result;
a sixth obtaining unit, configured to obtain a second analysis instruction, perform motion capture importance degree analysis on the first virtual character information according to the second analysis instruction, and obtain a first analysis result;
a first input unit, configured to input the second MARKER point quantity analysis result and the first analysis result into a sampling point intelligent distribution model to obtain a first MARKER point distribution scheme;
a seventh obtaining unit, configured to obtain a first execution instruction, and execute the first MARKER point distribution scheme on the first user according to the first execution instruction;
an eighth obtaining unit, configured to obtain a real-time feedback result of the first MARKER point distribution scheme through a first light receiving device;
and a first realizing unit, configured to construct a characteristic action change coordinate based on the real-time feedback result and realize real-time motion capture of the first user according to the characteristic action change coordinate.
9. A real-time virtual anchor motion capture system for live streaming, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the method of any one of claims 1-7 when executing the program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110587213.9A CN113505637A (en) | 2021-05-27 | 2021-05-27 | Real-time virtual anchor motion capture method and system for live streaming |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113505637A true CN113505637A (en) | 2021-10-15 |
Family
ID=78009238
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114972431A (en) * | 2022-05-27 | 2022-08-30 | 深圳市瑞立视多媒体科技有限公司 | Method, device and related equipment for matching rigid body mark points based on template graph |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104270716A (en) * | 2014-10-11 | 2015-01-07 | 曾毅峰 | Low-cost real-time motion capturing system based on WSN and acoustic positioning |
US20150310656A1 (en) * | 2012-11-22 | 2015-10-29 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Device, method and computer program for reconstructing a motion of an object |
CN105338369A (en) * | 2015-10-28 | 2016-02-17 | 北京七维视觉科技有限公司 | Method and apparatus for synthetizing animations in videos in real time |
US20170086712A1 (en) * | 2014-03-20 | 2017-03-30 | Telecom Italia S.P.A. | System and Method for Motion Capture |
CN107438183A (en) * | 2017-07-26 | 2017-12-05 | 北京暴风魔镜科技有限公司 | A kind of virtual portrait live broadcasting method, apparatus and system |
CN108830861A (en) * | 2018-05-28 | 2018-11-16 | 上海大学 | A kind of hybrid optical motion capture method and system |
CN109325450A (en) * | 2018-09-25 | 2019-02-12 | Oppo广东移动通信有限公司 | Image processing method, device, storage medium and electronic equipment |
CN110246583A (en) * | 2019-05-26 | 2019-09-17 | 江西中医药大学 | A kind of cerebral apoplexy cognitive training system and its operating method based on virtual reality technology |
CN111158482A (en) * | 2019-12-30 | 2020-05-15 | 华中科技大学鄂州工业技术研究院 | Human body motion posture capturing method and system |
CN112001394A (en) * | 2020-07-13 | 2020-11-27 | 上海翎腾智能科技有限公司 | Dictation interaction method, system and device based on AI vision |
CN112837339A (en) * | 2021-01-21 | 2021-05-25 | 北京航空航天大学 | Track drawing method and device based on motion capture technology |
Non-Patent Citations (5)
Title |
---|
CHRISTIAN OTT et al.: "Motion capture based human motion recognition and imitation by direct marker control", Humanoids 2008 - 8th IEEE-RAS International Conference on Humanoid Robots, pages 399-405 * |
TAMI GRIFFITH et al.: "Real-Time Motion Capture on a Budget", VAMR 2018: Virtual, Augmented and Mixed Reality: Interaction, Navigation, Visualization, Embodiment, and Simulation, vol. 10909, pages 56 * |
WANG JIANPING et al.: "Measurement and analysis of human knee joint motion capture based on MATLAB", Journal of Henan Polytechnic University (Natural Science), vol. 39, no. 03, pages 86-93 * |
BAI XUEFEI: "Influence of position errors of key marker points in the Helen Hayes marker set on lower-limb joint angles", Abstracts of the 12th National Conference on Biomechanics and the 14th National Conference on Biorheology, page 261 * |
ZHU KEJIAN: "Research on 3D facial expression synthesis based on motion capture", China Masters' Theses Full-text Database (Information Science and Technology), no. 01, pages 138-1319 * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| AD01 | Patent right deemed abandoned | Effective date of abandoning: 20241008 |