
CN111428665B - Information determination method, equipment and computer readable storage medium - Google Patents

Information determination method, equipment and computer readable storage medium

Info

Publication number
CN111428665B
Authority
CN
China
Prior art keywords
gesture
key point
special effect
key
matching degree
Prior art date
Legal status
Active
Application number
CN202010241288.7A
Other languages
Chinese (zh)
Other versions
CN111428665A (en)
Inventor
李立锋
白保军
颜忠伟
王科
张健
Current Assignee
MIGU Video Technology Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
MIGU Video Technology Co Ltd
MIGU Culture Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by MIGU Video Technology Co Ltd and MIGU Culture Technology Co Ltd
Priority to CN202010241288.7A
Publication of CN111428665A
Application granted
Publication of CN111428665B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G06V40/23: Recognition of whole body movements, e.g. for sport training
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/757: Matching configurations of points or features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Artificial Intelligence (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an information determination method, an information determination device, and a computer readable storage medium, relating to the technical field of video processing and intended to accurately reflect the special effect of a user's gesture. The method comprises the following steps: acquiring an image of a target object; extracting information of the gesture of the target object from the image; determining, according to the gesture information, a matching degree between the gesture of the target object and a preset gesture; and determining, according to the matching degree, the special effect intensity of the special effect of the gesture. Embodiments of the invention can accurately reflect the special effect of the user's gesture.

Description

Information determination method, equipment and computer readable storage medium
Technical Field
The present invention relates to the field of video processing technologies, and in particular, to an information determining method, an information determining device, and a computer readable storage medium.
Background
A user may imitate a pose (e.g., a gesture or an action) shown in a video, and a special effect is then presented based on the user's imitation. In the prior art, however, every such imitation corresponds to the same special effect, regardless of how closely it matches. The prior-art approach therefore cannot accurately reflect the special effect of the user's gesture.
Disclosure of Invention
Embodiments of the invention provide an information determination method, an information determination device, and a computer readable storage medium, so that the special effect of a user's gesture can be accurately reflected.
In a first aspect, an embodiment of the present invention provides an information determining method, including:
acquiring an image of a target object;
extracting information of the gesture of the target object from the image;
determining the matching degree between the gesture of the target object and a preset gesture according to the gesture information;
and determining the special effect intensity of the special effect of the gesture according to the matching degree.
Wherein the extracting the information of the pose of the target object from the image includes:
extracting first gesture key points from the image, wherein the number of the first gesture key points is at least three;
calculating an included angle between a first connecting line and a second connecting line for a first key point, a second key point and a third key point in the first gesture key points;
the first key point, the second key point and the third key point are any three sequentially adjacent key points among the first gesture key points; the second key point is located between the first key point and the third key point;
the first connecting line is a connecting line between the second key point and the first key point, and the second connecting line is a connecting line between the second key point and the third key point.
The determining the matching degree between the gesture of the target object and the preset gesture according to the gesture information comprises the following steps:
determining a second gesture key point of the preset gesture;
for a first included angle in the included angles, calculating a second matching degree between the first included angle and a second included angle in the preset gesture;
normalizing the obtained at least one second matching degree to obtain the matching degree between the gesture of the target object and the preset gesture;
the second included angle is an included angle between a third connecting line and a fourth connecting line, the third connecting line is a connecting line between a fourth key point and a fifth key point, the fourth connecting line is a connecting line between the fifth key point and a sixth key point, and the fourth key point, the fifth key point and the sixth key point are three key points which are sequentially adjacent in the second gesture key point and respectively correspond to the three key points forming the first included angle in the first gesture key point;
the fifth keypoint is located between the fourth keypoint and the sixth keypoint.
The determining the matching degree between the gesture of the target object and the preset gesture according to the gesture information comprises the following steps:
determining a second gesture key point of the preset gesture;
adjusting the size of the target object in the image of the target object to be consistent with the size of the preset gesture;
for a seventh key point in the first gesture key points, determining a corresponding eighth key point in the second gesture key points, and adjusting the gesture of the target object so that the seventh key point and the eighth key point are overlapped;
for a ninth key point in the first gesture key points, determining a corresponding tenth key point in the second gesture key points, and adjusting the gesture of the target object so that the distance between the ninth key point and the tenth key point is minimum;
respectively calculating the distance between a first target key point in the first gesture key points and a second target key point in the preset gesture; the first target key point is any key point in the first gesture key points, and the second target key point is any key point in the key points of the preset gesture and corresponds to the first target key point;
and normalizing the obtained at least one distance to obtain the matching degree between the gesture of the target object and the preset gesture.
Wherein, the determining the special effect intensity of the special effect of the gesture according to the matching degree includes:
for a target special effect parameter corresponding to the special effect, using the sum of the minimum parameter value corresponding to the target special effect parameter and a first value as the special effect intensity of the target special effect parameter;
the first value is the product of the difference between the maximum parameter value and the minimum parameter value corresponding to the target special effect parameter and the matching degree.
Wherein the information of the gesture comprises information of at least one sub-gesture that is continuous in time;
the determining the matching degree between the gesture of the target object and the preset gesture comprises the following steps:
according to the information of the at least one sub-gesture, the matching degree between the at least one sub-gesture and the preset gesture is respectively determined, and at least one matching degree is obtained;
and processing the at least one matching degree by using a dynamic time warping algorithm DTW, and taking a processing result as the matching degree between the gesture of the target object and a preset gesture.
Wherein the method further comprises:
and normalizing the special effect parameters of the special effect.
Wherein after the special effect intensity of the special effect is determined according to the matching degree, the method further comprises:
displaying the special effect with the special effect intensity.
In a second aspect, an embodiment of the present invention further provides an information determining apparatus, including: the information determining device comprises a memory, a processor and a program stored on the memory and capable of running on the processor, wherein the processor realizes the steps in the information determining method when executing the program.
In a third aspect, embodiments of the present invention also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps in the information determination method as described above.
In the embodiment of the invention, the gesture of the target object is matched against the preset gesture to obtain a matching degree, and the special effect intensity of the special effect corresponding to the gesture is then determined from that matching degree. Different matching conditions between the gesture of the target object and the preset gesture therefore yield different special effect intensities, so the scheme of the embodiment of the invention can accurately reflect the special effect of the user's gesture.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed in the description of the embodiments of the present invention will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort to a person of ordinary skill in the art.
FIG. 1 is a flow chart of an information determination method provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of key points of a human body according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a key point of a human body according to an embodiment of the present invention;
fig. 4 is a block diagram of an information determining apparatus provided by an embodiment of the present invention;
fig. 5 is a block diagram of an information determining apparatus provided in an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, fig. 1 is a flowchart of an information determining method provided in an embodiment of the present invention, as shown in fig. 1, including the following steps:
step 101, acquiring an image of a target object.
The target object may be a human, or may be another object, such as an animal. In the embodiment of the invention, the image of the target object, such as a 2D image or a 3D image, can be shot through the camera.
And 102, extracting the information of the gesture of the target object from the image.
In the image of the target object, taking a person as the target object for example, the person may perform a certain action and thereby present different postures. The information of the gesture can be embodied by the gesture key points and the included angles those key points form.
In practical application, the gesture key points of the target object in the image can be detected by a human skeleton key point detection algorithm, according to information such as the type of action. Typically, the gesture key points are points on joints, such as the wrist, elbow, and shoulder joints. There may be one or more key points on each joint.
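The embodiment does not prescribe a particular detection algorithm. As a minimal sketch, assuming the widely used MediaPipe Pose solution as a stand-in skeleton key point detector (the file name frame.jpg is a placeholder), the key points could be obtained as follows:

```python
# Illustrative sketch only: the patent does not name a detector, and
# MediaPipe Pose is assumed here as a stand-in for the "human skeleton
# key point detection algorithm". "frame.jpg" is a placeholder path.
import cv2
import mediapipe as mp

detector = mp.solutions.pose.Pose(static_image_mode=True)
image = cv2.imread("frame.jpg")
results = detector.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

keypoints = []
if results.pose_landmarks:
    # Each landmark provides normalized x, y coordinates plus a
    # relative depth z, i.e. one 3D gesture key point per joint.
    keypoints = [(lm.x, lm.y, lm.z) for lm in results.pose_landmarks.landmark]
```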
Specifically, in this step, first gesture key points are extracted from the image, wherein the number of the first gesture key points is at least three. Wherein the first gesture keypoints may be located on different joints.
Then, the included angle formed by any three sequentially adjacent key points is calculated. Here, "sequentially adjacent" means that the three key points form an arrangement in which their order is relatively fixed. For example, on the human body, key points A, B, and C lie on the shoulder joint, elbow joint, and wrist joint respectively, in order from head to foot. Since there is a definite relative positional relationship among the shoulder, elbow, and wrist joints, the three points A, B, and C can be considered sequentially adjacent.
Specifically, when calculating the included angle, for a first key point, a second key point and a third key point among the first gesture key points, the included angle between a first connecting line and a second connecting line is calculated. The first key point, the second key point and the third key point are any three sequentially adjacent key points among the first gesture key points; the first connecting line connects the second key point to the first key point, and the second connecting line connects the second key point to the third key point. The second key point is located between the first key point and the third key point; that is, the position of the second key point lies between the positions of the first and third key points.
As shown in fig. 2, the shoulder joint, elbow joint, and wrist joint carry key points J(Jx, Jy, Jz), E(Ex, Ey, Ez), and W(Wx, Wy, Wz), respectively. The points J, E, W correspond to the first, second, and third key points respectively (or W, E, J do, in the reverse order). The line EJ may be taken as the first connecting line and the line EW as the second connecting line, or vice versa.
Then, the included angle at these three key points is calculated as follows:

$$\cos\theta = \frac{\overrightarrow{EJ}\cdot\overrightarrow{EW}}{\left|\overrightarrow{EJ}\right|\,\left|\overrightarrow{EW}\right|}$$

where $\overrightarrow{EJ}$ denotes the vector along the connection between key point E and key point J, $\overrightarrow{EW}$ denotes the vector along the connection between key point E and key point W, and $\cos\theta$ is the cosine of the angle between the two vectors. The included angle between the two connecting lines EJ and EW is obtained from this cosine value.
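A minimal sketch of this computation in Python (the key point coordinates in the example are arbitrary illustrative values):

```python
# Sketch of the included-angle formula above: the angle at E between
# the connecting lines EJ and EW, for 3D key points J, E, W.
import numpy as np

def included_angle(j, e, w):
    v1 = np.asarray(j, float) - np.asarray(e, float)  # vector E -> J
    v2 = np.asarray(w, float) - np.asarray(e, float)  # vector E -> W
    cos_theta = v1.dot(v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # Clipping guards against floating-point drift outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Example with illustrative shoulder (J), elbow (E), and wrist (W) points:
theta = included_angle((0.10, 0.90, 0.0), (0.20, 0.60, 0.0), (0.40, 0.55, 0.0))
```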
In this step, the included angle formed by any three key points adjacent in sequence can be calculated.
And step 103, determining the matching degree between the gesture of the target object and a preset gesture according to the gesture information.
Since the magnitude of the angle may represent the magnitude of the motion amplitude, the degree of match between two motions or poses may be determined based on the match between the angles. If the matching degree meets a preset requirement, for example, if the matching degree is greater than a certain preset value, the special effect corresponding to the gesture of the target object can be triggered.
Wherein, the preset gestures can be regarded as the poses of certain predefined standard actions. Standard actions may include human body limb movements, finger movements, facial movements, and the like. Which standard action triggers a given type of special effect can be set according to the special effect type; for example, a batting sound effect corresponds to a batting action, so that if a batting action is detected, the batting sound effect may be triggered. The images of these preset gestures may be stored in advance, together with the key point information of each gesture and the included angles formed by those key points.
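A hedged sketch of how such preset gestures might be stored is shown below; every field name is hypothetical, since the text only requires that key points, included angles, and the triggered effect be retrievable per standard action:

```python
# Hypothetical layout for the pre-stored preset gestures; the field
# names are assumptions, not the patent's data model.
PRESET_GESTURES = {
    "baseball_swing": {
        "keypoints": [(0.2, 0.6, 0.0), (0.4, 0.55, 0.0)],  # second gesture key points (truncated)
        "angles": {("shoulder", "elbow", "wrist"): 145.0},  # precomputed included angles
        "angle_tolerance_deg": 20.0,                        # per-angle error range
        "effect": "bat_crack_sound",                        # special effect it triggers
    },
}
```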
After the image of the target object is acquired, image recognition can be performed according to information such as application scenes of the target object, so that an image which can be used for matching is found out from the stored images. For example, for a user image obtained in a baseball game, an image for matching may be selected from a library of pre-stored images related to baseball actions.
In the embodiment of the invention, the matching degree between the gesture of the target object and the preset gesture can be determined in at least two ways.
In one form, the method may comprise the steps of:
step 1031a, determining a second gesture key point of the preset gesture.
As shown in fig. 3, skeletal points of a human body are classified into three categories: points 303, 305, 309, 411, 413, etc. on the joints on the left side of the human body, points 304, 306, 410, 412, 414, etc. on the joints on the right side of the human body, and key points 301, 302 on the head. Typically, the key points refer to points on the left and right joints of the human body. Here, according to the aforementioned human skeleton key point detection algorithm, the second gesture key points in the preset gesture may be determined, where the number of the second gesture key points is at least three.
The second gesture keypoints may be pre-labeled.
Step 1031b, for a first included angle of the included angles, calculating a second matching degree between the first included angle and a second included angle of the preset gesture.
The second included angle is an included angle between a third connecting line and a fourth connecting line, the third connecting line is a connecting line between a fourth key point and a fifth key point, the fourth connecting line is a connecting line between the fifth key point and a sixth key point, and the fourth key point, the fifth key point and the sixth key point are three key points which are sequentially adjacent in the second gesture key point and respectively correspond to the three key points forming the first included angle in the first gesture key point;
the fifth keypoint is located between the fourth keypoint and the sixth keypoint.
If the first included angle is determined based on a first key point, a second key point and a third key point in the first gesture key points, a fourth key point, a fifth key point and a sixth key point in the second gesture key points for calculating the second included angle are key points corresponding to the first key point, the second key point and the third key point respectively.
For example, if the first, second, and third key points are the left shoulder, left elbow, and left wrist, in that order, then the fourth, fifth, and sixth key points are likewise the left shoulder, left elbow, and left wrist, in that order.
In the embodiment of the invention, the included angle formed by three sequentially adjacent key points can be calculated with the angle formula above, traversing the key points in order from head to foot or from foot to head. Alternatively, the included angles of the preset gesture may be calculated in advance, in which case the precomputed second included angle is simply retrieved.
In this way, the degree of matching between the first included angle and the second included angle can be represented by the difference between the two angles: the smaller the absolute value of the difference, the closer the two angles.
Step 1031c, normalizing the obtained at least one second matching degree to obtain the matching degree between the gesture of the target object and the preset gesture.
In the embodiment of the invention, an error range is set for each included angle of the preset gesture. If the matching degree between the first included angle and the second included angle falls within the corresponding error range, the two angles are considered matched; otherwise, they are considered unmatched.
In this step, the obtained at least one second matching degree is normalized to obtain the matching degree between the gesture of the target object and the preset gesture. The matching degree is a number greater than or equal to 0 and less than or equal to 1.
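As a minimal sketch of this route, assuming the per-angle matching degree is reduced to whether the angle difference falls inside its error range, and the normalization is the matched fraction (the text leaves the exact normalization open):

```python
# Sketch of steps 1031a-1031c under stated assumptions: a pair of angles
# "matches" when their difference lies within the preset error range, and
# the normalized matching degree is the matched fraction in [0, 1].
def angle_matching_degree(first_angles, second_angles, tolerance_deg):
    """first_angles: included angles of the target object's gesture;
    second_angles: corresponding included angles of the preset gesture."""
    matched = sum(
        1 for a1, a2 in zip(first_angles, second_angles)
        if abs(a1 - a2) <= tolerance_deg
    )
    return matched / len(second_angles)
```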
In another way, the following steps may be included:
step 1032a, determining a second gesture key point of the preset gesture.
The description of this step may be referred to the description of step 1031a previously described.
Step 1032b, adjusting the size of the target object in the image of the target object to be consistent with the size of the preset gesture.
Here, the size of the target object and the size of the preset gesture may be normalized, so that the size of the target object in the image of the target object is adjusted to be consistent with the size of the preset gesture.
Step 1032c, for a seventh key point of the first gesture key points, determining a corresponding eighth key point of the second gesture key points, and adjusting the gesture of the target object so that the seventh key point and the eighth key point overlap.
Wherein the seventh key point may be any one key point. Typically, the seventh key point may be a key point on the leg, or a first key point in a direction from the foot toward the head.
In this step, the pose (e.g., 3D pose) of the target object is adjusted centering on the seventh key point such that the seventh key point and the eighth key point coincide. The eighth key point is a key point located at the same position as the seventh key point on the preset gesture.
Step 1032d, for a ninth key point of the first gesture key points, determining a corresponding tenth key point of the second gesture key points, and adjusting the gesture of the target object so that a distance between the ninth key point and the tenth key point is the smallest.
The ninth key point may be a key point located on a human body part above a part where the seventh key point is located, and adjacent to the seventh key point in the gesture of the target object.
Step 1032e, calculating the distance between the first target key point in the first gesture key point and the second target key point in the preset gesture. The first target key point is any key point in the first gesture key points, and the second target key point is any key point in the key points of the preset gesture and corresponds to the first target key point.
That is, for the first gesture key points and the second gesture key points, the straight-line distance between each pair of corresponding key points is calculated.
And 1032f, carrying out normalization processing on the obtained at least one distance to obtain the matching degree between the gesture of the target object and the preset gesture.
Also, a distance range may be set for each keypoint. If the distance between the first target key point and the second target key point is in the corresponding distance range, the first target key point and the second target key point are considered to be matched; otherwise, the two may be considered to be mismatched.
In this step, the obtained at least one distance is normalized to obtain the matching degree between the gesture of the target object and the preset gesture. The matching degree is a number greater than or equal to 0 and less than or equal to 1.
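A compact sketch of steps 1032b through 1032f, with simplifying assumptions: poses are given as (N, 3) arrays of corresponding key points, "size" is approximated by bounding-box height, anchor_idx plays the role of the seventh/eighth key point pair, and the rotation adjustment around the ninth/tenth key points is omitted for brevity:

```python
# Sketch of the distance-based matching route under the assumptions
# stated above; a key point pair "matches" when its distance falls
# inside distance_range, and the matched fraction is the result.
import numpy as np

def distance_matching_degree(user_pose, preset_pose, anchor_idx, distance_range):
    user = np.asarray(user_pose, float)
    preset = np.asarray(preset_pose, float)
    # Step 1032b: scale the target object to the preset gesture's size
    # (bounding-box height along the y axis as a crude size proxy).
    user = user * (np.ptp(preset[:, 1]) / np.ptp(user[:, 1]))
    # Step 1032c: translate so the anchor key points coincide.
    user = user + (preset[anchor_idx] - user[anchor_idx])
    # Steps 1032e-1032f: per-key-point distances, normalized to [0, 1].
    dists = np.linalg.norm(user - preset, axis=1)
    return float(np.mean(dists <= distance_range))
```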
And 104, determining the special effect intensity of the special effect of the gesture according to the matching degree.
In this step, for a target special effect parameter corresponding to the special effect, the sum of the minimum parameter value corresponding to the target special effect parameter and a first value is used as the special effect intensity of that parameter. The first value is the product of the matching degree and the difference between the maximum and minimum parameter values of the target special effect parameter. The target special effect parameter may be, for example, brightness, contrast, size, speed, or frequency, and the special effect may be, for example, sound or light. The special effect intensity refers to the magnitude of a particular special effect parameter within the effect, such as the volume of a sound effect, the brightness of a light effect, or the speed of a motion effect.
In the embodiment of the invention, to make the determined special effect intensity more accurate, the special effect parameters may be normalized; for example, brightness, contrast, size, speed, and frequency are each normalized separately. The normalized range of a given parameter spans from its weakest effect value to its strongest effect value, and the contribution above the weakest value is the product of the matching degree and the difference between the strongest and weakest effect values of that parameter.
Taking a sound special effect as an example, the weakest effect, the strongest effect, and the normalized special effect intensity corresponding to a special effect parameter are shown in Table 1 below:

TABLE 1

Special effect parameter | Weakest effect | Strongest effect | Normalized special effect intensity
Sound volume | V_min | V_max | V_min + (V_max - V_min) x matching degree

With Table 1, once the matching degree is obtained, the special effect intensity corresponding to a given parameter can be calculated. The more key points that match, the stronger the special effect.
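The rule in Table 1 translates directly into code:

```python
# Direct transcription of the intensity rule: the special effect
# intensity is the minimum parameter value plus the "first value",
# i.e. (max - min) scaled by the matching degree.
def special_effect_intensity(v_min, v_max, matching_degree):
    return v_min + (v_max - v_min) * matching_degree

# E.g. a volume parameter ranging from 0.2 to 1.0 at a 0.75 match:
volume = special_effect_intensity(0.2, 1.0, 0.75)  # -> 0.8
```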
In the embodiment of the invention, the gesture of the target object is matched against the preset gesture to obtain a matching degree, and the special effect intensity of the special effect corresponding to the gesture is then determined from that matching degree. Different matching conditions between the gesture of the target object and the preset gesture therefore yield different special effect intensities, so the scheme of the embodiment of the invention can accurately reflect the special effect of the user's gesture.
In addition, the special effect with the special effect intensity can be displayed, so that a user can know the matching degree of actions conveniently. Or, the special effect with the special effect intensity can be displayed under the condition that the obtained matching degree meets the preset requirement. The preset requirement may be, for example, that the matching degree is greater than a certain value, and the value may be set according to needs.
In practice, the pose of the target object may last for a period of time or be made up of multiple poses that are continuous in time. Correspondingly, the information of the gesture then comprises information of at least one sub-gesture that is continuous in time. Therefore, when determining the matching degree, to make the resulting special effect intensity more accurate, the matching degree between each sub-gesture and the preset gesture can first be determined from the information of the at least one sub-gesture, giving at least one matching degree; each one may be determined in the manner described in the foregoing embodiments. The at least one matching degree is then processed with the DTW (Dynamic Time Warping) algorithm, and the processing result is taken as the matching degree between the gesture of the target object and the preset gesture. In this way, the matching degree of each sub-gesture across the continuous gesture change is taken into account, yielding an intermediate matching degree value.
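The text leaves the exact DTW usage open; one plausible reading, assumed in the sketch below, is to align the user's sub-gesture sequence against the preset sequence with classic DTW, using 1 - matching degree as the local cost, and to fold the accumulated path cost back into a single matching degree:

```python
# Sketch only: the patent says the per-sub-gesture matching degrees are
# "processed with DTW" without fixing the details. match_fn(user_pose,
# preset_pose) is any single-pose matcher returning a degree in [0, 1],
# e.g. a wrapper around angle_matching_degree above.
import numpy as np

def dtw_matching_degree(match_fn, user_seq, preset_seq):
    n, m = len(user_seq), len(preset_seq)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = 1.0 - match_fn(user_seq[i - 1], preset_seq[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    # n + m bounds the warping-path length; a crude normalizer that
    # maps the accumulated cost back into a degree in [0, 1].
    return max(0.0, 1.0 - cost[n, m] / (n + m))
```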
In the above embodiment, key points, and the included angles they form, may also be designated for a particular standard gesture. The selected key points then govern the special effect intensity: the special effect is triggered only once the matching of those key points meets a certain condition, and their matching degree affects the strength of the effect. For example, if the user imitates Sun Wukong launching the "turtle wave" qigong (the Kamehameha) and the foot and waist actions reach a certain matching degree, the special effect is triggered; the closer the palm motion is to the preset motion, the stronger the generated qigong wave effect.
The embodiment of the invention also provides an information determining device. Referring to fig. 4, fig. 4 is a block diagram of an information determining apparatus provided in an embodiment of the present invention. Since the principle of solving the problem by the information determining apparatus is similar to that of the information determining method in the embodiment of the present invention, the implementation of the information determining apparatus may refer to the implementation of the method, and the repetition is not repeated.
As shown in fig. 4, the information determining apparatus 400 includes:
a first acquiring module 401, configured to acquire an image of a target object; a first extraction module 402, configured to extract information of the pose of the target object from the image; a first determining module 403, configured to determine, according to the information of the gesture, a matching degree between the gesture of the target object and a preset gesture; and a second determining module 404, configured to determine, according to the matching degree, a special effect intensity of the special effect of the gesture.
Optionally, the first extraction module 402 may include:
the first extraction sub-module is used for extracting first gesture key points from the image, wherein the number of the first gesture key points is at least three; the first computing sub-module is used for computing an included angle between a first connecting line and a second connecting line for a first key point, a second key point and a third key point in the first gesture key points; the first key point, the second key point and the third key point are three key points which are adjacent to each other in any sequence in the first gesture key; the second keypoint is located between the first keypoint and the third keypoint; the first connecting line is a connecting line between the second key point and the first key point, and the second connecting line is a connecting line between the second key point and the third key point.
Optionally, the first determining module 403 may include:
the first determining submodule is used for determining a second gesture key point of the preset gesture; the first calculating sub-module is used for calculating a second matching degree between a first included angle and a second included angle in the preset postures for the first included angle; the first acquisition sub-module is used for carrying out normalization processing on the obtained at least one second matching degree to obtain the matching degree between the gesture of the target object and the preset gesture; the second included angle is an included angle between a third connecting line and a fourth connecting line, the third connecting line is a connecting line between a fourth key point and a fifth key point, the fourth connecting line is a connecting line between the fifth key point and a sixth key point, and the fourth key point, the fifth key point and the sixth key point are three key points which are sequentially adjacent in the second gesture key point and respectively correspond to the three key points forming the first included angle in the first gesture key point;
the fifth keypoint is located between the fourth keypoint and the sixth keypoint.
Optionally, the first determining module 403 may include:
the second determining submodule is used for determining a second gesture key point of the preset gesture; the first adjusting sub-module is used for adjusting the size of the target object in the image of the target object to be consistent with the size of the preset gesture; the second adjustment sub-module is used for determining a corresponding eighth key point in the second gesture key points for a seventh key point in the first gesture key points, and adjusting the gesture of the target object so that the seventh key point and the eighth key point are overlapped; a third adjustment sub-module, configured to determine, for a ninth key point of the first gesture key points, a corresponding tenth key point of the second gesture key points, and adjust a gesture of the target object so that a distance between the ninth key point and the tenth key point is minimum; the second computing sub-module is used for respectively computing the distance between a first target key point in the first gesture key points and a second target key point in the preset gesture; the first target key point is any key point in the first gesture key points, and the second target key point is any key point in the key points of the preset gesture and corresponds to the first target key point; and the second acquisition sub-module is used for carrying out normalization processing on the obtained at least one distance to obtain the matching degree between the gesture of the target object and the preset gesture.
Optionally, the second determining module 404 is specifically configured to, for a target special effect parameter corresponding to the special effect, use a sum of a minimum parameter value corresponding to the target special effect parameter and the first value as the special effect intensity of the target special effect parameter; the first value is the product of the difference between the maximum parameter value and the minimum parameter value corresponding to the target special effect parameter and the matching degree.
Optionally, the information of the gesture includes information of at least one sub-gesture that is continuous in time; the first determining module 403 may include:
the third determining submodule is used for respectively determining the matching degree between the at least one sub-gesture and the preset gesture according to the information of the at least one sub-gesture to obtain at least one matching degree; and a fourth determining submodule, configured to process the at least one matching degree by using DTW, and use a processing result as a matching degree between the gesture of the target object and a preset gesture.
Optionally, the apparatus may further include:
and the processing module is used for carrying out normalization processing on the special effect parameters of the special effect.
Optionally, the apparatus may further include: a display module, configured to display the special effect with the special effect intensity.
The device provided by the embodiment of the present invention may execute the above method embodiment; its implementation principle and technical effects are similar and are not repeated here.
As shown in fig. 5, the information determining apparatus of the embodiment of the present invention includes: a processor 500, configured to read the program in the memory 520 and perform the following process:
acquiring an image of a target object;
extracting information of the gesture of the target object from the image;
determining the matching degree between the gesture of the target object and a preset gesture according to the gesture information;
and determining the special effect intensity of the special effect of the gesture according to the matching degree.
Wherein in fig. 5, a bus architecture may comprise any number of interconnected buses and bridges, and in particular one or more processors represented by processor 500 and various circuits of memory represented by memory 520, linked together. The bus architecture may also link together various other circuits such as peripheral devices, voltage regulators, power management circuits, etc., which are well known in the art and, therefore, will not be described further herein. The bus interface provides an interface. The processor 500 is responsible for managing the bus architecture and general processing, and the memory 520 may store data used by the processor 500 in performing operations.
The processor 500 is further configured to read the program and perform the following steps:
extracting first gesture key points from the image, wherein the number of the first gesture key points is at least three;
calculating an included angle between a first connecting line and a second connecting line for a first key point, a second key point and a third key point in the first gesture key points;
the first key point, the second key point and the third key point are any three sequentially adjacent key points among the first gesture key points; the second key point is located between the first key point and the third key point;
the first connecting line is a connecting line between the second key point and the first key point, and the second connecting line is a connecting line between the second key point and the third key point.
The processor 500 is further configured to read the program and perform the following steps:
determining a second gesture key point of the preset gesture;
for a first included angle in the included angles, calculating a second matching degree between the first included angle and a second included angle in the preset gesture;
normalizing the obtained at least one second matching degree to obtain the matching degree between the gesture of the target object and the preset gesture;
the second included angle is an included angle between a third connecting line and a fourth connecting line, the third connecting line is a connecting line between a fourth key point and a fifth key point, the fourth connecting line is a connecting line between the fifth key point and a sixth key point, and the fourth key point, the fifth key point and the sixth key point are three key points which are sequentially adjacent in the second gesture key point and respectively correspond to the three key points forming the first included angle in the first gesture key point; the fifth keypoint is located between the fourth keypoint and the sixth keypoint.
The processor 500 is further configured to read the program and perform the following steps:
determining a second gesture key point of the preset gesture;
adjusting the size of the target object in the image of the target object to be consistent with the size of the preset gesture;
for a seventh key point in the first gesture key points, determining a corresponding eighth key point in the second gesture key points, and adjusting the gesture of the target object so that the seventh key point and the eighth key point are overlapped;
for a ninth key point in the first gesture key points, determining a corresponding tenth key point in the second gesture key points, and adjusting the gesture of the target object so that the distance between the ninth key point and the tenth key point is minimum;
respectively calculating the distance between a first target key point in the first gesture key points and a second target key point in the preset gesture; the first target key point is any key point in the first gesture key points, and the second target key point is any key point in the key points of the preset gesture and corresponds to the first target key point;
and normalizing the obtained at least one distance to obtain the matching degree between the gesture of the target object and the preset gesture.
the processor 500 is further configured to read the program and perform the following steps:
for a target special effect parameter corresponding to the special effect, using the sum of the minimum parameter value corresponding to the target special effect parameter and a first value as the special effect intensity of the target special effect parameter;
the first value is the product of the difference between the maximum parameter value and the minimum parameter value corresponding to the target special effect parameter and the matching degree.
The information of the gesture comprises information of at least one sub-gesture which is continuous in time; the processor 500 is further configured to read the program and perform the following steps:
according to the information of the at least one sub-gesture, the matching degree between the at least one sub-gesture and the preset gesture is respectively determined, and at least one matching degree is obtained;
and processing the at least one matching degree by using a dynamic time warping algorithm DTW, and taking a processing result as the matching degree between the gesture of the target object and a preset gesture.
The processor 500 is further configured to read the program and perform the following steps:
and normalizing the special effect parameters of the special effect.
The processor 500 is further configured to read the program and perform the following steps:
displaying the special effect with the special effect intensity.
The device provided by the embodiment of the present invention may execute the above method embodiment; its implementation principle and technical effects are similar and are not repeated here.
The embodiment of the invention also provides a computer readable storage medium on which a computer program is stored; when executed by a processor, the program implements the respective processes of the above information determination method embodiment and can achieve the same technical effects, which are not repeated here to avoid repetition. The computer readable storage medium may be, for example, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. In light of such understanding, the technical solutions of the present invention may be embodied essentially or in part in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal (which may be a cell phone, computer, server, air conditioner, or network device, etc.) to perform the methods described in the various embodiments of the present invention.
The embodiments of the present invention have been described above with reference to the accompanying drawings, but the present invention is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present invention and the scope of the claims, which are to be protected by the present invention.

Claims (8)

1. An information determination method, comprising:
acquiring an image of a target object;
extracting information of the pose of the target object from the image, including: extracting first gesture key points from the image, wherein the number of the first gesture key points is at least three; calculating, for a first key point, a second key point and a third key point among the first gesture key points, an included angle between a first connecting line and a second connecting line; the first key point, the second key point and the third key point are any three sequentially adjacent key points among the first gesture key points; the second key point is located between the first key point and the third key point; the first connecting line is a connecting line between the second key point and the first key point, and the second connecting line is a connecting line between the second key point and the third key point;
determining the matching degree between the gesture of the target object and a preset gesture according to the gesture information;
determining, according to the matching degree, the special effect intensity of the special effect of the gesture, comprising the following steps: for a target special effect parameter corresponding to the special effect, using the sum of the minimum parameter value corresponding to the target special effect parameter and a first value as the special effect intensity of the target special effect parameter; the first value is the product of the difference between the maximum parameter value and the minimum parameter value corresponding to the target special effect parameter and the matching degree.
2. The method according to claim 1, wherein determining the matching degree between the pose of the target object and the preset pose according to the pose information comprises:
determining a second gesture key point of the preset gesture;
for a first included angle in the included angles, calculating a second matching degree between the first included angle and a second included angle in the preset gesture;
normalizing the obtained at least one second matching degree to obtain the matching degree between the gesture of the target object and the preset gesture;
the second included angle is an included angle between a third connecting line and a fourth connecting line, the third connecting line is a connecting line between a fourth key point and a fifth key point, the fourth connecting line is a connecting line between the fifth key point and a sixth key point, and the fourth key point, the fifth key point and the sixth key point are three key points which are sequentially adjacent in the second gesture key point and respectively correspond to the three key points forming the first included angle in the first gesture key point;
the fifth keypoint is located between the fourth keypoint and the sixth keypoint.
3. The method according to claim 1, wherein determining the matching degree between the pose of the target object and the preset pose according to the pose information comprises:
determining a second gesture key point of the preset gesture;
adjusting the size of the target object in the image of the target object to be consistent with the size of the preset gesture;
for a seventh key point in the first gesture key points, determining a corresponding eighth key point in the second gesture key points, and adjusting the gesture of the target object so that the seventh key point and the eighth key point are overlapped;
for a ninth key point in the first gesture key points, determining a corresponding tenth key point in the second gesture key points, and adjusting the gesture of the target object so that the distance between the ninth key point and the tenth key point is minimum;
respectively calculating the distance between a first target key point in the first gesture key points and a second target key point in the preset gesture; the first target key point is any key point in the first gesture key points, and the second target key point is any key point in the key points of the preset gesture and corresponds to the first target key point;
and normalizing the obtained at least one distance to obtain the matching degree between the gesture of the target object and the preset gesture.
4. The method of claim 1, wherein the information of the gesture comprises information of at least one sub-gesture that is continuous in time;
the determining the matching degree between the gesture of the target object and the preset gesture comprises the following steps:
according to the information of the at least one sub-gesture, the matching degree between the at least one sub-gesture and the preset gesture is respectively determined, and at least one matching degree is obtained;
and processing the at least one matching degree by using a dynamic time warping algorithm DTW, and taking a processing result as the matching degree between the gesture of the target object and a preset gesture.
5. The method according to claim 1, wherein the method further comprises:
and normalizing the special effect parameters of the special effect.
6. The method of claim 1, wherein after said determining the special effect intensity of the special effect based on said degree of matching, said method further comprises:
displaying the special effect with the special effect intensity.
7. An information determining apparatus, comprising: a memory, a processor, and a program stored on the memory and executable on the processor; characterized in that the processor is configured to read the program in the memory to implement the steps in the information determination method according to any one of claims 1 to 6.
8. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps in the information determination method according to any one of claims 1 to 7.
CN202010241288.7A 2020-03-30 2020-03-30 Information determination method, equipment and computer readable storage medium Active CN111428665B (en)

Priority Applications (1)

Application Number: CN202010241288.7A (CN111428665B) · Priority Date: 2020-03-30 · Filing Date: 2020-03-30 · Title: Information determination method, equipment and computer readable storage medium


Publications (2)

Publication Number · Publication Date
CN111428665A (en) · 2020-07-17
CN111428665B (en) · 2024-04-12

Family

ID=71551754

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010241288.7A Active CN111428665B (en) 2020-03-30 2020-03-30 Information determination method, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN111428665B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109147023A (en) * 2018-07-27 2019-01-04 北京微播视界科技有限公司 Three-dimensional special efficacy generation method, device and electronic equipment based on face
CN109462776A (en) * 2018-11-29 2019-03-12 北京字节跳动网络技术有限公司 A kind of special video effect adding method, device, terminal device and storage medium
CN110113523A (en) * 2019-03-15 2019-08-09 深圳壹账通智能科技有限公司 Intelligent photographing method, device, computer equipment and storage medium
CN110297929A (en) * 2019-06-14 2019-10-01 北京达佳互联信息技术有限公司 Image matching method, device, electronic equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109918975B (en) * 2017-12-13 2022-10-21 腾讯科技(深圳)有限公司 Augmented reality processing method, object identification method and terminal


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Fu Peng; Sun Shijun; Wang Kun; Jin Yuan; Cai Hanhui; Sun Quansen; Zhu Jin. Simulation research on the influence of image MTF on stereo positioning measurement accuracy. Journal of System Simulation, 2013, (05). *
Luo Huilan; Feng Yujie; Kong Fansheng. Action recognition by fusing multi-pose estimation features. Journal of Image and Graphics, 2015, (11). *

Also Published As

Publication number Publication date
CN111428665A (en) 2020-07-17

Similar Documents

Publication Publication Date Title
CN108875524B (en) Sight estimation method, device, system and storage medium
US9330470B2 (en) Method and system for modeling subjects from a depth map
CN108230383B (en) Hand three-dimensional data determination method and device and electronic equipment
JP5715833B2 (en) Posture state estimation apparatus and posture state estimation method
US9262674B2 (en) Orientation state estimation device and orientation state estimation method
US20180321776A1 (en) Method for acting on augmented reality virtual objects
JP7015152B2 (en) Processing equipment, methods and programs related to key point data
CN111191599A (en) Gesture recognition method, device, equipment and storage medium
CN108304819B (en) Gesture recognition system and method, and storage medium
Rhee et al. Human hand modeling from surface anatomy
CN110163113B (en) Human behavior similarity calculation method and device
CN111222379A (en) Hand detection method and device
EP2899706B9 (en) Method and system for analyzing human behavior in an intelligent surveillance system
JP5503510B2 (en) Posture estimation apparatus and posture estimation program
CN109740511A (en) A kind of human face expression matching process, device, equipment and storage medium
CN111368787A (en) Video processing method and device, equipment and computer readable storage medium
CN112418153B (en) Image processing method, device, electronic equipment and computer storage medium
CN111428665B (en) Information determination method, equipment and computer readable storage medium
KR20230036458A (en) Device and Method for Evaluation Posture of User
JP2019133331A (en) Image recognition apparatus, image recognition method, and image recognition program
JP2023527627A (en) Inference of joint rotation based on inverse kinematics
JP2020198019A (en) Method, device and program for skeleton extraction
KR101543150B1 (en) Apparatus for gesture recognition and method thereof
Huang et al. A skeleton-occluded repair method from Kinect
Putz-Leszczynska et al. Gait biometrics with a Microsoft Kinect sensor

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant