
CN113657278A - Motion gesture recognition method, device, equipment and storage medium - Google Patents

Motion gesture recognition method, device, equipment and storage medium

Info

Publication number
CN113657278A
CN113657278A
Authority
CN
China
Prior art keywords
human body
target
video image
target human
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110948521.XA
Other languages
Chinese (zh)
Inventor
卢星宇
杨磊
王天宝
魏华
尹宇芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu University of Information Technology
Chengdu Technological University CDTU
Original Assignee
Chengdu University of Information Technology
Chengdu Technological University CDTU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu University of Information Technology, Chengdu Technological University CDTU filed Critical Chengdu University of Information Technology
Priority to CN202110948521.XA priority Critical patent/CN113657278A/en
Publication of CN113657278A publication Critical patent/CN113657278A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application provides a motion gesture recognition method, apparatus, device, and storage medium. The method comprises: for each frame of video image in a video file, determining, among the at least one human body region included in the video image, a target human body region that is located in the central region of the video image and has the largest area; inputting the target human body region into a preset feature extraction model to obtain the position coordinates of each joint point of the target human body in that region; judging whether the geometric relationship between the position coordinates of the joint points satisfies a preset relationship condition of a standard motion posture; and, if the condition is satisfied, displaying the target human body region in the video image. In this way the athlete's normal exercise is not disturbed, manual workload is reduced, and the recognition efficiency of the motion posture is improved.

Description

Motion gesture recognition method, device, equipment and storage medium
Technical Field
The present application relates to the field of motion gesture recognition, and in particular, to a motion gesture recognition method, apparatus, device, and storage medium.
Background
As living standards continue to improve, more and more people pay attention to their health and take up exercise. A non-standard exercise posture, however, not only reduces the training effect but also easily injures the joints.
To help athletes correct their motion posture, the current practice is to detect the posture by having the athlete wear professional equipment, after which a professional judges from experience whether the posture is standard. Wearing such equipment, however, reduces the athlete's comfort and interferes with normal exercise, and as the number of athletes grows, judging standard motion postures manually creates a heavy workload.
Disclosure of Invention
In view of this, embodiments of the present application provide a motion gesture recognition method, apparatus, device, and storage medium, which avoid interfering with the athlete's normal exercise, reduce manual workload, and improve the recognition efficiency of the motion posture.
The main aspects are as follows:
in a first aspect, an embodiment of the present application provides a motion gesture recognition method, where the method includes:
for each frame of video image in a video file, determining, among the at least one human body region included in the video image, a target human body region that is located in the central region of the video image and has the largest area, wherein the video file is a video in which a target motion is recorded, and a human body region is a region containing a single human body;
inputting the target human body region in the video image into a preset feature extraction model to obtain the position coordinates of each joint point of a target human body in the target human body region, wherein each position coordinate lies in a coordinate system taking a designated position in the video image as the coordinate origin;
judging whether the geometric relationship between the position coordinates of the joint points satisfies a preset relationship condition of a standard motion posture;
and, if the preset relationship condition is satisfied, displaying the target human body region in the video image.
Optionally, when the target motion is a push-up, the joint points of the target human body include the hip, shoulder, ankle, wrist, elbow, and knee joint, and the geometric relationship satisfies the preset relationship condition when it satisfies at least one of the following conditions:
the difference between the ordinate value of the hip and a target value is within a first preset error range, wherein the target value equals the average of the ordinate value of the shoulder and the ordinate value of the ankle, and the hip, shoulder, and ankle are located on the same side of the target human body;
the difference between the bending angle of an arm of the target human body and a first preset angle is within a second preset error range, wherein the bending angle of the arm is calculated from the position coordinates of the wrist, the elbow, and the shoulder, and the arm, wrist, elbow, and shoulder are located on the same side of the target human body;
the difference between the bending angle of a leg of the target human body and a second preset angle is within a third preset error range, wherein the bending angle of the leg is calculated from the position coordinates of the hip, the knee joint, and the ankle, and the leg, hip, knee joint, and ankle are located on the same side of the target human body.
Optionally, the method further includes:
for each pair of adjacent frames consisting of a first video image and a second video image, acquiring a first position coordinate of a target joint point of a first target human body in the first video image and a second position coordinate of a target joint point of a second target human body in the second video image, wherein, when the target motion is a push-up, the target joint point comprises the hip and/or the shoulder;
calculating a speed value of the target motion by using a preset calculation formula according to the first position coordinate, the second position coordinate and a target time interval, wherein the target time interval is a time interval between the first video image and the second video image;
and displaying the speed value.
Optionally, before the target human body region in the video image is input into a preset feature extraction model, the method further includes:
and carrying out network pruning on the feature extraction model.
Optionally, after the displaying the target human body region in the video image, the method further includes:
and adding a label of a standard motion posture to the target human body area.
In a second aspect, an embodiment of the present application provides a motion gesture recognition apparatus, including:
a determining module, configured to determine, for each frame of video image in a video file, a target human body region that is located in the central region of the video image and has the largest area among the at least one human body region contained in the video image, wherein the video file is a video in which a target motion is recorded, and a human body region is a region containing a single human body;
an input module, configured to input the target human body region in the video image into a preset feature extraction model to obtain the position coordinates of each joint point of a target human body in the target human body region, wherein each position coordinate lies in a coordinate system taking a designated position in the video image as the coordinate origin;
a judging module, configured to judge whether the geometric relationship between the position coordinates of the joint points satisfies a preset relationship condition of a standard motion posture;
and a first display module, configured to display the target human body region in the video image if the preset relationship condition is satisfied.
Optionally, when the target motion is a push-up, the joint points of the target human body include the hip, shoulder, ankle, wrist, elbow, and knee joint, and the geometric relationship satisfies the preset relationship condition when it satisfies at least one of the following conditions:
the difference between the ordinate value of the hip and a target value is within a first preset error range, wherein the target value equals the average of the ordinate value of the shoulder and the ordinate value of the ankle, and the hip, shoulder, and ankle are located on the same side of the target human body;
the difference between the bending angle of an arm of the target human body and a first preset angle is within a second preset error range, wherein the bending angle of the arm is calculated from the position coordinates of the wrist, the elbow, and the shoulder, and the arm, wrist, elbow, and shoulder are located on the same side of the target human body;
the difference between the bending angle of a leg of the target human body and a second preset angle is within a third preset error range, wherein the bending angle of the leg is calculated from the position coordinates of the hip, the knee joint, and the ankle, and the leg, hip, knee joint, and ankle are located on the same side of the target human body.
Optionally, the motion gesture recognition apparatus further includes:
the device comprises an acquisition module, a display module and a control module, wherein the acquisition module is used for acquiring a first position coordinate of a target joint point of a first target human body in a first video image and a second position coordinate of a target joint point of a second target human body in a second video image aiming at a first video image and a second video image of every two adjacent frames, and the target joint point comprises a hip and/or a shoulder when the target motion is push-up;
a calculating module, configured to calculate a speed value of the target motion according to the first position coordinate, the second position coordinate, and a target time interval using a preset calculation formula, where the target time interval is a time interval between the first video image and the second video image;
and the second display module is used for displaying the speed value.
Optionally, the input module is further configured to perform network pruning on the feature extraction model before inputting the target human body region in the video image into the preset feature extraction model.
Optionally, the first display module is further configured to add a label of a standard motion posture to the target human body region after displaying the target human body region in the video image.
In a third aspect, an embodiment of the present application provides a computer device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the steps of the motion gesture recognition method according to any one of the above first aspects when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program is executed by a processor to perform the steps of the motion gesture recognition method according to any one of the above first aspects.
The technical solutions provided by the embodiments of the present application can have the following beneficial effects:
the motion gesture recognition method provided by the embodiment of the application uses the video file for recording the target motion of the sporter to detect the motion gesture so as not to influence the normal motion of the sporter, in the process of recording the sports personnel, other irrelevant personnel may be recorded, each personnel is equivalent to a human body, one human body occupies one human body area, so that, for each frame of video image comprised by the video file, the video image comprises at least one body region, in order to improve the detection efficiency of the motion posture, the method only needs to input the human body area where the motion personnel is located (namely, the target human body area) into the feature extraction model, therefore, the target human body region needs to be determined in at least one human body region included in the video image based on the characteristics of the region where the moving person is located (namely, the region is located in the center of the video image and has the largest area); after the target human body area is input into the feature extraction model, the position coordinates of each joint point of the target human body (the moving person) in the target human body area output by the feature extraction model can be obtained, in order to judge whether the movement posture of the moving person is standard, a preset relation condition set according to the standard movement posture of the target movement needs to be used, whether the obtained geometric relation between the position coordinates of each joint point of the moving person meets the preset relation condition is judged, if the preset relation condition is met, the movement posture of the moving person is standard, and therefore the target human body area in the video image is displayed for the moving person to refer to.
In the process, the position coordinates (namely the movement posture) of each joint point of the sportsman in the movement process are detected by using the video file recorded with the target movement of the sportsman, so that the sportsman can wear comfortable sports clothes to move, and the normal movement process of the sportsman is prevented from being influenced; in addition, the preset relation condition set for the standard motion posture is used as the judgment condition, so that the server can judge whether the motion posture of the motion personnel is standard or not according to the geometric relation and the preset relation condition between the position coordinates of all the joint points of the motion personnel, the process does not need manual participation, the manual workload is favorably reduced, and the recognition efficiency of the motion posture is improved.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should therefore not be regarded as limiting its scope; for those skilled in the art, other related drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a flowchart illustrating a motion gesture recognition method according to an embodiment of the present application;
FIG. 2 is a diagram illustrating an example of the position coordinates of the joint points of a target human body according to an embodiment of the present application;
FIG. 3 is a flow chart of another motion gesture recognition method provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram illustrating a motion gesture recognition apparatus provided in a second embodiment of the present application;
fig. 5 shows a schematic structural diagram of a computer device provided in the third embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall fall within the protection scope of the present application.
The embodiments of the present application provide a motion gesture recognition method, apparatus, device, and storage medium, which are described through the following embodiments.
Example one
Fig. 1 shows a flowchart of a motion gesture recognition method provided in an embodiment of the present application, and as shown in fig. 1, the method includes the following steps:
step S101: aiming at each frame of video image in a video file, determining a target human body area which is located in the central area of the video image and has the largest area in at least one human body area included in the video image, wherein the video file is a video recorded with target motion, and the human body area is an area containing a single human body.
Specifically, when a moving object performs the target motion, in order to detect whether its motion posture is standard without interfering with its normal motion, a video file of the moving object performing the target motion must be acquired. The target motion includes sit-ups, push-ups, pull-ups, yoga, and similar exercises. The video file may be recorded in advance with a camera while the moving object performs the target motion, or captured in real time with a network camera; this is not specifically limited here.
The acquired video file comprises at least one frame of video image arranged in chronological order. Because current video capture devices record at high resolution and the capture area is larger than the area occupied by the moving object, each video image may contain at least one person, that is, at least one human body region (the region where a person is located). Each human body region is identified in the video image using the SSD (Single Shot MultiBox Detector) algorithm, and each human body region contains exactly one person, that is, a single human body.
When more than one human body region is identified in the video image, a single target human body region containing only the moving object must be screened out in order to reduce the computation of the subsequent feature extraction model and improve its efficiency. Since the moving object is placed at the center of the video and occupies most of the capture area during shooting, the target human body region is screened according to the principle that it has the largest area and is located in the central region of the video image; a sketch of this selection follows.
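To make the screening principle concrete, the following is a minimal sketch of this selection step in Python. The detector interface, the box format, and the equal weighting of normalized area against distance from the frame center are illustrative assumptions; the application only states that the target region is the largest one located in the central region of the image.

```python
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # (x1, y1, x2, y2) in pixel coordinates

def select_target_region(boxes: List[Box], frame_w: int, frame_h: int) -> Box:
    """Return the detected human box that is largest and most central."""
    cx, cy = frame_w / 2, frame_h / 2

    def score(box: Box) -> float:
        x1, y1, x2, y2 = box
        area = (x2 - x1) * (y2 - y1)
        bx, by = (x1 + x2) / 2, (y1 + y2) / 2
        # Normalized distance of the box center from the frame center.
        dist = (((bx - cx) / frame_w) ** 2 + ((by - cy) / frame_h) ** 2) ** 0.5
        # Larger area and smaller center distance both raise the score;
        # the equal weighting here is an assumption, not from the patent.
        return area / (frame_w * frame_h) - dist

    return max(boxes, key=score)
```

In practice, `boxes` would be the SSD detections filtered to the "person" class for one frame.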
Step S102: inputting the target human body region in the video image into a preset feature extraction model to obtain the position coordinates of each joint point of a target human body in the target human body region, wherein each position coordinate lies in a coordinate system taking a designated position in the video image as the coordinate origin.
Specifically, to reduce the computation of the feature extraction model, after the target human body region in the video image is determined, only that region is input into the model, which outputs the position coordinates of each joint point of the target human body in the region. Here the joint points of the target human body are the joint points of the moving object contained in the region, for example: left wrist, right wrist, left shoulder, right shoulder, left elbow, right elbow, left hip, right hip, left knee joint, right knee joint, left ankle, and right ankle. The coordinate system of the joint-point position coordinates takes a designated position in the video image as its origin, for example the lower-left corner of the video image.
It should be noted that the feature extraction model may be a lightweight MobileNet-V2 model, an OpenPose model, or an AlphaPose model; it is not specifically limited here. A minimal sketch of this step is given below.
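The following sketch illustrates how step S102 could be wired up, assuming a generic pose-estimation callable. `pose_model`, the joint names, and the frame layout (an H×W×3 NumPy array) are placeholders rather than the actual interface of MobileNet-V2, OpenPose, or AlphaPose.

```python
from typing import Dict, Tuple

JOINTS = [
    "left_wrist", "right_wrist", "left_elbow", "right_elbow",
    "left_shoulder", "right_shoulder", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

def extract_joints(frame, box, pose_model) -> Dict[str, Tuple[float, float]]:
    """Crop the target human body region and return joint coordinates
    expressed in the full-image coordinate system (origin at a designated
    position of the video image)."""
    x1, y1, x2, y2 = box
    crop = frame[y1:y2, x1:x2]   # frame assumed to be an H x W x 3 array
    local = pose_model(crop)     # assumed to return {joint: (x, y)} in crop coords
    # Shift crop-local coordinates back into the video-image coordinate system.
    return {name: (x + x1, y + y1) for name, (x, y) in local.items()}
```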
Step S103: judging whether the geometric relationship between the position coordinates of the joint points satisfies the preset relationship condition of the standard motion posture.
Step S104: if the preset relationship condition is satisfied, displaying the target human body region in the video image.
Specifically, after the position coordinates of each joint point of the target human body in the target human body region are acquired, whether the motion posture of the target human body is standard is judged from these coordinates. The preset relationship condition is set in advance according to the positional relationship of the human joint points in the standard motion posture, and the geometric relationship between the joint-point position coordinates represents the positional relationship between the joint points of the target human body; therefore, if the geometric relationship satisfies the preset relationship condition, the motion posture is standard and the target human body region in the video image is displayed.
The motion gesture recognition method shown in Fig. 1 thus achieves the beneficial effects already described in the Disclosure of Invention: the athlete's normal exercise is not disturbed, the target human body region is selected as the central region of largest area, whether the posture is standard is judged automatically against the preset relationship condition without manual participation, manual workload is reduced, and the recognition efficiency of the motion posture is improved.
In another possible embodiment, before the human body regions are identified using the SSD algorithm, the motion gesture recognition method further includes: training the SSD algorithm using the PASCAL VOC dataset carrying human body region labels.
In a possible embodiment, when the target motion is a push-up, the joint points of the target human body include the hip, shoulder, ankle, wrist, elbow, and knee joint, and the geometric relationship between the position coordinates of the joint points satisfies the preset relationship condition when at least one of the following three conditions is met.
Condition one: the difference between the ordinate value of the hip and a target value is within the first preset error range, wherein the target value equals the average of the ordinate value of the shoulder and the ordinate value of the ankle, and the hip, shoulder, and ankle are located on the same side of the target human body.
Specifically, the hip includes a left hip on the left side of the target human body and a right hip on the right side, the shoulder includes a left shoulder and a right shoulder, the ankle includes a left ankle and a right ankle, and the target value includes a first target value and a second target value.
The geometric relationship satisfies condition one when the difference between the ordinate value of the left hip and the first target value is within the first preset error range and the difference between the ordinate value of the right hip and the second target value is within the first preset error range, where the first target value equals the average of the ordinate values of the left shoulder and the left ankle, and the second target value equals the average of the ordinate values of the right shoulder and the right ankle.
It should be noted that the first preset error range is the value that the absolute value of the difference may not exceed; a difference being within the first preset error range means that its absolute value is less than or equal to that value.
Condition two: the difference between the bending angle of an arm of the target human body and the first preset angle is within the second preset error range, wherein the bending angle of the arm is calculated from the position coordinates of the wrist, the elbow, and the shoulder, and the arm, wrist, elbow, and shoulder are located on the same side of the target human body.
Specifically, the arm includes a left arm on the left side of the target human body and a right arm on the right side, the wrist includes a left wrist and a right wrist, the elbow includes a left elbow and a right elbow, and the shoulder includes a left shoulder and a right shoulder.
The geometric relationship satisfies condition two when the difference between the bending angle of the left arm and the first preset angle is within the second preset error range and the difference between the bending angle of the right arm and the first preset angle is within the second preset error range, where the bending angle of the left arm is calculated according to a preset first calculation formula from the position coordinates of the left wrist, left elbow, and left shoulder, and the bending angle of the right arm is calculated according to the same formula from the position coordinates of the right wrist, right elbow, and right shoulder.
The value of the first preset angle may be set according to actual conditions, for example to 90°; it is not specifically limited here.
It should be noted that the description of the second preset error range follows that of the first preset error range and is not repeated here.
Condition three: the difference between the bending angle of a leg of the target human body and the second preset angle is within the third preset error range, wherein the bending angle of the leg is calculated from the position coordinates of the hip, the knee joint, and the ankle, and the leg, hip, knee joint, and ankle are located on the same side of the target human body.
Specifically, the leg includes a left leg on the left side of the target human body and a right leg on the right side, the hip includes a left hip and a right hip, the knee joint includes a left knee joint and a right knee joint, and the ankle includes a left ankle and a right ankle.
The geometric relationship satisfies condition three when the difference between the bending angle of the left leg and the second preset angle is within the third preset error range and the difference between the bending angle of the right leg and the second preset angle is within the third preset error range, where the bending angle of the left leg is calculated according to a preset second calculation formula from the position coordinates of the left hip, left knee joint, and left ankle, and the bending angle of the right leg is calculated according to the same formula from the position coordinates of the right hip, right knee joint, and right ankle.
The value of the second preset angle may be set according to actual conditions, for example to 180°; it is not specifically limited here.
It should be noted that the description of the third preset error range follows that of the first preset error range and is not repeated here.
To illustrate the above three conditions with an example, Fig. 2 shows an exemplary diagram of the position coordinates of the joint points of the target human body provided in the first embodiment of the present application. As shown in Fig. 2, the position coordinates are: left wrist (x_{w1}, y_{w1}), right wrist (x_{w2}, y_{w2}), left elbow (x_{e1}, y_{e1}), right elbow (x_{e2}, y_{e2}), left shoulder (x_{s1}, y_{s1}), right shoulder (x_{s2}, y_{s2}), left hip (x_{h1}, y_{h1}), right hip (x_{h2}, y_{h2}), left knee joint (x_{k1}, y_{k1}), right knee joint (x_{k2}, y_{k2}), left ankle (x_{a1}, y_{a1}), and right ankle (x_{a2}, y_{a2}). Suppose the first preset error range is z, the second preset error range is θ1, the third preset error range is θ2, the first preset angle is 90°, and the second preset angle is 180°.
When the position coordinates of the hip, shoulder, and ankle satisfy |y_{h1} − (y_{s1} + y_{a1})/2| ≤ z and |y_{h2} − (y_{s2} + y_{a2})/2| ≤ z, the geometric relationship meets condition one.
For condition two, the cosine of the bending angle of the left arm — the angle at the left elbow between the elbow–wrist and elbow–shoulder segments — is calculated as:

cos E1 = [(x_{w1} − x_{e1})(x_{s1} − x_{e1}) + (y_{w1} − y_{e1})(y_{s1} − y_{e1})] / [√((x_{w1} − x_{e1})² + (y_{w1} − y_{e1})²) · √((x_{s1} − x_{e1})² + (y_{s1} − y_{e1})²)]

This first cosine value is then looked up in a cosine–angle comparison table to determine the first angle value, which is taken as the bending angle E1 of the left arm; the bending angle E2 of the right arm is obtained in the same way. When the bending angles of the arms satisfy |E1 − 90°| ≤ θ1 and |E2 − 90°| ≤ θ1, the geometric relationship meets condition two.
For condition three, the cosine of the bending angle of the left leg — the angle at the left knee joint between the knee–hip and knee–ankle segments — is calculated analogously:

cos k1 = [(x_{h1} − x_{k1})(x_{a1} − x_{k1}) + (y_{h1} − y_{k1})(y_{a1} − y_{k1})] / [√((x_{h1} − x_{k1})² + (y_{h1} − y_{k1})²) · √((x_{a1} − x_{k1})² + (y_{a1} − y_{k1})²)]

The second angle value determined from this second cosine value via the comparison table is taken as the bending angle k1 of the left leg; the bending angle k2 of the right leg is obtained in the same way. When the bending angles of the legs satisfy |k1 − 180°| ≤ θ2 and |k2 − 180°| ≤ θ2, the geometric relationship meets condition three.
In this way, the standard motion posture is defined mathematically as the preset relationship condition, and whether the motion posture of the target human body is standard is judged against this condition; a code sketch of the three checks follows.
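A minimal sketch of the three checks, assuming the joint coordinates are given as a dictionary keyed by joint name. For brevity it checks only the left side, whereas the application requires the corresponding right-side quantities to stay within the same error ranges; z, theta1, and theta2 correspond to the first, second, and third preset error ranges.

```python
import math
from typing import Dict, Tuple

Point = Tuple[float, float]

def bend_angle(a: Point, vertex: Point, c: Point) -> float:
    """Angle (in degrees) at `vertex` between segments vertex->a and
    vertex->c, i.e. the bending angle used in conditions two and three."""
    v1 = (a[0] - vertex[0], a[1] - vertex[1])
    v2 = (c[0] - vertex[0], c[1] - vertex[1])
    cos_val = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_val))))

def is_standard_pushup(j: Dict[str, Point], z: float,
                       theta1: float, theta2: float) -> bool:
    """True if the left-side joints satisfy at least one of the three
    conditions (the patent checks the right side in the same way)."""
    hip, shoulder, ankle = j["left_hip"], j["left_shoulder"], j["left_ankle"]
    wrist, elbow, knee = j["left_wrist"], j["left_elbow"], j["left_knee"]

    # Condition one: hip ordinate close to the shoulder/ankle average.
    cond1 = abs(hip[1] - (shoulder[1] + ankle[1]) / 2) <= z
    # Condition two: elbow angle close to the first preset angle (90 deg).
    cond2 = abs(bend_angle(wrist, elbow, shoulder) - 90.0) <= theta1
    # Condition three: knee angle close to the second preset angle (180 deg).
    cond3 = abs(bend_angle(hip, knee, ankle) - 180.0) <= theta2
    return cond1 or cond2 or cond3
```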
In a possible implementation, fig. 3 shows a flowchart of another motion gesture recognition method provided in an embodiment of the present application, and as shown in fig. 3, the motion gesture recognition method further includes the following steps:
step S301: and aiming at a first video image and a second video image of every two adjacent frames, acquiring a first position coordinate of a target joint point of a first target human body in the first video image and acquiring a second position coordinate of a target joint point of a second target human body in the second video image, wherein when the target movement is push-up, the target joint point comprises a hip and/or a shoulder.
Step S302: calculating the speed value of the target motion using a preset calculation formula from the first position coordinate, the second position coordinate, and a target time interval, wherein the target time interval is the time interval between the first video image and the second video image.
Step S303: displaying the speed value.
Specifically, each video file consists of at least one frame of video image arranged in chronological order, and the time interval between every two adjacent frames is the same. For every pair of adjacent frames, the frame number of the first video image is smaller than that of the second video image; the first target human body in the first video image is located in a first target human body region of that image, and the second target human body in the second video image is located in a second target human body region of that image.
To determine the speed of the moving object during the time interval between the first and second video images, the first position coordinate of the target joint point of the first target human body and the second position coordinate of the target joint point of the second target human body are first acquired; then the target time interval between the two images is calculated; finally, the speed value of the target motion between the two images is calculated from the first position coordinate, the second position coordinate, and the target time interval using a preset calculation formula. The joint points represented by the first and second position coordinates are the same joint point on the same side of the target human body; for example, if the first position coordinate is the position of the hip in the first video image, the second position coordinate is the position of the hip in the second video image. After the speed value of the target motion between the two images is determined, it is displayed.
It should be noted that the display mode of the speed values may be set according to actual needs, for example, the speed values of the target motion between the video images of every two adjacent frames in the video file may be displayed in a table form, or a graph may be created and displayed by using the frame number of each video image included in the video file as a numerical value on the abscissa and the speed value of the target motion as a numerical value on the ordinate, and the specific display mode is not specifically limited herein.
For example, when the target motion is a push-up and the target joint point includes the hip: if the position coordinate of the left hip of the first target human body in the first video image is (x_{h}, y_{h}), the position coordinate of the left hip of the second target human body in the second video image is (x_{h+1}, y_{h+1}), and the time interval between the two images is t, the speed value V1 of the target motion is calculated as:

V1 = √((x_{h+1} − x_{h})² + (y_{h+1} − y_{h})²) / t

When the target joint point includes the shoulder: if the position coordinate of the left shoulder of the first target human body in the first video image is (x_{s}, y_{s}), the position coordinate of the left shoulder of the second target human body in the second video image is (x_{s+1}, y_{s+1}), and the time interval is t, the speed value V2 is calculated as:

V2 = √((x_{s+1} − x_{s})² + (y_{s+1} − y_{s})²) / t

When the target joint point includes both the hip and the shoulder, the speed value of the target motion may be taken, following the above calculations, as the average of V1 and V2.
In the above example, the position coordinates of the left hip may be replaced with the position coordinates of the right hip or an average value of the position coordinates of the left hip and the right hip; the position coordinates of the left shoulder may be replaced with the position coordinates of the right shoulder or an average of the position coordinates of the left shoulder and the right shoulder.
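A minimal sketch of this speed calculation; the function names are illustrative, and the result is expressed in image coordinates per unit time (pixels per second) unless the coordinates are calibrated to physical units.

```python
import math
from typing import Tuple

Point = Tuple[float, float]

def motion_speed(p1: Point, p2: Point, t: float) -> float:
    """Speed of one target joint point (e.g. the hip) between two adjacent
    frames: Euclidean displacement divided by the frame interval t."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1]) / t

def pushup_speed(hip1: Point, hip2: Point,
                 shoulder1: Point, shoulder2: Point, t: float) -> float:
    """When both hip and shoulder are tracked, average the two speeds,
    as described in the embodiment above."""
    return (motion_speed(hip1, hip2, t) + motion_speed(shoulder1, shoulder2, t)) / 2
```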
In a possible embodiment, before performing step S102, the motion gesture recognition method further includes: and carrying out network pruning on the feature extraction model.
Specifically, to increase the feature extraction speed of the model while keeping its accuracy stable, the volume and computation of the feature extraction model are reduced by network pruning; that is, network pruning is performed on the feature extraction model.
It should be noted that, in the present application, network pruning is performed by introducing scale factors via the BN (Batch Normalization) layers, which are then used to prune channels in the feature extraction model.
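This description matches the BN scale-factor channel-pruning approach commonly known as network slimming. The sketch below, written against PyTorch as an assumption (the patent does not name a framework), shows the two ingredients: an L1 sparsity penalty on the BN scale factors during training, and a global threshold for selecting the channels to keep. The penalty weight and pruning ratio are illustrative values, not taken from the application.

```python
import torch
import torch.nn as nn

def bn_sparsity_penalty(model: nn.Module, lam: float = 1e-4) -> torch.Tensor:
    """L1 penalty on the BN scale factors (gamma), added to the training
    loss so that the scale factors of unimportant channels shrink to zero."""
    return lam * sum(m.weight.abs().sum()
                     for m in model.modules()
                     if isinstance(m, nn.BatchNorm2d))

def channels_to_keep(model: nn.Module, prune_ratio: float = 0.3) -> dict:
    """After sparse training, keep the channels whose |gamma| lies above
    the global threshold implied by the pruning ratio."""
    gammas = torch.cat([m.weight.detach().abs().flatten()
                        for m in model.modules()
                        if isinstance(m, nn.BatchNorm2d)])
    threshold = torch.quantile(gammas, prune_ratio)
    return {name: (m.weight.detach().abs() > threshold).nonzero().flatten()
            for name, m in model.named_modules()
            if isinstance(m, nn.BatchNorm2d)}
```

During training, `bn_sparsity_penalty(model)` is simply added to the task loss before backpropagation; after convergence, the channel indices returned by `channels_to_keep` define the slimmed network.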
In a possible implementation, after performing step S104, the motion gesture recognition method further includes: and adding a label of a standard motion posture to the target human body area.
Specifically, after the target human body region in the video image is displayed on the human-computer interaction interface, a label is added to the target human body region to aid the user's understanding; the label indicates that the motion posture of the target human body in the region is a standard motion posture.
It should be noted that the content of the label may be set according to the actual situation; for example, it may include the text "standard motion posture" and may also include the frame number of the video image to which it is added. The display mode of the label may likewise be set as needed; for example, it may be displayed as a watermark over the target human body region or marked below the region as an annotation. Neither is specifically limited here; a minimal overlay sketch is given below.
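A minimal sketch of such an annotation using OpenCV; the label text, color, and placement below the region are illustrative choices, not specified by the application.

```python
import cv2

def annotate_region(frame, box, frame_idx: int):
    """Draw the target region and a standard-posture label beneath it."""
    x1, y1, x2, y2 = box
    cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
    label = f"standard motion posture (frame {frame_idx})"
    cv2.putText(frame, label, (x1, y2 + 20),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    return frame
```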
Example two
Fig. 4 is a schematic structural diagram of a motion gesture recognition apparatus according to a second embodiment of the present application, and as shown in fig. 4, the motion gesture recognition apparatus includes:
a determining module 401, configured to determine, for each frame of video image in a video file, a target human body region that is located in a central region of the video image and has a largest area in at least one human body region included in the video image, where the video file is a video in which target motion is recorded, and the human body region is a region including a single human body;
an input module 402, configured to input the target human body region in the video image into a preset feature extraction model, to obtain position coordinates of each joint point of the target human body in the target human body region, where each of the position coordinates is located in a coordinate system using a specified position in the video image as a coordinate origin;
a judging module 403, configured to judge whether a geometric relationship between position coordinates of each joint point meets a preset relationship condition of a standard motion posture;
a first display module 404, configured to display the target human body region in the video image if the preset relationship condition is met.
In a possible embodiment, when the target motion is a push-up, the joint points of the target human body include the hip, shoulder, ankle, wrist, elbow, and knee joint, and the geometric relationship satisfies the preset relationship condition when it satisfies at least one of the following conditions:
the difference between the ordinate value of the hip and a target value is within a first preset error range, wherein the target value equals the average of the ordinate value of the shoulder and the ordinate value of the ankle, and the hip, shoulder, and ankle are located on the same side of the target human body;
the difference between the bending angle of an arm of the target human body and a first preset angle is within a second preset error range, wherein the bending angle of the arm is calculated from the position coordinates of the wrist, the elbow, and the shoulder, and the arm, wrist, elbow, and shoulder are located on the same side of the target human body;
the difference between the bending angle of a leg of the target human body and a second preset angle is within a third preset error range, wherein the bending angle of the leg is calculated from the position coordinates of the hip, the knee joint, and the ankle, and the leg, hip, knee joint, and ankle are located on the same side of the target human body.
In a possible embodiment, the motion gesture recognition apparatus further includes:
the device comprises an acquisition module, a display module and a control module, wherein the acquisition module is used for acquiring a first position coordinate of a target joint point of a first target human body in a first video image and a second position coordinate of a target joint point of a second target human body in a second video image aiming at a first video image and a second video image of every two adjacent frames, and the target joint point comprises a hip and/or a shoulder when the target motion is push-up;
a calculating module, configured to calculate a speed value of the target motion according to the first position coordinate, the second position coordinate, and a target time interval using a preset calculation formula, where the target time interval is a time interval between the first video image and the second video image;
and the second display module is used for displaying the speed value.
In a possible embodiment, the input module 402 is further configured to perform network pruning on the feature extraction model before inputting the target human body region in the video image into the preset feature extraction model.
In a possible embodiment, the first display module 404 is further configured to add a label of a standard motion posture to the target human body region after displaying the target human body region in the video image.
The apparatus provided in the embodiments of the present application may be specific hardware on a device, or software or firmware installed on a device. The apparatus has the same implementation principle and technical effect as the foregoing method embodiments; for the sake of brevity, reference may be made to the corresponding contents of the method embodiments for any part of the apparatus embodiments not mentioned here. It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the foregoing systems, apparatuses, and units may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
The motion gesture recognition apparatus provided by the embodiments of the present application achieves the same beneficial effects as the motion gesture recognition method described above, which are not repeated here.
EXAMPLE III
Fig. 5 is a schematic structural diagram of a computer device provided in a third embodiment of the present application, and as shown in fig. 5, the device includes a memory 501, a processor 502, and a computer program stored in the memory 501 and executable on the processor 502, where the processor 502 implements the motion gesture recognition method when executing the computer program.
Specifically, the memory 501 and the processor 502 may be a general-purpose memory and processor, which are not specifically limited here. When the processor 502 runs the computer program stored in the memory 501, the motion gesture recognition method is executed, which solves the prior-art problems that wearing professional equipment affects the athlete's normal motion and that manual judgment entails a heavy workload.
Example four
The embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the motion gesture recognition method are performed.
Specifically, the storage medium may be a general-purpose storage medium, such as a removable disk or a hard disk. When the computer program on the storage medium is run, the motion gesture recognition method is executed, which solves the prior-art problems that wearing professional equipment affects the athlete's normal motion and that manual judgment entails a heavy workload.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments provided in the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus once an item is defined in one figure, it need not be further defined and explained in subsequent figures, and moreover, the terms "first", "second", "third", etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above embodiments are merely specific embodiments of the present application, used to illustrate rather than limit its technical solutions, and the protection scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the art may still modify the technical solutions described in the foregoing embodiments, or readily conceive of changes, or make equivalent substitutions of some technical features within the technical scope disclosed herein; such modifications, changes, or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application and are intended to be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A motion gesture recognition method, comprising:
for each frame of video image in a video file, determining a target human body area which is located in the central area of the video image and has the largest area among at least one human body area included in the video image, wherein the video file is a video in which a target motion is recorded, and each human body area is an area containing a single human body;
inputting the target human body area in the video image into a preset feature extraction model to obtain position coordinates of each joint point of a target human body in the target human body area, wherein each position coordinate is located in a coordinate system taking a designated position in the video image as a coordinate origin;
judging whether a geometric relation between the position coordinates of the joint points meets a preset relation condition of a standard motion posture;
and if the preset relation condition is met, displaying the target human body area in the video image.
2. The method of claim 1, wherein, when the target motion is a push-up, the joint points of the target human body comprise: the hip, shoulder, ankle, wrist, elbow and knee joint, and the geometric relation satisfying the preset relation condition comprises:
the geometric relation satisfying at least one of the following conditions:
a difference value between a vertical coordinate value of the hip and a target value is within a first preset error range, wherein the target value is equal to an average value of a vertical coordinate value of the shoulder and a vertical coordinate value of the ankle, and the hip, the shoulder and the ankle are located on the same side of the target human body;
a difference value between a bending angle of an arm of the target human body and a first preset angle is within a second preset error range, wherein the bending angle of the arm is calculated according to the position coordinate of the wrist, the position coordinate of the elbow and the position coordinate of the shoulder, and the arm, the wrist, the elbow and the shoulder are located on the same side of the target human body;
a difference value between a bending angle of a leg of the target human body and a second preset angle is within a third preset error range, wherein the bending angle of the leg is calculated according to the position coordinate of the hip, the position coordinate of the knee joint and the position coordinate of the ankle, and the leg, the hip, the knee joint and the ankle are located on the same side of the target human body.
3. The method of claim 1, wherein the method further comprises:
for every two adjacent frames, namely a first video image and a second video image, acquiring a first position coordinate of a target joint point of a first target human body in the first video image and a second position coordinate of a target joint point of a second target human body in the second video image, wherein, when the target motion is a push-up, the target joint point comprises the hip and/or the shoulder;
calculating a speed value of the target motion by using a preset calculation formula according to the first position coordinate, the second position coordinate and a target time interval, wherein the target time interval is a time interval between the first video image and the second video image;
and displaying the speed value.
4. The method of claim 1, wherein, before the inputting of the target human body area in the video image into the preset feature extraction model, the method further comprises:
performing network pruning on the feature extraction model.
5. The method of claim 1, wherein, after the displaying of the target human body area in the video image, the method further comprises:
adding a label of a standard motion posture to the target human body area.
6. A motion gesture recognition apparatus, comprising:
a determining module, used for determining, for each frame of video image in a video file, a target human body area which is located in the central area of the video image and has the largest area among at least one human body area contained in the video image, wherein the video file is a video in which a target motion is recorded, and each human body area is an area containing a single human body;
an input module, used for inputting the target human body area in the video image into a preset feature extraction model to obtain position coordinates of each joint point of a target human body in the target human body area, wherein each position coordinate is located in a coordinate system taking a designated position in the video image as a coordinate origin;
a judging module, used for judging whether a geometric relation between the position coordinates of the joint points meets a preset relation condition of a standard motion posture; and
a first display module, used for displaying the target human body area in the video image if the preset relation condition is met.
7. The apparatus of claim 6, wherein the motion gesture recognition apparatus further comprises:
an acquisition module, used for acquiring, for every two adjacent frames, namely a first video image and a second video image, a first position coordinate of a target joint point of a first target human body in the first video image and a second position coordinate of a target joint point of a second target human body in the second video image, wherein, when the target motion is a push-up, the target joint point comprises the hip and/or the shoulder;
a calculating module, used for calculating a speed value of the target motion by using a preset calculation formula according to the first position coordinate, the second position coordinate and a target time interval, wherein the target time interval is a time interval between the first video image and the second video image; and
a second display module, used for displaying the speed value.
8. The apparatus of claim 6, wherein the input module is further used for performing network pruning on the feature extraction model before inputting the target human body area in the video image into the preset feature extraction model.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the method of any one of claims 1-5 are implemented when the computer program is executed by the processor.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when executed by a processor, carries out the steps of the method of any one of claims 1-5.
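
For illustration, what follows is a minimal Python sketch of how the geometric check of claim 2 and the speed calculation of claim 3 could be realized; it is a sketch under stated assumptions, not the patented implementation. The keypoint layout and every name and default value in it (angle_at, is_standard_pushup, speed, hip_tol, arm_target and so on) are hypothetical, and the joint coordinates are presumed to come from the preset feature extraction model of claim 1, expressed in the image coordinate system of claim 1.

import math

# Hypothetical keypoint layout: (x, y) coordinates of same-side joints in the
# image coordinate system of claim 1, for example
# kp = {"shoulder": (320, 180), "elbow": (300, 240), "wrist": (290, 300),
#       "hip": (420, 200), "knee": (500, 225), "ankle": (580, 250)}

def angle_at(a, b, c):
    # Bending angle in degrees at joint b, formed by the segments b->a and b->c.
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    norm = math.hypot(*v1) * math.hypot(*v2)
    if norm == 0.0:
        raise ValueError("coincident joint coordinates")
    cos = (v1[0] * v2[0] + v1[1] * v2[1]) / norm
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def is_standard_pushup(kp,
                       hip_tol=15.0,      # first preset error range (pixels), assumed
                       arm_target=170.0,  # first preset angle (degrees), assumed
                       arm_tol=15.0,      # second preset error range, assumed
                       leg_target=175.0,  # second preset angle (degrees), assumed
                       leg_tol=10.0):     # third preset error range, assumed
    # Claim 2: the posture counts as standard if at least one condition holds.
    # Condition 1: hip vertical coordinate close to the mean of shoulder and ankle.
    hip_ok = abs(kp["hip"][1] - (kp["shoulder"][1] + kp["ankle"][1]) / 2.0) <= hip_tol
    # Condition 2: arm bending angle (wrist-elbow-shoulder) near the preset angle.
    arm_ok = abs(angle_at(kp["wrist"], kp["elbow"], kp["shoulder"]) - arm_target) <= arm_tol
    # Condition 3: leg bending angle (hip-knee-ankle) near the preset angle.
    leg_ok = abs(angle_at(kp["hip"], kp["knee"], kp["ankle"]) - leg_target) <= leg_tol
    return hip_ok or arm_ok or leg_ok

def speed(p1, p2, dt):
    # Claim 3: displacement of a target joint point (e.g. the hip) between two
    # adjacent frames, divided by the target time interval dt in seconds.
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1]) / dt

For a 30 fps video the target time interval would be dt = 1/30 s, so speed() returns pixels per second; converting to a physical speed would require camera calibration, which the claims do not address. Claims 4 and 8 additionally mention network pruning of the feature extraction model without fixing a technique; one common possibility, again an assumption rather than the patented method, is magnitude-based pruning of the convolution weights using PyTorch's built-in pruning utilities (the name prune_conv_layers and the 30% amount are hypothetical choices):

import torch
import torch.nn.utils.prune as prune

def prune_conv_layers(model: torch.nn.Module, amount: float = 0.3) -> torch.nn.Module:
    # Zero out the smallest-magnitude weights in every convolution layer.
    for module in model.modules():
        if isinstance(module, torch.nn.Conv2d):
            prune.l1_unstructured(module, name="weight", amount=amount)
            prune.remove(module, "weight")  # bake the pruning mask into the weights
    return model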
CN202110948521.XA 2021-08-18 2021-08-18 Motion gesture recognition method, device, equipment and storage medium Pending CN113657278A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110948521.XA CN113657278A (en) 2021-08-18 2021-08-18 Motion gesture recognition method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110948521.XA CN113657278A (en) 2021-08-18 2021-08-18 Motion gesture recognition method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113657278A (en) 2021-11-16

Family

ID=78480947

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110948521.XA Pending CN113657278A (en) 2021-08-18 2021-08-18 Motion gesture recognition method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113657278A (en)

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104658009A (en) * 2015-01-09 2015-05-27 北京环境特性研究所 Moving-target detection method based on video images
CN107220604A (en) * 2017-05-18 2017-09-29 清华大学深圳研究生院 A kind of fall detection method based on video
CN110321754A (en) * 2018-03-28 2019-10-11 西安铭宇信息科技有限公司 A kind of human motion posture correcting method based on computer vision and system
CN109063661A (en) * 2018-08-09 2018-12-21 上海弈知信息科技有限公司 Gait analysis method and device
CN109432753A (en) * 2018-09-26 2019-03-08 Oppo广东移动通信有限公司 Act antidote, device, storage medium and electronic equipment
CN109753891A (en) * 2018-12-19 2019-05-14 山东师范大学 Football player's orientation calibration method and system based on human body critical point detection
CN109815907A (en) * 2019-01-25 2019-05-28 深圳市象形字科技股份有限公司 A kind of sit-ups attitude detection and guidance method based on computer vision technique
CN110135246A (en) * 2019-04-03 2019-08-16 平安科技(深圳)有限公司 A kind of recognition methods and equipment of human action
CN110147717A (en) * 2019-04-03 2019-08-20 平安科技(深圳)有限公司 A kind of recognition methods and equipment of human action
CN110170159A (en) * 2019-06-27 2019-08-27 郭庆龙 A kind of human health's action movement monitoring system
CN112237730A (en) * 2019-07-17 2021-01-19 腾讯科技(深圳)有限公司 Body-building action correcting method and electronic equipment
CN110427900A (en) * 2019-08-07 2019-11-08 广东工业大学 A kind of method, apparatus and equipment of intelligent guidance body-building
CN110751100A (en) * 2019-10-22 2020-02-04 北京理工大学 Auxiliary training method and system for stadium
CN110991292A (en) * 2019-11-26 2020-04-10 爱菲力斯(深圳)科技有限公司 Action identification comparison method and system, computer storage medium and electronic device
CN111597879A (en) * 2020-04-03 2020-08-28 成都云盯科技有限公司 Gesture detection method, device and system based on monitoring video
CN111931701A (en) * 2020-09-11 2020-11-13 平安国际智慧城市科技股份有限公司 Gesture recognition method and device based on artificial intelligence, terminal and storage medium
CN112798811A (en) * 2020-12-30 2021-05-14 杭州海康威视数字技术股份有限公司 Speed measurement method, device and equipment
CN112668531A (en) * 2021-01-05 2021-04-16 重庆大学 Motion posture correction method based on motion recognition
CN113033369A (en) * 2021-03-18 2021-06-25 北京达佳互联信息技术有限公司 Motion capture method, motion capture device, electronic equipment and computer-readable storage medium
CN113255623A (en) * 2021-07-14 2021-08-13 北京壹体科技有限公司 System and method for intelligently identifying push-up action posture completion condition

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
BIN CHAO et al.: "Research and realization of crouch start correction system based on human pose estimation", 2020 2nd International Conference on Information Technology and Computer Application (ITCA) *
HO-JUN PARK et al.: "Imagery based Parametric Classification of Correct and Incorrect Motion for Push-up Counter Using OpenPose", 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE) *
LEI YANG et al.: "Human Exercise Posture Analysis based on Pose Estimation", 2021 IEEE 5th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC) *
HU XUEKUI et al.: "Research and Application of a Motion-Assisted Evaluation System Based on Human Posture Recognition", China Excellent Master's and Doctoral Dissertations Full-text Database (Master's), Information Science and Technology Series *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114191804A (en) * 2021-12-08 2022-03-18 上海影谱科技有限公司 Deep-learning-based method and device for judging whether deep-squatting posture is standard or not
CN114191803A (en) * 2021-12-08 2022-03-18 上海影谱科技有限公司 Method and device for judging whether flat plate supporting posture is standard or not based on deep learning
CN115282559A (en) * 2022-06-29 2022-11-04 王凡 Sports physical training apparatus based on big data
CN115129162A (en) * 2022-08-29 2022-09-30 上海英立视电子有限公司 Picture event driving method and system based on human body image change
CN115364472A (en) * 2022-08-29 2022-11-22 上海英立视电子有限公司 Target movement triggering method and system based on human body image comparison
CN117409485A (en) * 2023-12-15 2024-01-16 佛山科学技术学院 Gait recognition method and system based on posture estimation and definite learning
CN117409485B (en) * 2023-12-15 2024-04-30 佛山科学技术学院 Gait recognition method and system based on posture estimation and definite learning

Similar Documents

Publication Publication Date Title
CN113657278A (en) Motion gesture recognition method, device, equipment and storage medium
US12079998B2 (en) Identifying movements and generating prescriptive analytics using movement intelligence
Chaudhari et al. Yog-guru: Real-time yoga pose correction system using deep learning methods
CN108875533B (en) Face recognition method, device, system and computer storage medium
JP2014511530A5 (en)
CN115105056A (en) Method and system for recognizing user action
CN113111767A (en) Fall detection method based on deep learning 3D posture assessment
US11450148B2 (en) Movement monitoring system
CN108304831B (en) Method and device for monitoring wearing of safety helmet of worker
US20220222975A1 (en) Motion recognition method, non-transitory computer-readable recording medium and information processing apparatus
JP2016045884A (en) Pattern recognition device and pattern recognition method
CN110693500B (en) Balance ability exercise evaluation method, device, server and storage medium
WO2021233019A1 (en) Body composition detection method, electronic device and computer-readable storage medium
CN110490165B (en) Dynamic gesture tracking method based on convolutional neural network
CN114202797A (en) Behavior recognition method, behavior recognition device and storage medium
CN116580454A (en) Motion evaluation method and device based on target detection and human body posture estimation
Tanaka et al. Automatic edge error judgment in figure skating using 3d pose estimation from a monocular camera and imus
US20230419730A1 (en) Motion error detection from partial body view
CN115223240A (en) Motion real-time counting method and system based on dynamic time warping algorithm
KR20230043347A (en) Method for providing fitting service using 3D modeling avatar
KR20230043343A (en) System for virtual fitting service based on body size
WO2016135560A2 (en) Range of motion capture
WO2024111430A1 (en) Processing device, processing system, processed model construction method, and program
Peer et al. A computer vision based system for a rehabilitation of a human hand
Ding et al. Implementation of behavior recognition based on machine vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20211116)