CN113707271A - Fitness scheme generation method and system based on artificial intelligence and big data - Google Patents
Fitness scheme generation method and system based on artificial intelligence and big data
- Publication number: CN113707271A
- Application number: CN202111260470.8A
- Authority: CN (China)
- Prior art keywords: fitness, action, body-building, training data, data
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/30—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Abstract
The invention relates to the technical field of intelligent fitness, and in particular to a fitness scheme generation method and system based on artificial intelligence and big data. The method comprises the following steps: training a first neural network with first training data corresponding to fitness personnel and the label data corresponding to that training data; inputting second training data corresponding to fitness personnel into the first neural network to obtain second-class labels corresponding to the second training data and the fitness action visual sensitivities of the various action categories, so that a fitness action view-angle invariance measurement network can be trained with the second training data and the visual sensitivities; and finally, using the trained measurement network to obtain directed graph data corresponding to different fitness purposes, from which the fitness scheme for each purpose is derived. Because the method and system generate a user's fitness scheme from the schemes of fitness personnel in the big data who share the user's fitness purpose, the user's fitness results can be improved.
Description
Technical Field
The invention relates to the technical field of intelligent fitness, in particular to a fitness scheme generation method and system based on artificial intelligence and big data.
Background
With the improvement of living standards and the growing call for fitness, more and more people have begun to pay attention to their physical health and to visit gymnasiums to take part in fitness activities. Fitness activities fall into broad categories, and different fitness methods have different effects on different parts of the body. People who have just entered the fitness field, however, lack fitness experience: they can only exercise blindly and randomly, cannot select a fitness scheme suited to their own fitness purpose, and therefore achieve poor fitness results.
Disclosure of Invention
In order to solve the problem of poor fitness effect of fitness personnel, the invention aims to provide a fitness scheme generation method and a fitness scheme generation system based on artificial intelligence and big data, and the adopted technical scheme is as follows:
in a first aspect, an embodiment of the present invention provides a method for generating a fitness plan based on artificial intelligence and big data, including the following steps:
acquiring gymnasium RGB images of continuous frames within a preset time; obtaining an image sequence corresponding to a first fitness person according to the continuous-frame gymnasium RGB images, and recording it as first training data;
training a first neural network according to first training data and label data corresponding to the first training data, wherein the label data corresponding to the first training data comprises a visual angle category, an action category and a human body key point;
acquiring an image sequence corresponding to a second fitness person and recording the image sequence as second training data, wherein first class label data corresponding to the second training data comprises a visual angle class, an action class and a human body key point; classifying second training data with the same action type into a training set corresponding to the action type, and inputting the second training data in the training set into a trained first neural network to obtain a second type label corresponding to the second training data, wherein the second type label comprises an action classification probability vector and a view classification probability vector output by the first neural network;
obtaining body-building action visual sensitivities corresponding to various action types according to second type labels corresponding to the second training data;
training a body-building action visual angle invariance measurement network according to second training data and a second class label, a first class label and a body-building action visual sensitivity corresponding to the second training data, wherein the body-building action visual angle invariance measurement network is used for classifying body-building actions of people;
classifying the body-building actions of each body-building person in the large database by using the trained body-building action visual angle invariance measurement network to obtain action classification results corresponding to each body-building person; obtaining directed graph data corresponding to different fitness purposes according to the action classification result corresponding to each fitness worker;
and matching the directed graph data corresponding to the fitness purpose according to the fitness purpose of the user to obtain a corresponding fitness scheme.
In a second aspect, another embodiment of the present invention provides an artificial intelligence and big data based exercise program generating system, which includes a memory and a processor, wherein the processor executes a computer program stored in the memory to implement the artificial intelligence and big data based exercise program generating method described above.
Preferably, the obtaining of the directed graph data corresponding to different fitness purposes according to the motion classification result corresponding to each fitness worker includes:
according to the action classification result corresponding to each body-building person, taking each body-building action of the body-building person as a node, and taking the duration time of the body-building action as a signal value of the node corresponding to the body-building action;
when the body-building action of the body-building personnel changes, the nodes are connected according to the changing sequence to obtain directed graph data corresponding to the body-building personnel;
fusing directed graphs corresponding to the same fitness personnel with the fitness purpose to obtain directed graph data corresponding to the fitness purpose;
the signal value of each node in the directed graph data corresponding to the fitness purpose is the average value of the signal values of the same node in the directed graph data corresponding to the same fitness personnel with the same fitness purpose, and the weight of the edge is the probability of the edge appearing in the directed graph data corresponding to the same fitness personnel with the same fitness purpose.
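The graph construction and fusion described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function names and the data layout (a per-person, time-ordered log of (action, duration) pairs) are assumptions.

```python
from collections import defaultdict

def person_digraph(action_log):
    """Build one exerciser's directed graph from a time-ordered log of
    (action, duration) pairs: nodes carry duration signal values, and
    edges follow the order in which the exercise action changes."""
    nodes = defaultdict(float)
    edges = set()
    prev = None
    for action, duration in action_log:
        nodes[action] += duration
        if prev is not None and prev != action:
            edges.add((prev, action))
        prev = action
    return dict(nodes), edges

def fuse_digraphs(graphs):
    """Fuse the digraphs of all exercisers sharing one fitness purpose:
    node signal = mean duration over the graphs containing that node,
    edge weight = probability that the edge appears across the graphs."""
    node_sums, node_counts = defaultdict(float), defaultdict(int)
    edge_counts = defaultdict(int)
    for nodes, edges in graphs:
        for action, duration in nodes.items():
            node_sums[action] += duration
            node_counts[action] += 1
        for edge in edges:
            edge_counts[edge] += 1
    n = len(graphs)
    fused_nodes = {a: node_sums[a] / node_counts[a] for a in node_sums}
    fused_edges = {e: c / n for e, c in edge_counts.items()}
    return fused_nodes, fused_edges
```

The fused edge weight is the fraction of same-purpose exercisers whose graph contains the edge, matching the probability described above.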
Preferably, the matching, according to the fitness objective of the user, the directed graph data corresponding to the fitness objective to obtain a corresponding fitness scheme includes:
determining an initial node of the fitness scheme according to the probability that each node occurs first in the directed graph data corresponding to the fitness purpose;
taking the initial node as a starting point and walking along the edges with the maximum weight to obtain a plurality of walking paths;
and selecting the walking path with the maximum sum of edge weights as the optimal walking path, and taking the optimal walking path as the fitness scheme corresponding to the fitness purpose.
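As a rough illustration of the matching step, the sketch below performs a single greedy walk along maximum-weight edges. This is a simplification of the embodiment, which enumerates several walking paths and keeps the one with the largest weight sum; all names and the `max_len` cutoff are hypothetical.

```python
def generate_plan(start_probs, edge_weights, max_len=10):
    """Greedy walk sketch: start at the node most likely to occur first,
    then repeatedly follow the outgoing edge with the largest weight,
    stopping when no unvisited successor remains."""
    node = max(start_probs, key=start_probs.get)
    plan = [node]
    visited = {node}
    for _ in range(max_len - 1):
        candidates = {e: w for e, w in edge_weights.items()
                      if e[0] == node and e[1] not in visited}
        if not candidates:
            break
        node = max(candidates, key=candidates.get)[1]
        plan.append(node)
        visited.add(node)
    return plan
```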
Preferably, the fitness action visual sensitivity is calculated by the following formula:

$$S_k = \frac{1}{|Q_k|} \sum_{(i,j)\in Q_k} \mathrm{XOR}\!\big(\arg\max a_i,\ \arg\max a_j\big)$$

wherein $S_k$ is the fitness action visual sensitivity of the $k$-th action category; $a_i$ is the action classification probability vector of the $i$-th training data; $v_i$ is the view classification probability vector of the $i$-th training data; $Q_k$ is the set of training data pairs with different view angles in the training set corresponding to the $k$-th action category; and $|Q_k|$ is the number of training data pairs in the set.
Preferably, the view angle loss function adopted by the fitness action view-angle invariance measurement network is:

$$L_{view} = \max\big(0,\ d(x, x^{+}) - d(x, x^{-}) + \alpha\big)$$

wherein $L_{view}$ is the view angle loss function value; $x$ is the second training data of an arbitrary action; $x^{+}$ is a positive sample with the same action category as $x$; $x^{-}$ is a negative sample with a different action category from $x$; $\alpha$ is the discrimination threshold; $d(x, x^{+})$ is the L2 distance between $x$ and $x^{+}$; and $d(x, x^{-})$ is the L2 distance between $x$ and $x^{-}$.
Preferably, the classification loss function adopted by the fitness action view-angle invariance measurement network is:

$$L_{cls} = \mathrm{CE}(\hat{a}, a) + \mathrm{KL}(\tilde{a}\,\|\,\hat{a}) + S_k\big[\mathrm{CE}(\hat{v}, v) + \mathrm{KL}(\tilde{v}\,\|\,\hat{v})\big]$$

wherein $L_{cls}$ is the classification loss function value; $\hat{v}$ is the actual view category output by the fitness action view-angle invariance measurement network; $v$ is the view category in the original label corresponding to the second training data; $\tilde{v}$ is the view classification probability vector in the second-class label corresponding to the second training data; $\hat{a}$ is the actual action category output by the fitness action view-angle invariance measurement network; $a$ is the action category in the original label corresponding to the second training data; $\tilde{a}$ is the action classification probability vector in the second-class label corresponding to the second training data; $\mathrm{KL}(\cdot\,\|\,\cdot)$ is the KL divergence; $\mathrm{CE}(\cdot,\cdot)$ is the cross-entropy loss function; and $S_k$ is the fitness action visual sensitivity of the $k$-th action category.
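One consistent reading of this loss — hard-label cross-entropies plus teacher-distilled KL terms, with the view terms scaled by the action category's visual sensitivity — can be sketched as follows. The exact grouping of terms is an assumption, and all function names are illustrative.

```python
import math

def cross_entropy(pred, true_index):
    # CE between a predicted distribution and a one-hot ground-truth class
    return -math.log(pred[true_index])

def kl_divergence(p, q):
    # KL(p || q) between two discrete probability distributions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def classification_loss(a_hat, a_idx, a_soft, v_hat, v_idx, v_soft, s_k):
    """Sketch of the classification loss: hard-label cross-entropies plus
    soft-label KL terms distilled from the teacher network, with the view
    part weighted by the visual sensitivity s_k (an assumed composition)."""
    action_term = cross_entropy(a_hat, a_idx) + kl_divergence(a_soft, a_hat)
    view_term = cross_entropy(v_hat, v_idx) + kl_divergence(v_soft, v_hat)
    return action_term + s_k * view_term
```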
Preferably, the method for obtaining the image sequence corresponding to the first fitness person comprises:
performing target detection on the continuous-frame gymnasium RGB images to obtain the bounding box corresponding to the first fitness person in each image;
and cutting the continuous-frame gymnasium RGB images according to the bounding box corresponding to each person in each image to obtain the continuous-frame gymnasium RGB image sequence.
The embodiment of the invention has the following beneficial effects:
the method trains the body-building action visual angle invariance measurement network by acquiring the training data of the first body-building person and the second body-building person, and the trained body-building action visual angle invariance measurement network can be used for classifying the body-building action of the body-building persons; classifying the body-building actions of each body-building person in the large database by using the trained body-building action visual angle invariance measurement network, so as to obtain action classification results corresponding to each body-building person; the directed graph data corresponding to different fitness purposes can be obtained by combining the fitness purposes of each fitness worker; therefore, the directed graph data suitable for the user can be matched according to the fitness purpose of the user, and the fitness scheme suitable for the user is further obtained. The invention generates the body-building scheme for the user according to the body-building scheme of the body-building personnel with the same body-building purpose as the user in the big data, and can improve the body-building effect of the user.
Drawings
In order to more clearly illustrate the embodiments of the present invention and the technical solutions of the prior art, the drawings needed in the description of the embodiments and the prior art are briefly introduced below. Obviously, the drawings described below cover only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for generating a fitness plan based on artificial intelligence and big data according to an embodiment of the present invention.
Detailed Description
In order to further explain the technical means and functional effects of the present invention adopted to achieve the predetermined invention purpose, the following describes in detail a method and system for generating a fitness plan based on artificial intelligence and big data according to the present invention with reference to the accompanying drawings and preferred embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following describes a specific scheme of a fitness scheme generation method and system based on artificial intelligence and big data in detail with reference to the accompanying drawings.
The embodiment of the body-building scheme generation method based on artificial intelligence and big data comprises the following steps:
as shown in fig. 1, the method for generating a fitness program based on artificial intelligence and big data of the present embodiment includes the following steps:
step S1, acquiring gymnasium RGB images of continuous frames within preset time; and obtaining an image sequence corresponding to the first fitness personnel according to the gymnasium RGB images of the continuous frames, and recording first training data.
In order to obtain images required in the subsequent network training, in the embodiment, the camera deployed in the gymnasium is used for acquiring the RGB images of the gymnasium, wherein the RGB images of the gymnasium comprise the gymnastic processes of each gymnasium; the specific acquisition process is as follows:
the method comprises the steps of collecting continuous frames of the RGB images of the gymnasium through a camera in the gymnasium, inputting each collected frame of RGB image into a target detection network to obtain a surrounding frame of each gymnasium person in each frame of image, and cutting the RGB images of the gymnasium of the corresponding frame by using the surrounding frame of each gymnasium person to obtain the RGB images of each gymnasium person in the current frame of image. And processing the acquired RGB images of each gymnasium frame according to the same method to obtain the RGB images of each gymnasium person in the RGB images of all the gymnasiums.
Since an exerciser's position does not change greatly over a short time during gymnasium exercise, this embodiment determines the bounding boxes belonging to the same person based on the IOU (intersection over union) of the exercisers' bounding boxes in adjacent frame images, and obtains the continuous-frame image sequence of each exerciser's fitness actions from this matching process. The size of the image sequence is $H \times W \times T$, wherein $H$ and $W$ are the height and width of each exerciser's RGB images, and $T$ is the time length of the image sequence. Because a single fitness action has a short duration, $T$ is set to a small value to ensure that the actions in one image sequence belong to the same fitness action type; its value can be set as required in practice.
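The IOU-based frame-to-frame association can be sketched as below. The greedy one-to-one matching strategy and the 0.5 threshold are assumptions for illustration; the patent only specifies that boxes of the same person are linked by their IOU across adjacent frames.

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def match_boxes(prev_boxes, curr_boxes, threshold=0.5):
    """Greedy adjacent-frame association: each previous-frame box is linked
    to the unused current-frame box with the highest IOU above the
    threshold, chaining detections of the same exerciser into a sequence."""
    matches = {}
    used = set()
    for i, pb in enumerate(prev_boxes):
        best_j, best_score = None, threshold
        for j, cb in enumerate(curr_boxes):
            if j in used:
                continue
            score = iou(pb, cb)
            if score > best_score:
                best_j, best_score = j, score
        if best_j is not None:
            matches[i] = best_j
            used.add(best_j)
    return matches
```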
In this embodiment, a gymnastic person identified based on the RGB images of the gymnasium is recorded as a first gymnastic person, and an image sequence corresponding to the first gymnastic person is recorded as first training data; the first exerciser is not one exerciser, but includes a plurality of exercisers.
Step S2, training a first neural network according to first training data and label data corresponding to the first training data, wherein the label data corresponding to the first training data comprises a visual angle category, an action category and a human body key point.
In order to classify each exerciser's fitness actions, this embodiment constructs a fitness action view-angle invariance measurement network. Because training such a network directly would require a large number of parameters, this embodiment first constructs a first neural network, which serves as a teacher network providing dark knowledge to supervise the training of the measurement network. The resulting measurement network therefore has a small parameter count and high precision, can be embedded into a camera, and improves the efficiency of calculation and data acquisition.
In this embodiment, step S2 is divided into the following two sub-steps:
and step S2-1, label data corresponding to the first training data is obtained.
In this embodiment, each piece of first training data is used to train the first neural network, a fully trained, high-precision network whose large parameter count prevents it from being embedded in mobile devices such as cameras or mobile phones. Before training the first neural network, this embodiment manually labels the obtained first training data; the label data are the view category label, the action category label, and the label information for the 18 human key points used by the CPN network. The 18 human key points are explained in the prior art and are not described in detail here.
In this embodiment, the view category of each piece of first training data is the view category of the first frame image in its image sequence. The view categories are divided into 8 classes: front view, back view, left view, right view, left front view, right front view, left back view and right back view. The action categories of the first training data cover all common fitness actions (such as sit-ups and push-ups). Each piece of first training data corresponds to one action category label and one view category label.
And step S2-2, training the first neural network according to the first training data and the label data corresponding to the first training data.
In this embodiment, each piece of first training data and its corresponding label data are used to train the first neural network, which consists of two branches:
The first branch obtains the action category corresponding to the image sequence and has an encoder–decoder–classifier structure. The first training data is input to the encoder $E_1$ for feature extraction, the extracted features are sent to the decoder $D_1$ to obtain human skeleton information, and the skeleton information is sent to the classifier $C_1$ to obtain the action category. In this embodiment, this branch may employ a CPN network.
The second branch obtains the action view category corresponding to the image sequence and has an encoder–classifier structure; the two branches are executed in parallel. The features extracted by $E_1$ are sent to the encoder $E_2$, which further extracts features from them, and the further extracted features are sent to the classifier $C_2$ to obtain the action view category.
The loss function of the first neural network is:

$$L_T = \mathrm{CE}_{act} + \mathrm{CE}_{view}$$

wherein $L_T$ is the loss function value of the first neural network, $\mathrm{CE}_{act}$ is the cross-entropy loss function of the action classifier in the first branch, and $\mathrm{CE}_{view}$ is the cross-entropy loss function of the view classifier in the second branch; both terms supervise the classification accuracy of the first neural network. This embodiment continuously updates the network parameters by the gradient descent method.
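The teacher-network loss — the sum of the action-classifier and view-classifier cross-entropies on one sample — can be sketched minimally as follows (function name and probability-vector inputs are assumptions):

```python
import math

def teacher_loss(action_probs, action_label, view_probs, view_label):
    """Sketch of the first (teacher) network's loss: the sum of the
    action-branch and view-branch cross-entropies for one sample."""
    return -math.log(action_probs[action_label]) - math.log(view_probs[view_label])
```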
Step S3, acquiring an image sequence corresponding to a second fitness person and recording the image sequence as second training data, wherein first class label data corresponding to the second training data comprise a visual angle class, an action class and a human body key point; and classifying second training data with the same action type into a training set corresponding to the action type, and inputting the second training data in the training set into a trained first neural network to obtain a second type label corresponding to the second training data, wherein the second type label comprises an action classification probability vector and a view classification probability vector output by the first neural network.
After the trained first neural network is obtained, the embodiment obtains a second category label for second training data corresponding to a second fitness person by using the trained first neural network, specifically:
first, second training data corresponding to a second fitness person is obtained through a method similar to that in step S1, where the second fitness person in this embodiment is a plurality of fitness persons different from the first fitness person, the first type of label data corresponding to the second training data includes a visual angle type, an action type, and a human body key point, and the first type of label data is obtained in the same manner as the label data of the first training data.
Then, classifying the second training data, wherein the purpose of classification is to calculate the perspective sensitivity of each action category, and specifically: and classifying the second training data with the same action type into a set according to the action type label corresponding to each second training data, namely a training set corresponding to one action type.
And finally, sending second training data in the training set corresponding to each action into the first neural network to obtain a second class label corresponding to each second training data, wherein the second class label comprises: the motion classification probability vector is the probability that the corresponding second training data is of each motion category, and the sum of all numerical values in the vector is 1; and (3) a visual angle classification probability vector, namely the probability that the corresponding second training data is of each visual angle class, wherein the sum of numerical values in the vector is 1.
Step S4, obtaining the fitness movement visual sensitivities corresponding to the various movement types according to the second type labels corresponding to the second training data.
In the embodiment, the actions performed by the fitness personnel in the fitness exercise are often bilaterally symmetrical, such as left-side bow-step stretching and right-side bow-step stretching, and the like, and when the action types are judged through pictures, the change of the visual angle has a great influence on the accurate identification of the actions, namely the actions are sensitive to the change of the visual angle; however, for some body-building actions such as push-up, the change of the viewing angle has little influence on the judgment of the action type, so in order to ensure the accuracy of the judgment of the action type, the embodiment acquires the body-building action viewing angle sensitivities corresponding to different body-building actions.
Specifically, the classification results of each second training data in the training set corresponding to the $k$-th action category are obtained. Any two training data with different view angles in that training set form a training data pair, and the training data pairs form the set $Q_k$. In this embodiment, an exclusive-or (XOR) operation is performed on the predicted action categories (the argmax of the action classification probability vectors) of the two second training data of each pair in $Q_k$. The more pairs whose XOR result is 1, the greater the fitness action visual sensitivity of the action category; the fewer such pairs, the smaller the sensitivity. The fitness action visual sensitivity therefore reflects the probability that the same action category is classified differently at different view angles, and a larger value indicates that the current action category is more sensitive to view-angle changes. The fitness action visual sensitivity of the $k$-th action category is calculated as:

$$S_k = \frac{1}{|Q_k|} \sum_{(i,j)\in Q_k} \mathrm{XOR}\!\big(\arg\max a_i,\ \arg\max a_j\big)$$

wherein $S_k$ is the fitness action visual sensitivity of the $k$-th action category; $a_i$ and $a_j$ are the action classification probability vectors of the $i$-th and $j$-th training data, whose view classification probability vectors $v_i$ and $v_j$ indicate different view angles; $Q_k$ is the set of training data pairs with different view angles in the training set corresponding to the $k$-th action category; $|Q_k|$ is the number of training data pairs in the set; and $\mathrm{XOR}(\cdot,\cdot)$ equals 1 when the two predicted action categories differ and 0 otherwise. The $i$-th and $j$-th training data form a training data pair in $Q_k$ and correspond to the same action category.
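The sensitivity computation can be sketched as the fraction of different-view pairs whose predicted action classes disagree. The function name and the list-of-pairs input format are illustrative assumptions:

```python
def view_sensitivity(pairs):
    """Visual-sensitivity sketch for one action category: the fraction of
    different-view training pairs whose predicted action classes disagree
    (an XOR over the argmax of the two action probability vectors)."""
    if not pairs:
        return 0.0
    disagreements = 0
    for probs_i, probs_j in pairs:
        pred_i = max(range(len(probs_i)), key=probs_i.__getitem__)
        pred_j = max(range(len(probs_j)), key=probs_j.__getitem__)
        disagreements += int(pred_i != pred_j)
    return disagreements / len(pairs)
```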
And step S5, training a body-building action visual angle invariance measurement network according to the second training data and the second class label, the first class label and the body-building action visual sensitivity corresponding to the second training data, wherein the body-building action visual angle invariance measurement network is used for classifying the body-building action of the body-building personnel.
In this embodiment, step S3 uses the action classification probability vector and the perspective class probability vector of each second training data as the second class label of the corresponding training data. Compared with the first class label, the second class label tends to be smooth, the distribution entropy is larger, and the larger the distribution entropy is, the more the similarity between different action classes can be reflected, so that more supervision information is provided.
In this embodiment, the second training data and the second category label, the first category label and the visual sensitivity of the exercise movement corresponding to the second training data are used to apply to the exercise movementAnd training the view angle invariance measurement network. Wherein the network for measuring the invariance of the angle of view of the body-building action isIn the structure, because the second class label obtained by the first neural network contains more supervision information, the encoder in the body-building action view angle invariance measurement networkAnd an encoderOnly a small number of parameters are needed to meet the classification requirement.
In the training process, each second training data is firstly input into the encoderThe image sequence in (1) is subjected to feature extraction, and the obtained features are input into a classifierObtaining a visual angle classification result, and adding the visual angle classification result toIs characterized by the output ofFeeding into the encoder after operationPerforming further feature extraction to obtain the final productAnd finally will beInput to a classifierTo obtain the final action classification result.
In this embodiment, every B items of second training data form a batch, and one distance-metric matrix is constructed for each batch: during training, each item of second training data yields a corresponding feature vector; the L2 distance between the feature vectors of every pair of training data in the batch is computed, and the resulting distance values are written into a B×B matrix, denoted the distance-metric matrix. The distance-metric matrix is used to construct the loss function of the fitness-action view-invariance measurement network.
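The pairwise-distance computation for one batch can be sketched as follows; the feature vectors and batch size are made-up, and the encoders that would produce them are not modeled here:

```python
import math

def l2(u, v):
    """L2 (Euclidean) distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def distance_matrix(features):
    """B x B matrix of pairwise L2 distances for one batch of features."""
    n = len(features)
    return [[l2(features[i], features[j]) for j in range(n)] for i in range(n)]

batch = [[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]]
D = distance_matrix(batch)
print(D[0][1])  # 5.0
```

The matrix is symmetric with a zero diagonal, so in practice only the upper triangle needs computing; the triplet view loss below reads its entries directly.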
In this embodiment, the loss function of the fitness-action view-invariance measurement network consists of two parts, a view loss function and a classification loss function, as follows:
First, a view loss function is set to reduce the feature distance between image sequences of the same fitness-action category under different views, while enlarging the feature distance between image sequences of different action categories, thereby enhancing the distinguishability of the features. Specifically, in this embodiment a number of triplets are constructed according to the fitness-action category label of each item of second training data. Each triplet contains an item of second training data of an arbitrary action, recorded as the anchor sample $x_a$; a positive sample $x_p$ of the same action category as $x_a$; and a negative sample $x_n$ of a different action category. The view loss is calculated as:
$$L_{view} = \max\left(0,\; d(x_a, x_p) - d(x_a, x_n) + \alpha\right)$$

wherein $L_{view}$ is the view loss value, $x_a$ is the second training data of an arbitrary action, $x_p$ is a positive sample of the same action category as $x_a$, $x_n$ is a negative sample of a different action category, $\alpha$ is the discrimination threshold, $d(x_a, x_p)$ is the L2 distance between $x_a$ and $x_p$, and $d(x_a, x_n)$ is the L2 distance between $x_a$ and $x_n$. The discrimination threshold $\alpha$ is a hyper-parameter used to separate image sequences of different action categories; its size depends on the action category of the samples and on the view relationship between the samples. Whether or not the views are the same, it is desirable to make $d(x_a, x_n)$ as large as possible, so that the discrimination between different categories stays large.
In this embodiment, the value of the view-relation index $e$ is determined as follows: from the labels corresponding to $x_a$, $x_p$ and $x_n$, the view category labels of the triplet are obtained; $e$ takes one value (for example 1) when the view category labels of the triplet are consistent, and another value (for example 0) when they are inconsistent. The discrimination threshold $\alpha$ is then calculated as:
$$\alpha = \alpha_0 \cdot s_i + e \cdot \lambda$$

wherein $\alpha_0$ is the base discrimination degree, and $s_i$ is the fitness-action visual sensitivity of the $i$-th action category: the larger $s_i$ is, the greater the effect of the $i$-th action on view change, i.e. the feature distance under different views also becomes larger, so the discrimination threshold is made larger to guarantee the classification effect of the $i$-th action category against the other actions. $\lambda$ is the adjustment value of the discrimination threshold for the view relationship.
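The view loss with its adaptive threshold can be sketched as follows. The exact combination of base value, sensitivity and view-relation adjustment is an assumption, since the patent's formula images are not reproduced in the text; the hinge form of the triplet loss follows the description of anchor, positive and negative samples:

```python
def margin(base, sensitivity, views_consistent, adjust):
    """Discrimination threshold: larger for view-sensitive actions. The
    view-relation term is an assumption -- it adds `adjust` when the
    triplet's view categories agree."""
    e = 1.0 if views_consistent else 0.0
    return base * sensitivity + e * adjust

def view_loss(d_ap, d_an, alpha):
    """Triplet (hinge) loss on precomputed anchor-positive and
    anchor-negative L2 distances."""
    return max(0.0, d_ap - d_an + alpha)

alpha = margin(base=0.5, sensitivity=1.2, views_consistent=True, adjust=0.1)
print(view_loss(d_ap=0.4, d_an=1.5, alpha=alpha))  # 0.0: already separated
print(view_loss(d_ap=0.9, d_an=1.0, alpha=alpha))  # positive: pushes apart
```

When the negative is already farther than the positive by more than the threshold, the loss is zero and the triplet contributes no gradient.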
The view loss function therefore ensures the distinguishability between fitness-action category features, and also ensures the distinguishability of the extracted features under different views.
Second, a classification loss function is set to ensure classification accuracy. It uses the first class label and the second class label corresponding to the second training data to jointly supervise the fitness-action view-invariance measurement network, so that the difference between the network output and the two labels is minimized and a better network is obtained. The classification loss is calculated as:
$$L_{cls} = CE(\hat{p}_v, v) + KL(q_v \,\|\, \hat{p}_v) + s_i \cdot \left( CE(\hat{p}_a, a) + KL(q_a \,\|\, \hat{p}_a) \right)$$

wherein $L_{cls}$ is the classification loss value; $\hat{p}_v$ is the view classification actually output by the fitness-action view-invariance measurement network; $v$ is the view category in the original label corresponding to the second training data; $q_v$ is the view-classification probability vector in the second class label corresponding to the second training data; $\hat{p}_a$ is the action classification actually output by the network; $a$ is the action category in the original label corresponding to the second training data; $q_a$ is the action-classification probability vector in the second class label; $KL(\cdot\|\cdot)$ is the KL divergence; $CE(\cdot,\cdot)$ is the cross-entropy loss function; and $s_i$ is the fitness-action visual sensitivity of the $i$-th action category. The larger $s_i$ is, the more attention is assigned to the corresponding fitness action, thereby reducing the influence of view change on the judgment of the action category.
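A sketch of this joint supervision, under the assumptions that the cross-entropy terms compare against the hard first class labels, the KL terms compare against the soft second class labels, and the visual sensitivity weights the action term (the patent's exact weighting is not reproduced in the text):

```python
import math

def cross_entropy(pred, hard_index):
    """CE against a hard (first class) label given as a class index."""
    return -math.log(max(pred[hard_index], 1e-12))

def kl_div(target, pred):
    """KL(target || pred) against the soft (second class) label."""
    return sum(t * math.log(t / max(p, 1e-12)) for t, p in zip(target, pred) if t > 0)

def classification_loss(view_pred, view_hard, view_soft,
                        act_pred, act_hard, act_soft, sensitivity):
    view_term = cross_entropy(view_pred, view_hard) + kl_div(view_soft, view_pred)
    act_term = cross_entropy(act_pred, act_hard) + kl_div(act_soft, act_pred)
    # Higher visual sensitivity -> more weight on the action term (assumption).
    return view_term + sensitivity * act_term

loss = classification_loss(
    view_pred=[0.8, 0.2], view_hard=0, view_soft=[0.9, 0.1],
    act_pred=[0.6, 0.3, 0.1], act_hard=0, act_soft=[0.7, 0.2, 0.1],
    sensitivity=1.5)
print(loss > 0)  # True: the predictions do not match the labels exactly
```

With a perfect prediction both the CE and KL terms vanish, so the loss correctly bottoms out at zero.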
In this embodiment, the final loss function of the fitness-action view-invariance measurement network is $L = L_{view} + L_{cls}$, and the parameters of the network are continuously and iteratively updated by gradient descent. Because of its small number of parameters and high computational efficiency, the trained fitness-action view-invariance measurement network can be embedded in devices such as cameras.
Step S6: classify the fitness actions of each fitness person in the big database with the trained fitness-action view-invariance measurement network to obtain the action-classification results corresponding to each fitness person, and obtain the directed-graph data corresponding to the different fitness purposes from those action-classification results.
In order to generate, for users with different fitness purposes, a fitness scheme that achieves the corresponding purpose, this embodiment uses the historical data in the big database: the RGB image sequences of each fitness person in each gym, collected in real time during exercise, are fed into the trained fitness-action view-invariance measurement network to obtain accurate action-classification results for the image sequences. The embodiment determines the identity of each fitness person by face-recognition technology and thereby obtains each person's fitness purpose, such as fat reduction or muscle gain. To make the generated fitness scheme more reliable, only the RGB image sequences of professional or experienced fitness persons, selected according to the identity information, are used as the reference data for obtaining the directed-graph data corresponding to the different fitness purposes.
In this embodiment, the directed-graph data corresponding to each fitness person is obtained from that person's action-classification results. The process of generating the directed-graph data for one fitness person is as follows:
In this embodiment, each fitness action of the fitness person is represented by a node, and the duration of the action is used as the node's signal value. Whenever the fitness action of the target fitness person changes, the nodes before and after the change are connected in the order of the change, until the exercise of the target fitness person ends. The directed-graph data corresponding to each fitness person is obtained in this way.
In this embodiment, a large amount of directed-graph data corresponding to fitness persons is collected, and the directed graphs with the same fitness purpose are fused; during fusion, the mean of the signal values of the same node across the graphs is taken as the signal value of the fused node. The edge weights of the fused directed graph are updated according to how often each directed edge occurs in the directed-graph data with the same fitness purpose. Specifically, let $N$ be the number of directed graphs for a given fitness purpose, and let $n_{ij}$ be the number of those graphs that contain an edge from node $v_i$ to node $v_j$; then the weight of edge $e_{ij}$ is $n_{ij}/N$, i.e. the probability that the directed edge appears in a directed graph. Likewise, the probability of each node appearing first in the directed graph of the fitness purpose is obtained from how often that node appears first in the individual directed-graph data.
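The fusion step can be sketched as follows. The per-person graphs are represented here simply as an ordered action sequence plus per-action durations, which is an assumed encoding; edge weights count the fraction of graphs containing each edge, matching the $n_{ij}/N$ rule above:

```python
from collections import Counter, defaultdict

def fuse_graphs(graphs):
    """graphs: list of per-person digraphs, each a dict with
       'sequence'  -- ordered list of action names, and
       'durations' -- action name -> signal value (duration).
       Returns fused node signals, edge probabilities, first-node probs."""
    n = len(graphs)
    signals, edge_count, first = defaultdict(list), Counter(), Counter()
    for g in graphs:
        seq = g["sequence"]
        first[seq[0]] += 1
        for action, dur in g["durations"].items():
            signals[action].append(dur)
        for edge in set(zip(seq, seq[1:])):   # count each edge once per graph
            edge_count[edge] += 1
    node_signal = {a: sum(v) / len(v) for a, v in signals.items()}
    edge_prob = {e: c / n for e, c in edge_count.items()}
    first_prob = {a: c / n for a, c in first.items()}
    return node_signal, edge_prob, first_prob

graphs = [
    {"sequence": ["warmup", "squat", "plank"],
     "durations": {"warmup": 5, "squat": 20, "plank": 10}},
    {"sequence": ["warmup", "squat"],
     "durations": {"warmup": 7, "squat": 18}},
]
node_signal, edge_prob, first_prob = fuse_graphs(graphs)
print(edge_prob[("warmup", "squat")])  # 1.0 -- appears in both graphs
print(node_signal["warmup"])           # 6.0 -- mean of 5 and 7
```

Counting each edge at most once per graph keeps the weight interpretable as "the probability that a fitness person with this purpose performs this transition".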
Corresponding directed-graph data is generated for the different fitness purposes according to the above method; the information contained in each node of the directed-graph data includes the probability of appearing first and the signal value. The fitness purposes in this embodiment can be, for example, slimming the legs, training the abdomen, or improving cardiopulmonary capacity; the specific categories can be set according to the actual situation.
Step S7: according to the user's fitness purpose, match the directed-graph data corresponding to that purpose to obtain a corresponding fitness scheme.
In this embodiment, step S6 produces the directed-graph data corresponding to the different fitness purposes, so in practical applications the directed-graph data matching the user's fitness purpose is retrieved, and a corresponding fitness scheme is then generated for the user from that data. Here the user may be a fitness person with insufficient exercise experience, or one who wants to set a new fitness purpose. The method is specifically as follows:
First, the initial node of the fitness scheme is determined according to the probability of each node appearing first in the directed-graph data of the fitness purpose, which gives the initial fitness action. Then, starting from the initial node, a walk is performed that always follows the outgoing edge with the maximum weight. Because each node may have several outgoing edges, and several of them may share the maximum weight, multiple walking paths may be obtained; the path whose edge weights sum to the maximum is selected as the optimal walking path. The optimal walking path is the final fitness scheme corresponding to the fitness purpose, and the signal value of each node on the path is the recommended exercise duration of the corresponding fitness action.
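A minimal sketch of this matching step; ties between equal-weight edges and the enumeration of multiple candidate paths are simplified here to a single greedy walk that avoids revisiting nodes, and the graph values are made-up examples:

```python
def generate_scheme(first_prob, edge_prob, node_signal, max_len=10):
    """Greedy walk: start at the node most likely to appear first, then
    repeatedly follow the outgoing edge with the maximum weight.
    Returns [(action, recommended_duration), ...]."""
    node = max(first_prob, key=first_prob.get)
    path, visited = [node], {node}
    while len(path) < max_len:
        out = {b: w for (a, b), w in edge_prob.items()
               if a == node and b not in visited}   # avoid cycles
        if not out:
            break
        node = max(out, key=out.get)
        visited.add(node)
        path.append(node)
    return [(a, node_signal[a]) for a in path]

scheme = generate_scheme(
    first_prob={"warmup": 0.9, "squat": 0.1},
    edge_prob={("warmup", "squat"): 0.8, ("warmup", "plank"): 0.2,
               ("squat", "plank"): 0.6},
    node_signal={"warmup": 6.0, "squat": 19.0, "plank": 10.0})
print(scheme)  # [('warmup', 6.0), ('squat', 19.0), ('plank', 10.0)]
```

To implement the full rule in the description, one would enumerate every maximal-weight branch at each tie and keep the path with the greatest total edge weight, rather than stopping at the first greedy choice.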
In this embodiment, the fitness-action view-invariance measurement network is trained with the training data of the first and second fitness persons, and the trained network can classify the fitness actions of fitness persons. The fitness actions of each fitness person in the big database are classified with the trained network, yielding the action-classification results for each fitness person; combined with each person's fitness purpose, the directed-graph data corresponding to the different fitness purposes is obtained. The directed-graph data suited to the user can therefore be matched according to the user's fitness purpose, and a fitness scheme suited to the user is obtained. Since most of the fitness persons in the big database are experienced or professional, generating the user's fitness scheme from the schemes of fitness persons who share the user's fitness purpose can improve the user's fitness results.
The embodiment of the fitness scheme generation system based on artificial intelligence and big data comprises the following steps:
The fitness scheme generation system based on artificial intelligence and big data comprises a memory and a processor; the processor executes a computer program stored in the memory to implement the fitness scheme generation method based on artificial intelligence and big data described above.
Because the fitness scheme generation method based on artificial intelligence and big data has already been described in the method embodiment above, it is not described again in this embodiment.
It should be noted that: the above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (8)
1. A fitness scheme generation method based on artificial intelligence and big data is characterized by comprising the following steps:
acquiring gymnasium RGB images of continuous frames within a preset time; obtaining an image sequence corresponding to a first fitness person from the continuous-frame gymnasium RGB images, and recording it as first training data;
training a first neural network according to first training data and label data corresponding to the first training data, wherein the label data corresponding to the first training data comprises a visual angle category, an action category and a human body key point;
acquiring an image sequence corresponding to a second fitness person and recording the image sequence as second training data, wherein first class label data corresponding to the second training data comprises a visual angle class, an action class and a human body key point; classifying second training data with the same action type into a training set corresponding to the action type, and inputting the second training data in the training set into a trained first neural network to obtain a second type label corresponding to the second training data, wherein the second type label comprises an action classification probability vector and a view classification probability vector output by the first neural network;
obtaining body-building action visual sensitivities corresponding to various action types according to second type labels corresponding to the second training data;
training a body-building action visual angle invariance measurement network according to second training data and a second class label, a first class label and a body-building action visual sensitivity corresponding to the second training data, wherein the body-building action visual angle invariance measurement network is used for classifying body-building actions of people;
classifying the body-building actions of each body-building person in the large database by using the trained body-building action visual angle invariance measurement network to obtain action classification results corresponding to each body-building person; obtaining directed graph data corresponding to different fitness purposes according to the action classification result corresponding to each fitness worker;
and matching the directed graph data corresponding to the fitness purpose according to the fitness purpose of the user to obtain a corresponding fitness scheme.
2. The method for generating a fitness scheme based on artificial intelligence and big data according to claim 1, wherein the obtaining of the directed graph data corresponding to different fitness objectives according to the action classification result corresponding to each fitness worker comprises:
according to the action classification result corresponding to each body-building person, taking each body-building action of the body-building person as a node, and taking the duration time of the body-building action as a signal value of the node corresponding to the body-building action;
when the body-building action of the body-building personnel changes, the nodes are connected according to the changing sequence to obtain directed graph data corresponding to the body-building personnel;
fusing the directed graphs of fitness persons with the same fitness purpose to obtain the directed-graph data corresponding to that fitness purpose;
wherein the signal value of each node in the directed-graph data corresponding to the fitness purpose is the mean of the signal values of that node across the directed-graph data of fitness persons with the same fitness purpose, and the weight of an edge is the probability that the edge appears in the directed-graph data of fitness persons with the same fitness purpose.
3. The method for generating a fitness scheme based on artificial intelligence and big data according to claim 2, wherein the step of matching the directed graph data corresponding to the fitness purpose according to the fitness purpose of the user to obtain the corresponding fitness scheme comprises the following steps:
determining an initial node of a fitness scheme according to the probability of the first occurrence of each node in the directed graph data corresponding to the fitness purpose;
taking the initial node as a starting point, and performing wandering along the edge with the maximum weight to obtain a plurality of wandering paths;
and selecting the walking path with the maximum sum of the side weights as an optimal walking path, and taking the optimal walking path as a corresponding fitness scheme of the fitness objective.
4. The artificial intelligence and big data based fitness scheme generating method of claim 1, wherein the visual sensitivity of the fitness activity is calculated by the formula:
$$s_i = \frac{1}{|\Omega_i|} \sum_{(m,n)\in\Omega_i} \left\| p_m - p_n \right\|_2$$

wherein $s_i$ is the fitness-action visual sensitivity of the $i$-th action category, $p_m$ and $p_n$ are the action-classification probability vectors of the $m$-th and $n$-th training data, $\Omega_i$ is the set of training-data pairs with different view categories (as given by their view-classification probability vectors) in the training set corresponding to the $i$-th action category, and $|\Omega_i|$ is the number of training-data pairs in the set.
5. The artificial intelligence and big data based fitness scheme generating method of claim 1, wherein the fitness action perspective invariance measure network employs a perspective loss function:
$$L_{view} = \max\left(0,\; d(x_a, x_p) - d(x_a, x_n) + \alpha\right)$$

wherein $L_{view}$ is the view loss value, $x_a$ is the second training data of an arbitrary action, $x_p$ is a positive sample of the same action category as $x_a$, $x_n$ is a negative sample of a different action category, $\alpha$ is the discrimination threshold, $d(x_a, x_p)$ is the L2 distance between $x_a$ and $x_p$, and $d(x_a, x_n)$ is the L2 distance between $x_a$ and $x_n$.
6. The method for generating a fitness program based on artificial intelligence and big data according to claim 1, wherein the network for measuring the invariance of the angle of the fitness activities uses a classification loss function as follows:
$$L_{cls} = CE(\hat{p}_v, v) + KL(q_v \,\|\, \hat{p}_v) + s_i \cdot \left( CE(\hat{p}_a, a) + KL(q_a \,\|\, \hat{p}_a) \right)$$

wherein $L_{cls}$ is the classification loss value; $\hat{p}_v$ is the view classification actually output by the fitness-action view-invariance measurement network; $v$ is the view category in the original label corresponding to the second training data; $q_v$ is the view-classification probability vector in the second class label corresponding to the second training data; $\hat{p}_a$ is the action classification actually output by the network; $a$ is the action category in the original label corresponding to the second training data; $q_a$ is the action-classification probability vector in the second class label; $KL(\cdot\|\cdot)$ is the KL divergence; $CE(\cdot,\cdot)$ is the cross-entropy loss function; and $s_i$ is the fitness-action visual sensitivity of the $i$-th action category.
7. The method for generating a fitness program based on artificial intelligence and big data according to claim 1, wherein the method for obtaining the image sequence corresponding to the first fitness person comprises:
carrying out target detection on the continuous-frame gymnasium RGB images to obtain a bounding box corresponding to the first fitness person in each image;
and cropping the continuous-frame gymnasium RGB images according to the bounding box corresponding to the first fitness person in each image, to obtain the continuous-frame gymnasium RGB image sequence.
8. An artificial intelligence and big data based fitness program generation system comprising a memory and a processor, wherein the processor executes a computer program stored by the memory to implement the artificial intelligence and big data based fitness program generation method of any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111260470.8A CN113707271B (en) | 2021-10-28 | 2021-10-28 | Fitness scheme generation method and system based on artificial intelligence and big data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113707271A true CN113707271A (en) | 2021-11-26 |
CN113707271B CN113707271B (en) | 2022-02-25 |
Family
ID=78647295
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111260470.8A Active CN113707271B (en) | 2021-10-28 | 2021-10-28 | Fitness scheme generation method and system based on artificial intelligence and big data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113707271B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114242204A (en) * | 2021-12-24 | 2022-03-25 | 珠海格力电器股份有限公司 | Motion strategy determination method and device |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106295139A (en) * | 2016-07-29 | 2017-01-04 | 姹ゅ钩 | A kind of tongue body autodiagnosis health cloud service system based on degree of depth convolutional neural networks |
CN108984618A (en) * | 2018-06-13 | 2018-12-11 | 深圳市商汤科技有限公司 | Data processing method and device, electronic equipment and computer readable storage medium |
CN110727718A (en) * | 2019-10-14 | 2020-01-24 | 成都乐动信息技术有限公司 | Intelligent generation method and system for fitness course |
CN111383735A (en) * | 2020-03-24 | 2020-07-07 | 杭州大数云智科技有限公司 | Unmanned body-building analysis method based on artificial intelligence |
CN112233770A (en) * | 2020-10-15 | 2021-01-15 | 郑州师范学院 | Intelligent gymnasium management decision-making system based on visual perception |
CN112418200A (en) * | 2021-01-25 | 2021-02-26 | 成都点泽智能科技有限公司 | Object detection method and device based on thermal imaging and server |
Non-Patent Citations (2)
Title |
---|
HERMSEN S 等: "Using feedback through digital technology to disrupt and change habitual behavior: A critical review of current literature", 《COMPUTERS IN HUMAN BEHAVIOR》 * |
叶强: "柔性力敏传感在人体运动信息获取和反馈训练中的应用研究", 《中国优秀博硕士学位论文全文数据库(博士)信息科技辑》 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Velloso et al. | Qualitative activity recognition of weight lifting exercises | |
Díaz-Pereira et al. | Automatic recognition and scoring of olympic rhythmic gymnastic movements | |
WO2017161734A1 (en) | Correction of human body movements via television and motion-sensing accessory and system | |
CN110490109A (en) | A kind of online human body recovery action identification method based on monocular vision | |
US11854306B1 (en) | Fitness action recognition model, method of training model, and method of recognizing fitness action | |
CN113707271B (en) | Fitness scheme generation method and system based on artificial intelligence and big data | |
US20230149774A1 (en) | Handle Motion Counting Method and Terminal | |
Omarov et al. | A Novel Deep Neural Network to Analyze and Monitoring the Physical Training Relation to Sports Activities | |
Chariar et al. | AI trainer: Autoencoder based approach for squat analysis and correction | |
Cheng et al. | Periodic physical activity information segmentation, counting and recognition from video | |
US20240042281A1 (en) | User experience platform for connected fitness systems | |
Karunaratne et al. | Objectively measure player performance on Olympic weightlifting | |
Murthy et al. | Divenet: Dive action localization and physical pose parameter extraction for high performance training | |
Zhang et al. | A hybrid neural network-based intelligent body posture estimation system in sports scenes | |
Torres et al. | Detection of proper form on upper limb strength training using extremely randomized trees for joint positions | |
Stapel | Automated Grade Classification and Route Generation with Affordances on Climbing Training Boards | |
Ren et al. | Multistream adaptive attention-enhanced graph convolutional networks for youth fencing footwork training | |
Rahman et al. | Physical Exercise Classification from Body Keypoints Using Machine Learning Techniques | |
Liang et al. | Research on Fitness Action Evaluation System Based on Skeleton | |
Malawski et al. | Automatic analysis of techniques and body motion patterns in sport | |
Vats et al. | Towards Enhanced Gym Safety and Efficiency: CNN-LSTM Models for Identifying Leg Builder Workouts | |
Chen | 3D Convolutional Neural Networks based Movement Evaluation System for Gymnasts in Computer Vision Applications | |
Madake et al. | Vision-Based Squat Correctness System | |
Riccio | Real-Time Fitness Exercise Classification and Counting from Video Frames | |
Romano et al. | Comparative analysis among algorithmic, machine-learning and visual paradigms for automatic detection of the perceived origin of full-body human movement. |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||