Disclosure of Invention
In order to overcome the above technical problems, the invention aims to provide an augmented-reality-based multi-modal training method and system for a humanoid robot, which solve the problems that traditional humanoid robot training methods mostly rely on single-sensor data for judgment, suffer from a limited monitoring range and low accuracy, and struggle to achieve efficient training in complex and changeable scenes.
The aim of the invention can be achieved by the following technical scheme:
an augmented-reality-based multi-modal training system for a humanoid robot, comprising:
The information processing module is used for obtaining a training anomaly coefficient XLI according to the training monitoring information and sending the training anomaly coefficient XLI to the classification judging module;
the specific process by which the information processing module obtains the training anomaly coefficient XLI is as follows:
the values of the time information SJ, the movement information YD and the motion information MO are quantized according to a preset information processing function to obtain the training anomaly coefficient XLI, and the training anomaly coefficient XLI is sent to the classification judging module;
Wherein the information processing function is as follows:
Wherein:
kappa is a preset error adjustment factor, taking kappa = 0.938;
pi and e are both mathematical constants;
x1, x2 and x3 are preset weight factors corresponding to the time information SJ, the movement information YD and the motion information MO respectively, and x1, x2 and x3 satisfy x2 > x3 > x1 > 1.594, taking x1 = 1.88, x2 = 3.07 and x3 = 2.41;
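The information processing function itself is not reproduced in the text above; only its constants survive. As a rough illustrative sketch only, assuming the function reduces to a kappa-scaled weighted sum of the three monitoring values, it might look like the following. The function name and the linear form are assumptions, not the patent's actual formula:

```python
# Preset constants stated in the description above.
KAPPA = 0.938                    # preset error adjustment factor
X1, X2, X3 = 1.88, 3.07, 2.41    # weight factors for SJ, YD, MO

def training_anomaly_coefficient(sj: float, yd: float, mo: float) -> float:
    """Hypothetical information processing function: a kappa-scaled
    weighted sum of the three monitoring values. The patent's exact
    formula (which also involves pi and e) is not reproduced in the text."""
    return KAPPA * (X1 * sj + X2 * yd + X3 * mo)
```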
the classification judging module is used for classifying each training scene i into a training abnormal scene or a training qualified scene according to the training anomaly coefficient XLI, sending the training abnormal scenes to the result display module, acquiring training evaluation information, obtaining a training evaluation value XP according to the training evaluation information, generating a training unqualified instruction or a training qualified instruction according to the training evaluation value XP, and sending the instruction to the result display module, wherein the training evaluation information comprises an abnormal ratio YB and a training average value XJ;
and the result display module is used for displaying the training abnormal scene, the training unqualified instruction and the training qualified instruction.
As a further scheme of the invention, the specific process by which the classification judging module classifies the training scenes i is as follows:
comparing the training anomaly coefficient XLI with a preset training anomaly threshold XLy, wherein the comparison result is as follows:
If the training anomaly coefficient XLI is more than or equal to the training anomaly threshold XLy, marking a training scene i corresponding to the training anomaly coefficient XLI as a training anomaly scene, and sending the training anomaly scene to a result display module;
if the training anomaly coefficient XLI < the training anomaly threshold XLy, marking the training scene i corresponding to the training anomaly coefficient XLI as a training qualified scene.
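The classification rule above amounts to a single threshold comparison; a minimal sketch follows (the threshold XLy is a preset whose value the text does not give):

```python
def classify_scene(xli: float, xly: float) -> str:
    """Classify one training scene by comparing its anomaly
    coefficient XLI against the preset anomaly threshold XLy."""
    if xli >= xly:
        return "training abnormal scene"   # sent to the result display module
    return "training qualified scene"
```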
As a further scheme of the invention, the specific process by which the classification judging module generates the training unqualified instruction or the training qualified instruction is as follows:
After the humanoid robot finishes the training operation for all training scenes i, the ratio of the number of training abnormal scenes to the total number of training scenes is acquired and marked as an abnormal ratio YB;
after the humanoid robot finishes the training operation for all training scenes i, the average value of all training anomaly coefficients XLI is acquired and marked as a training average value XJ;
the product of the abnormal ratio YB and the training average value XJ is acquired and marked as a training evaluation value XP;
comparing the training evaluation value XP with a preset training evaluation threshold XPy, wherein the comparison result is as follows:
If the training evaluation value XP is more than or equal to the training evaluation threshold XPy, generating a training unqualified instruction and sending the training unqualified instruction to a result display module;
If the training evaluation value XP is less than the training evaluation threshold XPy, generating a training qualified instruction and sending the training qualified instruction to a result display module.
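Combining the abnormal ratio YB and the training average value XJ, the overall evaluation described above can be sketched as follows (names are illustrative):

```python
def training_evaluation(xli_values: list, xly: float) -> float:
    """Overall evaluation after all scenes are trained:
    XP = YB (share of abnormal scenes) * XJ (mean anomaly coefficient)."""
    n = len(xli_values)
    abnormal = sum(1 for x in xli_values if x >= xly)
    yb = abnormal / n                      # abnormal ratio YB
    xj = sum(xli_values) / n               # training average value XJ
    return yb * xj                         # training evaluation value XP
```

The resulting XP is then compared against the preset evaluation threshold XPy to generate the qualified or unqualified instruction.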
As a further scheme of the invention, the augmented-reality-based multi-modal training system for a humanoid robot further comprises:
the scene construction module is used for constructing a training scene i and sending the training scene i to the multi-modal training platform.
The specific process of constructing the training scene i by the scene construction module is as follows:
A plurality of highly realistic virtual training environments are constructed by using augmented reality technology and are marked in turn as training scenes i, i = 1, ..., n, wherein n is a positive integer, i is the number of any one virtual training environment and n is the total number of virtual training environments; the training scenes i are sent to the multi-modal training platform.
As a further scheme of the invention, the augmented-reality-based multi-modal training system for a humanoid robot further comprises:
The multi-modal training platform is used for a user to select a training scene i, generating a training control instruction at the same time and sending the training control instruction to the humanoid robot.
As a further scheme of the invention, the augmented-reality-based multi-modal training system for a humanoid robot further comprises:
and the humanoid robot is used for performing the training operation according to the training scene i after receiving the training control instruction, generating an information acquisition instruction at the same time, and sending the information acquisition instruction to the information acquisition module.
As a further scheme of the invention, the augmented-reality-based multi-modal training system for a humanoid robot further comprises:
The information acquisition module is used for acquiring training monitoring information after acquiring the information acquisition instruction and sending the training monitoring information to the information processing module, wherein the training monitoring information comprises time information SJ, movement information YD and motion information MO.
The specific process of the information acquisition module for acquiring the training monitoring information is as follows:
After the information acquisition instruction is acquired, the generation time of the training control instruction and the time at which the humanoid robot receives the training control instruction are acquired; the time difference between the two is acquired and marked as a time receiving value JS; the time difference between the time at which the humanoid robot receives the training control instruction and the time at which the humanoid robot starts the training operation is acquired and marked as an operation value CS; the time receiving value JS and the operation value CS are quantized, the values of JS and CS are respectively multiplied by corresponding preset proportionality coefficients, and the sum of the two products is marked as time information SJ; wherein the preset proportionality coefficients corresponding to JS and CS are s1 and s2 respectively, and s1 and s2 satisfy s1 + s2 = 1, 0 < s1 < s2 < 1, taking s1 = 0.38 and s2 = 0.62;
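The computation of the time information SJ described above can be sketched as follows (function and parameter names are illustrative):

```python
S1, S2 = 0.38, 0.62  # preset proportionality coefficients, s1 + s2 = 1

def time_information(t_generated: float, t_received: float, t_started: float) -> float:
    """SJ = s1*JS + s2*CS, where JS is the delay between instruction
    generation and receipt, and CS is the delay between receipt and
    the start of the training operation."""
    js = t_received - t_generated    # time receiving value JS
    cs = t_started - t_received      # operation value CS
    return S1 * js + S2 * cs
```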
A moving track of the humanoid robot during the training operation is acquired and marked as an actual moving track; the difference between the length of the actual moving track and the length of a preset standard moving track is acquired and marked as a length value CD; the starting points of the actual moving track and the preset standard moving track are made to coincide, and the length of the non-coinciding portion of the two tracks is acquired and marked as a non-overlap value FC; with the starting points coinciding, the end points of the actual moving track and the preset standard moving track are connected by a line segment, and the area of the region enclosed between the two tracks is acquired and marked as an area value MJ; the length value CD, the non-overlap value FC and the area value MJ are quantized: the values of CD, FC and MJ are respectively taken as exponents of the base e and marked as a length power value, a non-overlap power value and an area power value, the three power values are respectively multiplied by corresponding preset proportionality coefficients, and the arithmetic square root of the sum of the three products is marked as movement information YD; wherein e is a mathematical constant, the preset proportionality coefficients corresponding to the length power value, the non-overlap power value and the area power value are d1, d2 and d3 respectively, and d1, d2 and d3 satisfy d1 + d2 + d3 = 1, 0 < d1 < d2 < d3 < 1, taking d1 = 0.21, d2 = 0.35 and d3 = 0.44;
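Assuming the length value CD, non-overlap value FC and area value MJ have already been measured, the movement information YD described above reduces to:

```python
import math

D1, D2, D3 = 0.21, 0.35, 0.44  # preset proportionality coefficients, d1 + d2 + d3 = 1

def movement_information(cd: float, fc: float, mj: float) -> float:
    """YD = sqrt(d1*e^CD + d2*e^FC + d3*e^MJ), combining the track-length
    deviation CD, non-overlap length FC and enclosed area MJ."""
    return math.sqrt(D1 * math.exp(cd) + D2 * math.exp(fc) + D3 * math.exp(mj))
```

With a perfect track (CD = FC = MJ = 0) the three exponentials are all 1 and YD collapses to sqrt(d1 + d2 + d3) = 1, so any deviation pushes YD above 1.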
All preset monitoring joints on the humanoid robot are acquired and marked in turn as monitoring objects j, j = 1, ..., m, wherein m is a positive integer, j is the number of any one monitoring joint and m is the total number of monitoring joints; the motion track of each monitoring object j during the training operation is acquired and marked as an actual motion track YGj; the difference between the length of the actual motion track YGj and the length of a preset standard motion track YGb is acquired and marked as a length difference value CCj; the maximum length difference value CCj is acquired and marked as a longest value ZC; the sum of all length difference values CCj is acquired and marked as a total length value TC; the product of the longest value ZC and the total length value TC is acquired and marked as motion information MO;
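The motion information MO described above can be sketched as follows, assuming (as the text states) signed per-joint length differences between actual and standard tracks:

```python
def motion_information(actual_lengths: list, standard_lengths: list) -> float:
    """MO = ZC * TC, where ZC is the largest per-joint track-length
    difference CCj and TC is the sum of all per-joint differences."""
    cc = [a - s for a, s in zip(actual_lengths, standard_lengths)]  # CCj per joint j
    zc = max(cc)     # longest value ZC
    tc = sum(cc)     # total length value TC
    return zc * tc
```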
the time information SJ, the movement information YD and the motion information MO are transmitted to the information processing module.
As a further scheme of the invention, the augmented-reality-based multi-modal training method for a humanoid robot comprises the following steps:
Step one, the scene construction module constructs training scenes i and sends the training scenes i to the multi-modal training platform;
step two, a user selects a training scene i by utilizing a multi-mode training platform, and simultaneously generates a training control instruction and sends the training control instruction to the humanoid robot;
Step three, the humanoid robot performs the training operation according to the training scene i after receiving the training control instruction, generates an information acquisition instruction at the same time, and sends the information acquisition instruction to the information acquisition module;
Step four, the information acquisition module acquires training monitoring information after acquiring the information acquisition instruction, wherein the training monitoring information comprises time information SJ, movement information YD and motion information MO, and sends the training monitoring information to the information processing module;
step five, the information processing module obtains a training anomaly coefficient XLI according to the training monitoring information and sends the training anomaly coefficient XLI to the classification judging module;
step six, the classification judging module classifies each training scene i into a training abnormal scene or a training qualified scene according to the training anomaly coefficient XLI, sends the training abnormal scenes to the result display module, and acquires training evaluation information, wherein the training evaluation information comprises an abnormal ratio YB and a training average value XJ; the classification judging module obtains a training evaluation value XP according to the training evaluation information, generates a training unqualified instruction or a training qualified instruction according to the training evaluation value XP, and sends the instruction to the result display module;
And step seven, the result display module displays according to the training abnormal scene, the training unqualified instruction and the training qualified instruction.
The invention has the beneficial effects that:
According to the augmented-reality-based multi-modal training method and system for a humanoid robot, the training operation is carried out according to training scenes, and data acquisition and analysis are performed on the training operation process to obtain training monitoring information; the training anomaly coefficient obtained from the training monitoring information can comprehensively measure the degree of abnormality of the training operation process, and the larger the training anomaly coefficient, the higher the degree of abnormality; training scenes with abnormal training operation are then screened out, after which all training scenes are subjected to an overall evaluation to obtain a training evaluation value, which comprehensively measures the overall degree of abnormality across the training scenes, the larger the training evaluation value, the higher the overall degree of abnormality; finally, the results are displayed;
According to the invention, a multi-modal virtual training environment is constructed by adopting augmented reality technology, realizing the diversification and customization of robot training scenes; data acquisition and analysis of the training operation process comprehensively reflect the training operation state of the humanoid robot, improving the accuracy and comprehensiveness of anomaly monitoring so that abnormal conditions of the training operation can be found in time; the overall evaluation of the multi-modal virtual training environment improves training efficiency and accuracy, realizes a comprehensive improvement of the humanoid robot's training capability, and provides powerful support for the wide application of robots in fields such as education, medical treatment and service.
Detailed Description
The technical solutions of the embodiments of the present invention will be clearly and completely described below in conjunction with the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1:
Referring to fig. 1, the embodiment is a multi-modal training system of a humanoid robot based on augmented reality, which comprises a scene construction module, a multi-modal training platform, a humanoid robot, an information acquisition module, an information processing module, a classification judgment module and a result display module;
the scene construction module is used for constructing a training scene i and sending the training scene i to the multi-modal training platform;
The multi-modal training platform is used for a user to select a training scene i, generating a training control instruction at the same time and sending the training control instruction to the humanoid robot;
The human-type robot is used for performing training operation according to a training scene i after receiving the training control instruction, generating an information acquisition instruction at the same time, and sending the information acquisition instruction to the information acquisition module;
The information acquisition module is used for acquiring training monitoring information after acquiring the information acquisition instruction and sending the training monitoring information to the information processing module, wherein the training monitoring information comprises time information SJ, movement information YD and motion information MO;
the information processing module is used for obtaining a training anomaly coefficient XLI according to the training monitoring information and sending the training anomaly coefficient XLI to the classification judging module;
The classification judging module is used for classifying each training scene i into a training abnormal scene or a training qualified scene according to the training anomaly coefficient XLI, sending the training abnormal scenes to the result display module, acquiring training evaluation information, obtaining a training evaluation value XP according to the training evaluation information, generating a training unqualified instruction or a training qualified instruction according to the training evaluation value XP, and sending the instruction to the result display module, wherein the training evaluation information comprises an abnormal ratio YB and a training average value XJ;
the result display module is used for displaying training abnormal scenes, training unqualified instructions and training qualified instructions.
Example 2:
Referring to fig. 2, this embodiment is an augmented-reality-based multi-modal training method for a humanoid robot, comprising the following steps:
Step one, the scene construction module constructs training scenes i and sends the training scenes i to the multi-modal training platform;
step two, a user selects a training scene i by utilizing the multi-modal training platform, which simultaneously generates a training control instruction and sends the training control instruction to the humanoid robot;
Step three, the humanoid robot performs the training operation according to the training scene i after receiving the training control instruction, generates an information acquisition instruction at the same time, and sends the information acquisition instruction to the information acquisition module;
Step four, the information acquisition module acquires training monitoring information after acquiring the information acquisition instruction, wherein the training monitoring information comprises time information SJ, movement information YD and motion information MO, and sends the training monitoring information to the information processing module;
step five, the information processing module obtains a training anomaly coefficient XLI according to the training monitoring information and sends the training anomaly coefficient XLI to the classification judging module;
step six, the classification judging module classifies each training scene i into a training abnormal scene or a training qualified scene according to the training anomaly coefficient XLI, sends the training abnormal scenes to the result display module, and acquires training evaluation information, wherein the training evaluation information comprises an abnormal ratio YB and a training average value XJ; the classification judging module obtains a training evaluation value XP according to the training evaluation information, generates a training unqualified instruction or a training qualified instruction according to the training evaluation value XP, and sends the instruction to the result display module;
And step seven, the result display module displays according to the training abnormal scene, the training unqualified instruction and the training qualified instruction.
Example 3:
Based on any of the above embodiments, embodiment 3 of the present invention is a scene construction module, whose role is to construct training scenes i, and the specific process is as follows:
the scene construction module utilizes augmented reality technology to construct a number of highly realistic virtual training environments, which are marked in turn as training scenes i, i = 1, ..., n, wherein n is a positive integer, i is the number of any one virtual training environment and n is the total number of virtual training environments, and the training scenes i are sent to the multi-modal training platform; the virtual training environments comprise various scenes such as families, offices, schools, hospitals, shopping malls and factories.
Example 4:
Based on any one of the above embodiments, embodiment 4 of the present invention is a multi-modal training platform, and the multi-modal training platform is used for generating a training control instruction, and specifically includes the following steps:
Starting the humanoid robot, enabling a user to select a training scene i by utilizing the multi-mode training platform, generating a training control instruction at the same time, and sending the training control instruction to the humanoid robot.
Example 5:
based on any of the above embodiments, embodiment 5 of the present invention is a humanoid robot, and the humanoid robot is used for generating an information acquisition instruction, and specifically comprises the following steps:
After receiving the training control instruction, the humanoid robot performs training operation according to the training scene i, generates an information acquisition instruction at the same time, and sends the information acquisition instruction to the information acquisition module.
Example 6:
Based on any one of the above embodiments, embodiment 6 of the present invention is an information acquisition module, where the information acquisition module is used to acquire training monitoring information, the training monitoring information comprising time information SJ, movement information YD and motion information MO, and the specific process is as follows:
After acquiring the information acquisition instruction, the information acquisition module acquires the generation time of the training control instruction and the time at which the humanoid robot receives the training control instruction, acquires the time difference between the two and marks it as a time receiving value JS, acquires the time difference between the time at which the humanoid robot receives the training control instruction and the time at which the humanoid robot starts the training operation and marks it as an operation value CS, quantizes the time receiving value JS and the operation value CS, multiplies the values of JS and CS by the corresponding preset proportionality coefficients respectively, and marks the sum of the two products as time information SJ; wherein the preset proportionality coefficients corresponding to JS and CS are s1 and s2 respectively, and s1 and s2 satisfy s1 + s2 = 1, 0 < s1 < s2 < 1, taking s1 = 0.38 and s2 = 0.62;
The information acquisition module acquires a moving track of the humanoid robot during the training operation and marks it as an actual moving track, acquires the difference between the length of the actual moving track and the length of a preset standard moving track and marks it as a length value CD, makes the starting points of the actual moving track and the preset standard moving track coincide, acquires the length of the non-coinciding portion of the two tracks and marks it as a non-overlap value FC, connects the end points of the two tracks by a line segment with their starting points coinciding, acquires the area of the region enclosed between the two tracks and marks it as an area value MJ, then quantizes the length value CD, the non-overlap value FC and the area value MJ: the values of CD, FC and MJ are respectively taken as exponents of the base e and marked as a length power value, a non-overlap power value and an area power value, the three power values are respectively multiplied by corresponding preset proportionality coefficients, and the arithmetic square root of the sum of the three products is marked as movement information YD; wherein e is a mathematical constant, the preset proportionality coefficients corresponding to the length power value, the non-overlap power value and the area power value are d1, d2 and d3 respectively, and d1, d2 and d3 satisfy d1 + d2 + d3 = 1, 0 < d1 < d2 < d3 < 1, taking d1 = 0.21, d2 = 0.35 and d3 = 0.44;
The information acquisition module acquires all preset monitoring joints on the humanoid robot and marks them in turn as monitoring objects j, j = 1, ..., m, wherein m is a positive integer, j is the number of any one monitoring joint and m is the total number of monitoring joints; the module acquires the motion track of each monitoring object j during the training operation and marks it as an actual motion track YGj, acquires the difference between the length of the actual motion track YGj and the length of a preset standard motion track YGb and marks it as a length difference value CCj, acquires the maximum length difference value CCj and marks it as a longest value ZC, acquires the sum of all length difference values CCj and marks it as a total length value TC, and acquires the product of the longest value ZC and the total length value TC and marks it as motion information MO;
The information acquisition module sends the time information SJ, the movement information YD, and the movement information MO to the information processing module.
Example 7:
Based on any of the above embodiments, embodiment 7 of the present invention is an information processing module, where the information processing module is used to obtain the training anomaly coefficient XLI, and the specific process is as follows:
The information processing module quantizes the values of the time information SJ, the movement information YD and the motion information MO according to a preset information processing function to obtain the training anomaly coefficient XLI, and sends the training anomaly coefficient XLI to the classification judging module;
Wherein the information processing function is as follows:
Wherein:
kappa is a preset error adjustment factor, taking kappa = 0.938;
pi and e are both mathematical constants;
x1, x2 and x3 are preset weight factors corresponding to the time information SJ, the movement information YD and the motion information MO respectively, and x1, x2 and x3 satisfy x2 > x3 > x1 > 1.594, taking x1 = 1.88, x2 = 3.07 and x3 = 2.41.
Example 8:
based on any of the above embodiments, embodiment 8 of the present invention is a classification judgment module, which is used for classifying a training scene i and generating a training unqualified instruction or a training qualified instruction, and specifically includes the following steps:
the classification judgment module compares the training anomaly coefficient XLI with a preset training anomaly threshold XLy, and the comparison result is as follows:
If the training anomaly coefficient XLI is more than or equal to the training anomaly threshold XLy, marking a training scene i corresponding to the training anomaly coefficient XLI as a training anomaly scene, and sending the training anomaly scene to a result display module;
If the training anomaly coefficient XLI is smaller than the training anomaly threshold XLy, marking a training scene i corresponding to the training anomaly coefficient XLI as a training qualified scene;
the classification judgment module acquires the ratio of the number of training abnormal scenes to the number of training scenes i after the humanoid robot completes training operation according to all training scenes i, and marks the ratio as an abnormal ratio YB;
The classification judgment module acquires the average value of all training abnormal coefficients XLI after the human-type robot completes training operation according to all training scenes i, and marks the average value as a training average value XJ;
the classification judging module obtains the product of the abnormal ratio YB and the training average value XJ and marks the product as a training evaluation value XP;
The classification judgment module compares the training evaluation value XP with a preset training evaluation threshold XPy, and the comparison result is as follows:
If the training evaluation value XP is more than or equal to the training evaluation threshold XPy, generating a training unqualified instruction and sending the training unqualified instruction to a result display module;
If the training evaluation value XP is less than the training evaluation threshold XPy, generating a training qualified instruction and sending the training qualified instruction to a result display module.
Example 9:
Based on any of the above embodiments, embodiment 9 of the present invention is a result display module, whose function is to display the results, and the specific process is as follows:
the result display module displays the training abnormal scene;
The result display module displays the words "training unqualified" after receiving the training unqualified instruction;
and the result display module displays the words "training qualified" after receiving the training qualified instruction.
Based on the above embodiments 1-9, the working principle of the present invention is as follows:
According to the augmented-reality-based multi-modal training method and system for a humanoid robot, a training scene is built by the scene construction module; a user selects the training scene through the multi-modal training platform, which simultaneously generates a training control instruction; after receiving the training control instruction, the humanoid robot performs the training operation according to the training scene and simultaneously generates an information acquisition instruction; after acquiring the information acquisition instruction, the information acquisition module acquires training monitoring information comprising time information, movement information and motion information; the information processing module obtains a training anomaly coefficient according to the training monitoring information; the classification judging module classifies the training scenes into training abnormal scenes and training qualified scenes according to the training anomaly coefficient, acquires training evaluation information comprising an abnormal ratio and a training average value, obtains a training evaluation value according to the training evaluation information, and generates a training unqualified instruction or a training qualified instruction according to the training evaluation value; the result display module displays according to the training abnormal scenes, the training unqualified instruction and the training qualified instruction. The system performs the training operation according to the training scenes and performs data acquisition and analysis on the training operation process to obtain the training monitoring information; the training anomaly coefficient obtained from the training monitoring information comprehensively measures the degree of abnormality of the training operation process, the larger the coefficient, the higher the degree of abnormality; the training scenes with abnormal training operation are then screened out, and all training 
scenes are subjected to an overall evaluation to obtain the training evaluation value, which comprehensively measures the overall degree of abnormality, the larger the value, the higher the overall degree of abnormality. The invention adopts augmented reality technology to construct a multi-modal virtual training environment, realizes the diversification and customization of robot training scenes, collects and analyzes data on the training operation process to comprehensively reflect the training operation state of the humanoid robot, improves the accuracy and comprehensiveness of anomaly monitoring so that abnormal conditions of the training operation can be found in time, realizes an overall evaluation of the multi-modal virtual training environment, improves training efficiency and accuracy, realizes a comprehensive improvement of the humanoid robot's training capability, and provides powerful support for the wide application of robots in fields such as education, medical treatment and service.
It should be further noted that the above formulas are obtained by a person skilled in the art by collecting a large amount of data and performing software simulation based on working experience, with a formula close to the true value being selected; the coefficients in the formulas are set by a person skilled in the art according to the actual situation.
In the description of the present specification, the descriptions of the terms "one embodiment," "example," "specific example," and the like, mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The foregoing is merely illustrative and explanatory of the invention; various modifications, additions, or similar substitutions may be made to the described embodiments by those skilled in the art without departing from the scope of the invention as defined in the claims.