
CN118809634B - Humanoid robot multimodal training method and system based on extended reality - Google Patents

Humanoid robot multimodal training method and system based on extended reality

Info

Publication number
CN118809634B
CN118809634B (application CN202411297788.7A)
Authority
CN
China
Prior art keywords
training
value
information
acquiring
scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202411297788.7A
Other languages
Chinese (zh)
Other versions
CN118809634A (en)
Inventor
罗翼鹏
袁梁
易洁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Wutong Technology Co ltd
Original Assignee
Sichuan Wutong Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Wutong Technology Co ltd
Priority to CN202411297788.7A
Publication of CN118809634A
Application granted
Publication of CN118809634B
Legal status: Active

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Feedback Control In General (AREA)
  • Manipulator (AREA)

Abstract

The present invention relates to the field of robotics, and in particular to a multimodal training method and system for a humanoid robot based on extended reality. It addresses the shortcomings of traditional humanoid robot training methods, which mostly rely on single-sensor data, offer a limited monitoring range and low accuracy, and struggle to train efficiently in complex, changeable scenes. The invention uses extended reality technology to construct a multimodal virtual training environment, diversifying and customizing robot training scenes, and collects and analyzes data from the training operation process. This comprehensively reflects the humanoid robot's training operation state, improves the accuracy and completeness of anomaly monitoring, detects abnormal training operations promptly, and supports an overall evaluation of the multimodal virtual training environment, thereby improving training efficiency and accuracy and comprehensively raising the humanoid robot's training capability.

Description

Humanoid robot multimodal training method and system based on extended reality
Technical Field
The invention relates to the technical field of robotics, and in particular to a humanoid robot multimodal training method and system based on extended reality.
Background
As an important branch of the robotics field, the humanoid robot offers high flexibility and adaptability and is widely applied in industrial manufacturing, the service industry, disaster relief and many other fields. With the rapid development of computer graphics and simulation technology, extended reality has become a bridge between the virtual and real worlds and offers new possibilities for robot training. In a complex training environment, however, a humanoid robot may face various uncertainties and abnormal conditions that not only affect the training effect but may also damage the robot itself. Traditional humanoid robot training methods are mostly based on single-sensor data, suffer from a limited monitoring range and low accuracy, and struggle to achieve efficient training in complex and changeable scenes.
Disclosure of Invention
To overcome these technical problems, the invention aims to provide a humanoid robot multimodal training method and system based on extended reality that solve the problems of traditional training methods: reliance on single-sensor data, a limited monitoring range, low accuracy, and difficulty achieving efficient training in complex and changeable scenes.
The aim of the invention is achieved by the following technical scheme:
an extended reality-based multimodal training system for a humanoid robot, comprising:
The information processing module is used for obtaining the training anomaly coefficient XLI according to the training monitoring information and sending the training anomaly coefficient XLI to the classification judging module;
the specific process by which the information processing module obtains the training anomaly coefficient XLI is as follows:
carrying out a quantization operation on the numerical values of the time information SJ, the movement information YD and the motion information MO according to a preset information processing function to obtain the training anomaly coefficient XLI, and sending the training anomaly coefficient XLI to the classification judging module;
wherein the information processing function (which appears only as a formula image in the source) combines the three values using the following parameters:
kappa is a preset error adjustment factor, taking kappa = 0.938;
pi and e are both mathematical constants;
x1, x2 and x3 are preset weight factors corresponding to the time information SJ, the movement information YD and the motion information MO respectively, satisfying x2 > x3 > x1 > 1.594; here x1 = 1.88, x2 = 3.07 and x3 = 2.41;
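The formula itself survives only as an image in the source, so its exact form cannot be recovered from the text. Purely as an illustration of how such a quantization function might be implemented, the Python sketch below combines SJ, YD and MO using the stated constants; the functional form (a weighted sum scaled by kappa, pi and e) is an assumption, not the patent's actual formula:

```python
import math

# Constants as stated in the text; the functional form below is an assumption,
# since the patent's actual formula appears only as an image in the source.
KAPPA = 0.938                   # preset error adjustment factor
X1, X2, X3 = 1.88, 3.07, 2.41   # weight factors for SJ, YD, MO (x2 > x3 > x1 > 1.594)

def training_anomaly_coefficient(sj: float, yd: float, mo: float) -> float:
    """Illustrative stand-in for the information processing function XLI.

    A weighted combination of the three monitoring values, scaled by kappa;
    pi and e enter only because the text says both appear in the real formula.
    """
    weighted_sum = X1 * sj + X2 * yd + X3 * mo
    return KAPPA * (math.pi / math.e) * weighted_sum
```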
the classification judging module is used for classifying the training scene i as a training abnormal scene or a training qualified scene according to the training anomaly coefficient XLI, sending the training abnormal scenes to the result display module, acquiring training evaluation information, acquiring a training evaluation value XP according to the training evaluation information, generating a training unqualified instruction or a training qualified instruction according to the training evaluation value XP, and sending the generated instruction to the result display module, wherein the training evaluation information comprises an abnormal ratio YB and a training average value XJ;
and the result display module is used for displaying the training abnormal scene, the training unqualified instruction and the training qualified instruction.
As a further scheme of the invention, the specific process by which the classification judging module classifies the training scene i is as follows:
comparing the training anomaly coefficient XLI with a preset training anomaly threshold XLy, wherein the comparison result is as follows:
If the training anomaly coefficient XLI is greater than or equal to the training anomaly threshold XLy, marking the training scene i corresponding to the training anomaly coefficient XLI as a training abnormal scene, and sending the training abnormal scene to the result display module;
if the training anomaly coefficient XLI is less than the training anomaly threshold XLy, marking the training scene i corresponding to the training anomaly coefficient XLI as a training qualified scene.
As a further scheme of the invention, the specific process by which the classification judging module generates the training unqualified instruction or the training qualified instruction is as follows:
After the humanoid robot finishes the training operation for all training scenes i, acquiring the ratio of the number of training abnormal scenes to the total number of training scenes i, and marking the ratio as the abnormal ratio YB;
after the humanoid robot finishes the training operation for all training scenes i, acquiring the average value of all training anomaly coefficients XLI, and marking the average value as the training average value XJ;
obtaining the product of the abnormal ratio YB and the training average value XJ, and marking the product as the training evaluation value XP;
comparing the training evaluation value XP with a preset training evaluation threshold XPy, wherein the comparison result is as follows:
If the training evaluation value XP is greater than or equal to the training evaluation threshold XPy, generating a training unqualified instruction and sending the training unqualified instruction to the result display module;
If the training evaluation value XP is less than the training evaluation threshold XPy, generating a training qualified instruction and sending the training qualified instruction to a result display module.
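As a minimal sketch of the classification and overall-evaluation logic just described (the helper names are illustrative, and xl_threshold and xp_threshold stand for the preset training anomaly threshold XLy and training evaluation threshold XPy, whose values the source does not give):

```python
def classify_scene(xli: float, xl_threshold: float) -> str:
    """XLI >= XLy marks the scene as a training abnormal scene."""
    return "abnormal" if xli >= xl_threshold else "qualified"

def evaluate_training(xli_values: list[float], xl_threshold: float,
                      xp_threshold: float) -> str:
    """Overall evaluation: XP = YB * XJ, compared against XPy."""
    labels = [classify_scene(x, xl_threshold) for x in xli_values]
    yb = labels.count("abnormal") / len(xli_values)   # abnormal ratio YB
    xj = sum(xli_values) / len(xli_values)            # training average XJ
    xp = yb * xj                                      # training evaluation value XP
    return "training unqualified" if xp >= xp_threshold else "training qualified"
```

For example, with anomaly coefficients of 2.0, 5.0 and 3.0 across three scenes and XLy = 4, one scene is abnormal, so YB = 1/3, XJ = 10/3 and XP = YB × XJ ≈ 1.11; the session is marked unqualified only if XPy ≤ 1.11.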
As a further scheme of the invention, the extended reality-based humanoid robot multimodal training system further comprises:
the scene construction module is used for constructing a training scene i and sending the training scene i to the multi-modal training platform.
The specific process of constructing the training scene i by the scene construction module is as follows:
A plurality of highly realistic virtual training environments are constructed using extended reality technology and are labeled in turn as training scenes i, i = 1, …, n, where n is a positive integer, i is the number of any one virtual training environment and n is the total number of virtual training environments; the training scenes i are sent to the multi-modal training platform.
As a further scheme of the invention, the extended reality-based humanoid robot multimodal training system further comprises:
The multi-modal training platform is used for a user to select a training scene i, generate a training control instruction at the same time and send the training control instruction to the humanoid robot.
As a further scheme of the invention, the extended reality-based humanoid robot multimodal training system further comprises:
and the humanoid robot is used for performing the training operation according to the training scene i after receiving the training control instruction, generating an information acquisition instruction at the same time, and sending the information acquisition instruction to the information acquisition module.
As a further scheme of the invention, the extended reality-based humanoid robot multimodal training system further comprises:
The information acquisition module is used for acquiring the training monitoring information after acquiring the information acquisition instruction and sending the training monitoring information to the information processing module, wherein the training monitoring information comprises the time information SJ, the movement information YD and the motion information MO.
The specific process of the information acquisition module for acquiring the training monitoring information is as follows:
After acquiring the information acquisition instruction, acquiring the generation time of the training control instruction and the time at which the humanoid robot receives the training control instruction; acquiring the time difference between the two and marking it as a time receiving value JS; acquiring the time difference between the time at which the humanoid robot receives the training control instruction and the time at which it begins the training operation, and marking it as an operation value CS; performing quantization processing on the time receiving value JS and the operation value CS by multiplying each by its corresponding preset proportionality coefficient and summing the results, and marking the sum as the time information SJ, wherein the preset proportionality coefficients corresponding to the time receiving value JS and the operation value CS are s1 and s2 respectively, and s1 and s2 satisfy s1 + s2 = 1, 0 < s1 < s2 < 1, s1 = 0.38 and s2 = 0.62;
Acquiring the moving track of the humanoid robot during the training operation and marking it as the actual moving track; acquiring the difference between the length of the actual moving track and the length of a preset standard moving track, and marking it as a length value CD; overlapping the starting points of the actual moving track and the preset standard moving track, acquiring the length of the non-overlapping portion of the two tracks, and marking it as a non-overlap value FC; with the starting points overlapped, connecting the end points of the two tracks with a line segment, acquiring the area of the region enclosed between the actual moving track and the preset standard moving track, and marking it as an area value MJ; performing quantization processing on the length value CD, the non-overlap value FC and the area value MJ by using each value as an exponent of the base e (giving a length power value, a non-overlap power value and an area power value), multiplying each power value by its corresponding preset proportionality coefficient, summing the results, and taking the arithmetic square root of the sum, which is marked as the movement information YD; wherein e is a mathematical constant, the preset proportionality coefficients corresponding to the length power value, the non-overlap power value and the area power value are d1, d2 and d3 respectively, and d1, d2 and d3 satisfy d1 + d2 + d3 = 1, 0 < d1 < d2 < d3 < 1, d1 = 0.21, d2 = 0.35 and d3 = 0.44;
Acquiring all preset monitoring joints on the humanoid robot and marking them in turn as monitoring objects j, j = 1, …, m, where m is a positive integer, j is the number of any one monitoring joint and m is the total number of monitoring joints; acquiring the motion trail of monitoring object j during the training operation and marking it as the actual motion trail YGj; acquiring the difference between the length of the actual motion trail YGj and the length of the preset standard motion trail YGb, and marking it as a long difference value CCj; acquiring the maximum long difference value and marking it as a longest value ZC; acquiring the sum of all long difference values CCj and marking it as a total length value TC; acquiring the product of the longest value ZC and the total length value TC, and marking the product as the motion information MO;
the time information SJ, the movement information YD and the motion information MO are transmitted to the information processing module.
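As a minimal sketch of the three computations just described (the names are illustrative; it assumes the long difference CCj is taken as an absolute value and that each monitored joint is compared against its own standard trail length, neither of which the source states explicitly):

```python
import math

# Preset proportionality coefficients as given in the text
S1, S2 = 0.38, 0.62              # s1 + s2 = 1, 0 < s1 < s2 < 1
D1, D2, D3 = 0.21, 0.35, 0.44    # d1 + d2 + d3 = 1, 0 < d1 < d2 < d3 < 1

def time_information(js: float, cs: float) -> float:
    """SJ = s1*JS + s2*CS: weighted sum of the receipt delay and start delay."""
    return S1 * js + S2 * cs

def movement_information(cd: float, fc: float, mj: float) -> float:
    """YD = sqrt(d1*e^CD + d2*e^FC + d3*e^MJ): each deviation is used as an
    exponent of e, weighted, summed, then the arithmetic square root is taken."""
    return math.sqrt(D1 * math.exp(cd) + D2 * math.exp(fc) + D3 * math.exp(mj))

def motion_information(actual_lengths: list[float],
                       standard_lengths: list[float]) -> float:
    """MO = max(CCj) * sum(CCj), with CCj the per-joint trail-length deviation."""
    cc = [abs(a - b) for a, b in zip(actual_lengths, standard_lengths)]
    return max(cc) * sum(cc)
```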
As a further scheme of the invention, the humanoid robot multimodal training method based on extended reality comprises the following steps:
Step one, the scene construction module constructs a training scene i and sends the training scene i to the multi-modal training platform;
Step two, a user selects a training scene i using the multi-modal training platform, generates a training control instruction at the same time, and sends the training control instruction to the humanoid robot;
Step three, the humanoid robot performs the training operation according to the training scene i after receiving the training control instruction, generates an information acquisition instruction at the same time, and sends the information acquisition instruction to the information acquisition module;
Step four, the information acquisition module acquires the training monitoring information after acquiring the information acquisition instruction, wherein the training monitoring information comprises the time information SJ, the movement information YD and the motion information MO, and sends the training monitoring information to the information processing module;
Step five, the information processing module obtains the training anomaly coefficient XLI according to the training monitoring information and sends the training anomaly coefficient XLI to the classification judging module;
Step six, the classification judging module classifies each training scene i as a training abnormal scene or a training qualified scene according to the training anomaly coefficient XLI, sends the training abnormal scenes to the result display module, and acquires the training evaluation information, wherein the training evaluation information comprises the abnormal ratio YB and the training average value XJ; it acquires the training evaluation value XP according to the training evaluation information, generates a training unqualified instruction or a training qualified instruction according to the training evaluation value XP, and sends the generated instruction to the result display module;
Step seven, the result display module displays the training abnormal scenes, the training unqualified instruction and the training qualified instruction.
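Putting steps one through seven together, the following is a minimal end-to-end sketch reusing the illustrative helpers defined above; run_scene is a hypothetical callback standing in for the robot executing one scene and the information acquisition module returning the raw measurements:

```python
def run_training_session(scenes, run_scene, xl_threshold, xp_threshold):
    """Steps one to seven in miniature: run each scene, score and classify it,
    then evaluate the whole session and return what the display module shows."""
    xli_values, abnormal_scenes = [], []
    for scene in scenes:
        # Steps two to four: execute the scene and collect the raw measurements
        js, cs, cd, fc, mj, actual, standard = run_scene(scene)
        sj = time_information(js, cs)
        yd = movement_information(cd, fc, mj)
        mo = motion_information(actual, standard)
        xli = training_anomaly_coefficient(sj, yd, mo)          # step five
        xli_values.append(xli)
        if classify_scene(xli, xl_threshold) == "abnormal":     # step six
            abnormal_scenes.append(scene)
    verdict = evaluate_training(xli_values, xl_threshold, xp_threshold)
    return abnormal_scenes, verdict                             # step seven
```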
The beneficial effects of the invention are as follows:
According to the humanoid robot multimodal training method and system based on extended reality, the training operation is performed according to the training scenes, and data are collected and analyzed throughout the training operation process to obtain the training monitoring information. The training anomaly coefficient obtained from the training monitoring information comprehensively measures the degree of abnormality of the training operation process: the larger the training anomaly coefficient, the higher the degree of abnormality. Training scenes with abnormal training operation are then screened out, after which all training scenes are evaluated as a whole to obtain the training evaluation value, which comprehensively measures the overall degree of abnormality across the training scenes: the larger the training evaluation value, the higher the overall degree of abnormality. Finally, the results are displayed;
According to the invention, a multimodal virtual training environment is constructed using extended reality technology, diversifying and customizing the robot's training scenes. Data collection and analysis of the training operation process comprehensively reflect the humanoid robot's training operation state, improve the accuracy and completeness of anomaly monitoring, and allow abnormal training operations to be discovered in time. An overall evaluation of the multimodal virtual training environment is realized, improving training efficiency and accuracy, comprehensively raising the humanoid robot's training capability, and providing powerful support for the wide application of robots in education, medical treatment, service and many other fields.
Drawings
The invention is further described below with reference to the accompanying drawings.
FIG. 1 is a schematic block diagram of the extended reality-based humanoid robot multimodal training system of the present invention;
Fig. 2 is a flowchart of the extended reality-based humanoid robot multimodal training method of the present invention.
Detailed Description
The technical solutions of the embodiments of the present invention will be clearly and completely described below in conjunction with the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1:
Referring to fig. 1, this embodiment is an extended reality-based humanoid robot multimodal training system comprising a scene construction module, a multi-modal training platform, a humanoid robot, an information acquisition module, an information processing module, a classification judging module and a result display module;
the scene construction module is used for constructing a training scene i and sending the training scene i to the multi-modal training platform;
the multi-modal training platform is used for a user to select a training scene i, generating a training control instruction at the same time and sending the training control instruction to the humanoid robot;
the humanoid robot is used for performing the training operation according to the training scene i after receiving the training control instruction, generating an information acquisition instruction at the same time, and sending the information acquisition instruction to the information acquisition module;
the information acquisition module is used for acquiring the training monitoring information after acquiring the information acquisition instruction and sending the training monitoring information to the information processing module, wherein the training monitoring information comprises the time information SJ, the movement information YD and the motion information MO;
the information processing module is used for obtaining the training anomaly coefficient XLI according to the training monitoring information and sending the training anomaly coefficient XLI to the classification judging module;
the classification judging module is used for classifying each training scene i as a training abnormal scene or a training qualified scene according to the training anomaly coefficient XLI, sending the training abnormal scenes to the result display module, acquiring the training evaluation information, acquiring the training evaluation value XP according to the training evaluation information, generating a training unqualified instruction or a training qualified instruction according to the training evaluation value XP, and sending the generated instruction to the result display module, wherein the training evaluation information comprises the abnormal ratio YB and the training average value XJ;
the result display module is used for displaying the training abnormal scenes, the training unqualified instruction and the training qualified instruction.
Example 2:
Referring to fig. 2, this embodiment is an extended reality-based humanoid robot multimodal training method comprising the following steps:
Step one, the scene construction module constructs a training scene i and sends the training scene i to the multi-modal training platform;
Step two, a user selects a training scene i using the multi-modal training platform, generates a training control instruction at the same time, and sends the training control instruction to the humanoid robot;
Step three, the humanoid robot performs the training operation according to the training scene i after receiving the training control instruction, generates an information acquisition instruction at the same time, and sends the information acquisition instruction to the information acquisition module;
Step four, the information acquisition module acquires the training monitoring information after acquiring the information acquisition instruction, wherein the training monitoring information comprises the time information SJ, the movement information YD and the motion information MO, and sends the training monitoring information to the information processing module;
Step five, the information processing module obtains the training anomaly coefficient XLI according to the training monitoring information and sends the training anomaly coefficient XLI to the classification judging module;
Step six, the classification judging module classifies each training scene i as a training abnormal scene or a training qualified scene according to the training anomaly coefficient XLI, sends the training abnormal scenes to the result display module, and acquires the training evaluation information, wherein the training evaluation information comprises the abnormal ratio YB and the training average value XJ; it acquires the training evaluation value XP according to the training evaluation information, generates a training unqualified instruction or a training qualified instruction according to the training evaluation value XP, and sends the generated instruction to the result display module;
Step seven, the result display module displays the training abnormal scenes, the training unqualified instruction and the training qualified instruction.
Example 3:
Based on any of the above embodiments, embodiment 3 of the present invention is a scene construction module whose role is to construct a training scene i; the specific process is as follows:
the scene construction module uses extended reality technology to construct a number of highly realistic virtual training environments, which are labeled in turn as training scenes i, i = 1, …, n, where n is a positive integer, i is the number of any one virtual training environment and n is the total number of virtual training environments; the training scenes i are sent to the multi-modal training platform. The virtual training environments include scenes such as homes, offices, schools, hospitals, shopping malls and factories.
Example 4:
Based on any one of the above embodiments, embodiment 4 of the present invention is a multi-modal training platform used for generating a training control instruction; the specific process is as follows:
the humanoid robot is started, the user selects a training scene i using the multi-modal training platform, a training control instruction is generated at the same time, and the training control instruction is sent to the humanoid robot.
Example 5:
Based on any of the above embodiments, embodiment 5 of the present invention is a humanoid robot used for generating an information acquisition instruction; the specific process is as follows:
after receiving the training control instruction, the humanoid robot performs the training operation according to the training scene i, generates an information acquisition instruction at the same time, and sends the information acquisition instruction to the information acquisition module.
Example 6:
Based on any one of the above embodiments, embodiment 6 of the present invention is an information acquisition module used for acquiring the training monitoring information, wherein the training monitoring information comprises the time information SJ, the movement information YD and the motion information MO; the specific process is as follows:
After acquiring the information acquisition instruction, the information acquisition module acquires the generation time of the training control instruction and the time at which the humanoid robot receives the training control instruction; it acquires the time difference between the two and marks it as a time receiving value JS; it acquires the time difference between the time at which the humanoid robot receives the training control instruction and the time at which it begins the training operation, and marks it as an operation value CS; it performs quantization processing on the time receiving value JS and the operation value CS by multiplying each by its corresponding preset proportionality coefficient and summing the results, and marks the sum as the time information SJ; the preset proportionality coefficients corresponding to the time receiving value JS and the operation value CS are s1 and s2 respectively, where s1 + s2 = 1, 0 < s1 < s2 < 1, s1 = 0.38 and s2 = 0.62;
The information acquisition module acquires the moving track of the humanoid robot during the training operation and marks it as the actual moving track; it acquires the difference between the length of the actual moving track and the length of a preset standard moving track, and marks it as a length value CD; it overlaps the starting points of the actual moving track and the preset standard moving track, acquires the length of the non-overlapping portion of the two tracks, and marks it as a non-overlap value FC; with the starting points overlapped, it connects the end points of the two tracks with a line segment, acquires the area of the region enclosed between the actual moving track and the preset standard moving track, and marks it as an area value MJ; it performs quantization processing on the length value CD, the non-overlap value FC and the area value MJ by using each value as an exponent of the base e (giving a length power value, a non-overlap power value and an area power value), multiplies each power value by its corresponding preset proportionality coefficient, sums the results, and takes the arithmetic square root of the sum, which is marked as the movement information YD; here e is a mathematical constant, and the preset proportionality coefficients corresponding to the length power value, the non-overlap power value and the area power value are d1, d2 and d3 respectively, where d1 + d2 + d3 = 1, 0 < d1 < d2 < d3 < 1, d1 = 0.21, d2 = 0.35 and d3 = 0.44;
The information acquisition module acquires all preset monitoring joints on the humanoid robot and marks them in turn as monitoring objects j, j = 1, …, m, where m is a positive integer, j is the number of any one monitoring joint and m is the total number of monitoring joints; it acquires the motion trail of monitoring object j during the training operation and marks it as the actual motion trail YGj; it acquires the difference between the length of the actual motion trail YGj and the length of the preset standard motion trail YGb, and marks it as a long difference value CCj; it acquires the maximum long difference value and marks it as a longest value ZC; it acquires the sum of all long difference values CCj and marks it as a total length value TC; and it acquires the product of the longest value ZC and the total length value TC, and marks the product as the motion information MO;
The information acquisition module sends the time information SJ, the movement information YD and the motion information MO to the information processing module.
Example 7:
Based on any of the above embodiments, embodiment 7 of the present invention is an information processing module used for obtaining the training anomaly coefficient XLI; the specific process is as follows:
the information processing module carries out a quantization operation on the numerical values of the time information SJ, the movement information YD and the motion information MO according to a preset information processing function to obtain the training anomaly coefficient XLI, and sends the training anomaly coefficient XLI to the classification judging module;
wherein the information processing function (which appears only as a formula image in the source) combines the three values using the following parameters:
kappa is a preset error adjustment factor, taking kappa = 0.938;
pi and e are both mathematical constants;
x1, x2 and x3 are preset weight factors corresponding to the time information SJ, the movement information YD and the motion information MO respectively, satisfying x2 > x3 > x1 > 1.594; here x1 = 1.88, x2 = 3.07 and x3 = 2.41.
Example 8:
Based on any of the above embodiments, embodiment 8 of the present invention is a classification judging module used for classifying the training scenes i and generating a training unqualified instruction or a training qualified instruction; the specific process is as follows:
the classification judging module compares the training anomaly coefficient XLI with the preset training anomaly threshold XLy, with the following results:
if the training anomaly coefficient XLI is greater than or equal to the training anomaly threshold XLy, the training scene i corresponding to the training anomaly coefficient XLI is marked as a training abnormal scene and sent to the result display module;
if the training anomaly coefficient XLI is less than the training anomaly threshold XLy, the training scene i corresponding to the training anomaly coefficient XLI is marked as a training qualified scene;
after the humanoid robot completes the training operation for all training scenes i, the classification judging module acquires the ratio of the number of training abnormal scenes to the total number of training scenes i, and marks the ratio as the abnormal ratio YB;
after the humanoid robot completes the training operation for all training scenes i, the classification judging module acquires the average value of all training anomaly coefficients XLI, and marks the average value as the training average value XJ;
the classification judging module obtains the product of the abnormal ratio YB and the training average value XJ and marks the product as the training evaluation value XP;
the classification judging module compares the training evaluation value XP with the preset training evaluation threshold XPy, with the following results:
if the training evaluation value XP is greater than or equal to the training evaluation threshold XPy, a training unqualified instruction is generated and sent to the result display module;
if the training evaluation value XP is less than the training evaluation threshold XPy, a training qualified instruction is generated and sent to the result display module.
Example 9:
Based on any of the above embodiments, embodiment 9 of the present invention is a result display module whose function is to display the results; the specific process is as follows:
the result display module displays the training abnormal scenes;
after receiving a training unqualified instruction, the result display module displays the words 'training unqualified';
after receiving a training qualified instruction, the result display module displays the words 'training qualified'.
Based on the above embodiments 1-9, the working principle of the present invention is as follows:
According to the extended reality-based humanoid robot multimodal training method and system, a training scene is built by the scene construction module, and a user selects the training scene through the multi-modal training platform, generating a training control instruction at the same time. After receiving the training control instruction, the humanoid robot performs the training operation according to the training scene and generates an information acquisition instruction. After acquiring the information acquisition instruction, the information acquisition module acquires the training monitoring information, which comprises the time information, the movement information and the motion information. The information processing module obtains the training anomaly coefficient from the training monitoring information; the classification judging module classifies each training scene as a training abnormal scene or a training qualified scene according to the training anomaly coefficient and acquires the training evaluation information, which comprises the abnormal ratio and the training average value. A training unqualified instruction or a training qualified instruction is generated according to the training evaluation value, and the result display module displays the training abnormal scenes together with the training unqualified or training qualified instruction. The system performs the training operation according to the training scenes and collects and analyzes data from the training operation process to acquire the training monitoring information, from which the training anomaly coefficient is obtained: the larger the training anomaly coefficient, the higher the degree of abnormality. Training scenes with abnormal training operation are screened out, then all training scenes are evaluated as a whole to obtain the training evaluation value, which comprehensively measures the overall degree of abnormality: the larger the training evaluation value, the higher the overall degree of abnormality. By constructing a multimodal virtual training environment with extended reality technology, the invention diversifies and customizes the robot's training scenes, comprehensively reflects the humanoid robot's training operation state, improves the accuracy and completeness of anomaly monitoring, discovers abnormal training operations in time, and realizes an overall evaluation of the multimodal virtual training environment, thereby improving training efficiency and accuracy, comprehensively raising the humanoid robot's training capability, and providing powerful support for the wide application of robots in education, medical treatment, service and many other fields.
It should be further noted that the above formulas were obtained by a person skilled in the art by collecting a large amount of data and performing software simulation based on working experience, with a formula close to the true value being selected; the coefficients in the formulas are set by a person skilled in the art according to the actual situation.
In the description of the present specification, the descriptions of the terms "one embodiment," "example," "specific example," and the like, mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The foregoing is merely illustrative and explanatory of the invention; various modifications or additions may be made to the described embodiments, or they may be replaced in a similar manner by those skilled in the art, without departing from the scope of the invention as defined in the claims.

Claims (8)

1. An extended reality-based multimodal training system for a humanoid robot, comprising:
the information acquisition module is used for acquiring training monitoring information after acquiring an information acquisition instruction and sending the training monitoring information to the information processing module, wherein the training monitoring information comprises time information SJ, movement information YD and motion information MO;
the specific process by which the information acquisition module acquires the training monitoring information is as follows:
After acquiring the information acquisition instruction, acquiring the generation time of the training control instruction and the time at which the humanoid robot receives the training control instruction; acquiring the time difference between the two and marking it as a time receiving value JS; acquiring the time difference between the time at which the humanoid robot receives the training control instruction and the time at which it begins the training operation, and marking it as an operation value CS; performing quantization processing on the time receiving value JS and the operation value CS by multiplying each by its corresponding preset proportionality coefficient and summing the results, and marking the sum as the time information SJ, wherein the preset proportionality coefficients corresponding to the time receiving value JS and the operation value CS are s1 and s2 respectively;
Acquiring the moving track of the humanoid robot during the training operation and marking it as the actual moving track; acquiring the difference between the length of the actual moving track and the length of a preset standard moving track, and marking it as a length value CD; overlapping the starting points of the actual moving track and the preset standard moving track, acquiring the length of the non-overlapping portion of the two tracks, and marking it as a non-overlap value FC; with the starting points overlapped, connecting the end points of the two tracks with a line segment, acquiring the area of the region enclosed between the actual moving track and the preset standard moving track, and marking it as an area value MJ; performing quantization processing on the length value CD, the non-overlap value FC and the area value MJ by using each value as an exponent of the base e to obtain a length power value, a non-overlap power value and an area power value, multiplying each power value by its corresponding preset proportionality coefficient, summing the results, and taking the arithmetic square root of the sum, which is marked as the movement information YD, wherein e is a mathematical constant and the preset proportionality coefficients corresponding to the length power value, the non-overlap power value and the area power value are d1, d2 and d3 respectively;
Acquiring all preset monitoring joints on the humanoid robot and marking them in turn as monitoring objects j, j = 1, …, m, where m is a positive integer, j is the number of any one monitoring joint and m is the total number of monitoring joints; acquiring the motion trail of monitoring object j during the training operation and marking it as the actual motion trail YGj; acquiring the difference between the length of the actual motion trail YGj and the length of the preset standard motion trail YGb, and marking it as a long difference value CCj; acquiring the maximum long difference value and marking it as a longest value ZC; acquiring the sum of all long difference values CCj and marking it as a total length value TC; acquiring the product of the longest value ZC and the total length value TC, and marking the product as the motion information MO;
Transmitting the time information SJ, the movement information YD and the motion information MO to the information processing module;
The information processing module is used for obtaining the training anomaly coefficient XLI according to the training monitoring information and sending the training anomaly coefficient XLI to the classification judging module;
the specific process by which the information processing module obtains the training anomaly coefficient XLI is as follows:
carrying out a quantization operation on the numerical values of the time information SJ, the movement information YD and the motion information MO according to a preset information processing function to obtain the training anomaly coefficient XLI;
wherein the information processing function (which appears only as a formula image in the source) combines the three values using the following parameters:
kappa is a preset error adjustment factor, taking kappa = 0.938;
pi and e are both mathematical constants;
x1, x2 and x3 are preset weight factors corresponding to the time information SJ, the movement information YD and the motion information MO respectively;
the classification judging module is used for classifying the training scene i as a training abnormal scene or a training qualified scene according to the training anomaly coefficient XLI, sending the training abnormal scenes to the result display module, acquiring training evaluation information, acquiring a training evaluation value XP according to the training evaluation information, generating a training unqualified instruction or a training qualified instruction according to the training evaluation value XP, and sending the generated instruction to the result display module, wherein the training evaluation information comprises an abnormal ratio YB and a training average value XJ;
and the result display module is used for displaying the training abnormal scenes, the training unqualified instruction and the training qualified instruction.
2. The extended reality-based multimodal training system for a humanoid robot according to claim 1, wherein the specific process by which the classification judging module classifies the training scene i is as follows:
comparing the training anomaly coefficient XLI with a preset training anomaly threshold XLy, with the following results:
if the training anomaly coefficient XLI is greater than or equal to the training anomaly threshold XLy, marking the training scene i corresponding to the training anomaly coefficient XLI as a training abnormal scene, and sending the training abnormal scene to the result display module;
if the training anomaly coefficient XLI is less than the training anomaly threshold XLy, marking the training scene i corresponding to the training anomaly coefficient XLI as a training qualified scene.
3. The extended reality-based multimodal training system for a humanoid robot according to claim 1, wherein the specific process by which the classification judging module generates the training unqualified instruction or the training qualified instruction is as follows:
After the humanoid robot finishes the training operation for all training scenes i, acquiring the ratio of the number of training abnormal scenes to the total number of training scenes i, and marking the ratio as the abnormal ratio YB;
after the humanoid robot finishes the training operation for all training scenes i, acquiring the average value of all training anomaly coefficients XLI, and marking the average value as the training average value XJ;
obtaining the product of the abnormal ratio YB and the training average value XJ, and marking the product as the training evaluation value XP;
comparing the training evaluation value XP with a preset training evaluation threshold XPy, with the following results:
if the training evaluation value XP is greater than or equal to the training evaluation threshold XPy, generating a training unqualified instruction and sending the training unqualified instruction to the result display module;
if the training evaluation value XP is less than the training evaluation threshold XPy, generating a training qualified instruction and sending the training qualified instruction to the result display module.
4. The extended reality-based multimodal training system for a humanoid robot of claim 1, further comprising:
the scene construction module is used for constructing a training scene i and sending the training scene i to the multi-modal training platform.
5. The extended reality-based multimodal training system for a humanoid robot of claim 4, wherein the scene construction module constructs the training scene i as follows:
A plurality of highly realistic virtual training environments are constructed using extended reality technology and are labeled in turn as training scenes i, i = 1, …, n, where n is a positive integer, i is the number of any one virtual training environment and n is the total number of virtual training environments; the training scenes i are sent to the multi-modal training platform.
6. The extended reality-based multimodal training system for a humanoid robot of claim 1, further comprising:
The multi-modal training platform is used for a user to select a training scene i, generate a training control instruction at the same time and send the training control instruction to the humanoid robot.
7. The extended reality-based multimodal training system for a humanoid robot of claim 1, further comprising:
and the humanoid robot is used for performing the training operation according to the training scene i after receiving the training control instruction, generating an information acquisition instruction at the same time, and sending the information acquisition instruction to the information acquisition module.
8. A humanoid robot multimodal training method based on extended reality, characterized by comprising the following steps:
Step one, the scene construction module constructs a training scene i and sends the training scene i to the multi-modal training platform;
Step two, a user selects a training scene i using the multi-modal training platform, generates a training control instruction at the same time, and sends the training control instruction to the humanoid robot;
Step three, the humanoid robot performs the training operation according to the training scene i after receiving the training control instruction, generates an information acquisition instruction at the same time, and sends the information acquisition instruction to the information acquisition module;
Step four, the information acquisition module acquires the training monitoring information after acquiring the information acquisition instruction, wherein the training monitoring information comprises the time information SJ, the movement information YD and the motion information MO, and sends the training monitoring information to the information processing module;
the specific process by which the information acquisition module acquires the training monitoring information is as follows:
After acquiring the information acquisition instruction, acquiring the generation time of the training control instruction and the time at which the humanoid robot receives the training control instruction; acquiring the time difference between the two and marking it as a time receiving value JS; acquiring the time difference between the time at which the humanoid robot receives the training control instruction and the time at which it begins the training operation, and marking it as an operation value CS; performing quantization processing on the time receiving value JS and the operation value CS by multiplying each by its corresponding preset proportionality coefficient and summing the results, and marking the sum as the time information SJ, wherein the preset proportionality coefficients corresponding to the time receiving value JS and the operation value CS are s1 and s2 respectively;
Acquiring the moving track of the humanoid robot during the training operation and marking it as the actual moving track; acquiring the difference between the length of the actual moving track and the length of a preset standard moving track, and marking it as a length value CD; overlapping the starting points of the actual moving track and the preset standard moving track, acquiring the length of the non-overlapping portion of the two tracks, and marking it as a non-overlap value FC; with the starting points overlapped, connecting the end points of the two tracks with a line segment, acquiring the area of the region enclosed between the actual moving track and the preset standard moving track, and marking it as an area value MJ; performing quantization processing on the length value CD, the non-overlap value FC and the area value MJ by using each value as an exponent of the base e to obtain a length power value, a non-overlap power value and an area power value, multiplying each power value by its corresponding preset proportionality coefficient, summing the results, and taking the arithmetic square root of the sum, which is marked as the movement information YD, wherein e is a mathematical constant and the preset proportionality coefficients corresponding to the length power value, the non-overlap power value and the area power value are d1, d2 and d3 respectively;
Acquiring all preset monitoring joints on the humanoid robot and marking them in turn as monitoring objects j, j = 1, …, m, where m is a positive integer, j is the number of any one monitoring joint and m is the total number of monitoring joints; acquiring the motion trail of monitoring object j during the training operation and marking it as the actual motion trail YGj; acquiring the difference between the length of the actual motion trail YGj and the length of the preset standard motion trail YGb, and marking it as a long difference value CCj; acquiring the maximum long difference value and marking it as a longest value ZC; acquiring the sum of all long difference values CCj and marking it as a total length value TC; acquiring the product of the longest value ZC and the total length value TC, and marking the product as the motion information MO;
Step five, the information processing module obtains the training anomaly coefficient XLI according to the training monitoring information and sends the training anomaly coefficient XLI to the classification judging module;
the specific process by which the information processing module obtains the training anomaly coefficient XLI is as follows:
carrying out a quantization operation on the numerical values of the time information SJ, the movement information YD and the motion information MO according to a preset information processing function to obtain the training anomaly coefficient XLI;
wherein the information processing function (which appears only as a formula image in the source) combines the three values using the following parameters:
kappa is a preset error adjustment factor, taking kappa = 0.938;
pi and e are both mathematical constants;
x1, x2 and x3 are preset weight factors corresponding to the time information SJ, the movement information YD and the motion information MO respectively;
Step six, the classification judging module classifies each training scene i as a training abnormal scene or a training qualified scene according to the training anomaly coefficient XLI, sends the training abnormal scenes to the result display module, and acquires the training evaluation information, wherein the training evaluation information comprises the abnormal ratio YB and the training average value XJ; it acquires the training evaluation value XP according to the training evaluation information, generates a training unqualified instruction or a training qualified instruction according to the training evaluation value XP, and sends the generated instruction to the result display module;
Step seven, the result display module displays the training abnormal scenes, the training unqualified instruction and the training qualified instruction.
CN202411297788.7A 2024-09-18 2024-09-18 Humanoid robot multimodal training method and system based on extended reality Active CN118809634B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411297788.7A CN118809634B (en) 2024-09-18 2024-09-18 Humanoid robot multimodal training method and system based on extended reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411297788.7A CN118809634B (en) 2024-09-18 2024-09-18 Humanoid robot multimodal training method and system based on extended reality

Publications (2)

Publication Number Publication Date
CN118809634A CN118809634A (en) 2024-10-22
CN118809634B (en) 2024-12-03

Family

ID=93071480

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411297788.7A Active CN118809634B (en) 2024-09-18 2024-09-18 Human robot multi-mode training method and system based on augmented reality

Country Status (1)

Country Link
CN (1) CN118809634B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102716000A (en) * 2012-06-29 2012-10-10 中国科学院自动化研究所 Seated horizontal type lower limb rehabilitation robot and corresponding assisting training control method
CN111890351A (en) * 2020-06-12 2020-11-06 深圳先进技术研究院 Robot and control method thereof, and computer-readable storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9566710B2 (en) * 2011-06-02 2017-02-14 Brain Corporation Apparatus and methods for operating robotic devices using selective state space training
US8996177B2 (en) * 2013-03-15 2015-03-31 Brain Corporation Robotic training apparatus and methods
JP6457473B2 (en) * 2016-12-16 2019-01-23 ファナック株式会社 Machine learning apparatus, robot system, and machine learning method for learning operation of robot and laser scanner
CN115936060B (en) * 2022-12-28 2024-03-26 四川物通科技有限公司 Substation capacitance temperature early warning method based on depth deterministic strategy gradient
CN117033598A (en) * 2023-08-16 2023-11-10 Oppo广东移动通信有限公司 Information processing method, device, robot and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102716000A (en) * 2012-06-29 2012-10-10 中国科学院自动化研究所 Seated horizontal type lower limb rehabilitation robot and corresponding assisting training control method
CN111890351A (en) * 2020-06-12 2020-11-06 深圳先进技术研究院 Robot and control method thereof, and computer-readable storage medium

Also Published As

Publication number Publication date
CN118809634A (en) 2024-10-22


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant