CN118544358A - Robot motion control method and device based on image recognition processing - Google Patents
Robot motion control method and device based on image recognition processing
- Publication number
- CN118544358A CN118544358A CN202410993058.4A CN202410993058A CN118544358A CN 118544358 A CN118544358 A CN 118544358A CN 202410993058 A CN202410993058 A CN 202410993058A CN 118544358 A CN118544358 A CN 118544358A
- Authority
- CN
- China
- Prior art keywords
- grabbing
- target
- test
- basic
- mechanical arm
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Manipulator (AREA)
Abstract
The invention discloses a robot motion control method and device based on image recognition processing, belonging to the technical field of data recognition processing. The method and device address two technical problems of existing schemes: the poor autonomous-learning effect of mechanical arm motion control and the poor effect of control-data expansion and optimization. The method comprises: scanning and identifying basic information of a target to be grabbed by a mechanical arm, and processing it to obtain a grabbing feature sequence corresponding to the target; performing data analysis on the grabbing feature sequence to obtain a corresponding grabbing scheme and grabbing implementation parameters; performing position image recognition on the target to be grabbed, and analyzing it to obtain the basic position features corresponding to the target; controlling the mechanical arm, using the basic grabbing implementation parameters obtained through analysis, to perform a stability test on the target to be grabbed; and, according to the test result, dynamically managing and controlling subsequent grabbing of the target and dynamically managing the basic grabbing implementation parameters to which the target belongs.
Description
Technical Field
The invention relates to the technical field of data identification processing, in particular to a robot motion control method and device based on image identification processing.
Background
Robot motion control (typically exemplified by a mechanical arm) is the process of performing a specific task by precisely adjusting the angle or position of each joint so that a predetermined trajectory and speed are followed.
Existing mechanical arm motion control schemes have certain shortcomings in practice. Faced with transfer targets of different specifications and weights, they cannot autonomously perform data analysis to implement different grabbing and transfer schemes; likewise, for targets with incomplete control parameters, they cannot actively carry out stability testing and management of the grabbing state. As a result, the autonomous-learning effect of mechanical arm motion control is poor, and the effect of control-data expansion and optimization is poor.
Disclosure of Invention
The invention aims to provide a robot motion control method and device based on image recognition processing, which are used for solving the technical problems of poor autonomous learning effect of mechanical arm motion control and poor effect of control data expansion optimization in the existing scheme.
The aim of the invention can be achieved by the following technical scheme:
A robot motion control method based on image recognition processing comprises the following steps:
the method comprises the steps of scanning and identifying basic information of a target to be grabbed by a mechanical arm through the internet of things technology, and processing and obtaining a grabbing feature sequence corresponding to the target;
carrying out data analysis on the grabbing feature sequences to obtain conventional grabbing implementation parameters or basic grabbing implementation parameters corresponding to different grabbing schemes;
Carrying out position image recognition on a target to be grabbed through a camera device arranged on the mechanical arm, and analyzing and obtaining basic position characteristics corresponding to the target;
The method comprises the steps of utilizing basic grabbing implementation parameters obtained through analysis to control a mechanical arm to grab a target to be grabbed, carrying out grabbing state stability test on the grabbed target, carrying out image processing identification on an obtained test image, and carrying out test analysis by combining basic position characteristics;
And dynamically controlling the subsequent grabbing of the grabbing target according to the test result, and dynamically managing the basic grabbing implementation parameters to which the grabbing target belongs.
Preferably, the basic information contains the number, length, width, height, weight, protection type and protection identifier of the target;
the values of the length, the weight and the protection identifier are extracted and marked as a first grabbing feature, a second grabbing feature and a third grabbing feature respectively;
and the marked grabbing features are sequenced and combined to obtain a grabbing feature sequence.
Preferably, different elements in the capture feature sequence are sequentially acquired and marked as ai, i=1, 2,3; sequentially inputting the elements obtained by the marking into a feature recognition model for data analysis and outputting a feature recognition value; wherein the feature identification value comprises a value of 0 or 1;
The expression of the feature recognition model is: f(ai) = 0 when ai falls within the standard grabbing feature range ui, and f(ai) = 1 otherwise; in the formula, ui is the standard grabbing feature range corresponding to each grabbing feature;
And arranging and combining the acquired plurality of characteristic recognition values according to the output sequence to obtain a characteristic recognition sequence.
Preferably, when determining the corresponding capturing implementation parameters according to the feature recognition sequence, acquiring all elements with the value of 0 and the positions of the elements corresponding to the elements in the feature recognition sequence;
If the values of all elements in the feature recognition sequence are 0, generating a conventional instruction, and acquiring a conventional clamping distance associated with a standard grabbing feature range to which a first element belongs and a conventional clamping force associated with a standard grabbing feature range to which a third element belongs according to the conventional instruction;
The numerical sequences of the conventional clamping distance and the conventional clamping force are combined to obtain conventional grabbing implementation parameters;
If the value of the second element is 0 and the total number of the elements with the value of 0 is less than three in the feature recognition sequence, generating a verification instruction, setting the value of the first element as a basic clamping distance according to the verification instruction, and acquiring a conventional clamping force associated with a standard grabbing feature range to which the second element belongs and setting the conventional clamping force as a basic clamping force;
And (3) sequencing and combining the numerical values of the basic clamping distance and the basic clamping force to obtain basic grabbing implementation parameters.
Preferably, position coordinates of a target to be grabbed relative to the mechanical arm are determined, when the clamping mechanism of the mechanical arm is contacted with two sides of the target, a positioning instruction is generated, and a front image of the contact of the clamping mechanism of the mechanical arm and the target is obtained according to the positioning instruction;
performing image processing and feature recognition on the front image, acquiring a support for supporting a target in the front image and constructing a support coordinate system;
And acquiring two first contact points and corresponding first contact point coordinates of the clamping mechanism of the mechanical arm, which are in contact with the target, according to the support coordinate system, and acquiring a first test midpoint and corresponding first test midpoint coordinates of the contact between the target and the support.
Preferably, the straight-line distances between the two first contact points and the first test midpoint are calculated according to the two first contact point coordinates and the first test midpoint coordinates and marked as a first verification value Y(h), and the vertical distances between the two first contact points and the first test midpoint are calculated and marked as a second verification value E(h); the first verification value and the second verification value form a basic position feature.
Preferably, traversing basic grabbing implementation parameters, controlling clamping mechanisms of the mechanical arm to clamp the target from two sides of the target according to basic clamping distances and basic clamping forces obtained through traversing, controlling the mechanical arm to lift the clamped target by utilizing a preset lifting height, and controlling the mechanical arm to pause operation after the target is in a suspended state;
shooting a suspended target to obtain a corresponding test image, processing the test image and identifying the characteristics, obtaining a second contact point and a second contact point coordinate corresponding to two contact points of a clamping mechanism of the lifted mechanical arm and the target, and obtaining a second test midpoint and a corresponding second test midpoint coordinate of the lower end of the front surface of the lifted target.
Preferably, the straight line distance between the two second contact points and the second test midpoint is calculated according to the coordinates of the two second contact points and the coordinates of the second test midpoint and marked as a first test value Y (c), and the vertical distance between the two second contact points and the second test midpoint is calculated and marked as a second test value E (c);
when the first test value and the second test value are subjected to test analysis, the implementation stability μ corresponding to the mechanical arm clamping the target is obtained through a weighted formula calculation over the verification values Y(h), E(h) and the test values Y(c), E(c); in the formula, w1 and w2 are proportionality coefficients greater than zero with 2×w1=w2, and h is the preset lifting height;
If μ is not equal to w1+w2, a control abnormality label is generated, the calculated implementation stability is input into a control abnormality recognition model for data analysis according to the control abnormality label, and a corresponding implementation stability flag is output; wherein the output implementation stability flag takes the value 1 or 2.
Preferably, the mechanical arm is controlled to continuously clamp the target through the preset enhancement force according to the implementation stability mark with the value of 1 and work according to the preset control flow, and the conventional clamping force associated with the corresponding target is replaced and updated according to the preset enhancement force according to the implementation stability mark with the value of 1;
And controlling the mechanical arm to descend the clamped target to the original position according to the implementation stability mark with the value of 2, respectively adjusting control parameters of the basic clamping distance and the basic clamping force, controlling the mechanical arm to repeatedly grab the target and perform stability test on the grabbed target in a grabbing state by utilizing the adjusted control parameters until the implementation stability mark of test analysis is 0 or 1, and performing data replacement update on the adjusted effective control parameters.
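The adjust-and-retest loop described above can be sketched as follows; the adjustment step sizes, the round limit, and the grab/test and storage callbacks are illustrative assumptions not fixed by the source.

```python
def adjust_until_stable(params, grab_and_test, update_store,
                        dist_step=0.95, force_step=1.05, max_rounds=10):
    """Repeat grab + stability test, nudging the basic clamping
    distance and force, until the implementation stability flag is
    0 or 1, then replace and update the stored control parameters.

    params: dict with "distance" and "force"; grab_and_test returns
    the stability flag (0, 1 or 2). Step sizes and the round limit
    are illustrative assumptions."""
    for _ in range(max_rounds):
        flag = grab_and_test(params["distance"], params["force"])
        if flag in (0, 1):
            # effective control parameters: perform data replacement update
            update_store(params)
            return flag
        # flag == 2: target lowered back; adjust both control parameters
        params["distance"] *= dist_step
        params["force"] *= force_step
    return 2
```

A caller would pass in the real grab-and-lift routine as `grab_and_test` and a parameter store's update method as `update_store`.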
The invention also discloses a robot motion control device based on image recognition processing, which comprises:
The target information acquisition processing module is used for scanning and identifying basic information of a target to be grabbed by the mechanical arm through the Internet of things technology and processing and acquiring a grabbing feature sequence corresponding to the target; carrying out data analysis on the grabbing feature sequences to obtain conventional grabbing implementation parameters or basic grabbing implementation parameters corresponding to different grabbing schemes; carrying out position image recognition on a target to be grabbed through a camera device arranged on the mechanical arm, and analyzing and obtaining basic position characteristics corresponding to the target;
The object grabbing test analysis module is used for grabbing an object to be grabbed by the mechanical arm under the control of basic grabbing implementation parameters obtained through analysis, carrying out stability test on the grabbed object in a grabbing state, carrying out image processing identification on an obtained test image, and carrying out test analysis by combining with basic position characteristics;
and the target grabbing test management module is used for dynamically managing and controlling the follow-up grabbing of the grabbing targets according to the test result and dynamically managing the basic grabbing implementation parameters to which the grabbing targets belong.
Compared with the prior art, the invention has the beneficial effects that:
According to the invention, the basic information of the target to be grabbed by the mechanical arm is scanned and identified, the grabbing feature sequences corresponding to the target are obtained, the grabbing features of different aspects of the target to be grabbed are subjected to data analysis according to the grabbing feature sequences, so that the corresponding feature identification values are obtained, the different grabbing features can be digitalized, meanwhile, the data support of different dimensions can be provided for the implementation of the subsequent corresponding target grabbing scheme, and the diversity of grabbing feature processing and utilization is improved.
According to the invention, the corresponding grabbing scheme and grabbing implementation parameters are obtained by carrying out data analysis on the grabbing feature sequences, so that the grabbing scheme is selected in a self-adaptive and dynamic manner aiming at different targets, and the self-adaption and flexibility of mechanical arm identification and grabbing are improved.
According to the invention, position image recognition is performed on the target to be grabbed and the corresponding basic position features are obtained by analysis; the mechanical arm is controlled, using the basic grabbing implementation parameters obtained by analysis, to grab the target, and a grabbing-state stability test is performed on the grabbed target. In this way it can be determined whether the grabbing state controlled by the basic grabbing implementation parameters is normal, and the subsequent grabbing parameters of the target can be automatically shared and updated according to the stability test result, improving the autonomous testing and self-perfecting capability of the mechanical arm in grabbing different targets.
According to the invention, the subsequent grabbing of the grabbing target is dynamically controlled according to the test result, and the basic grabbing implementation parameters of the grabbing target are dynamically managed, so that the automatic implementation grabbing test adjustment and the self-optimization updating of grabbing test data of the mechanical arm for targets with incomplete different control parameters are realized, and the automatic learning effect of the motion control of the mechanical arm and the overall effect of the expansion and optimization of control data are improved.
Drawings
The invention is further described below with reference to the accompanying drawings.
Fig. 1 is a block flow diagram of a robot motion control method based on image recognition processing according to the present invention.
Fig. 2 is a block diagram of a robot motion control device based on image recognition processing according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of the invention.
Example 1: as shown in fig. 1, the present invention is a robot motion control method based on image recognition processing, including:
the method comprises the steps of scanning and identifying basic information of a target to be grabbed by a mechanical arm through the internet of things technology, and processing and obtaining a grabbing feature sequence corresponding to the target; comprising the following steps:
The internet of things technology may be radio frequency identification (RFID): identification and information acquisition are performed through a radio frequency tag installed on the outer surface of the target, and the basic information comprises the number, length, width, height, weight, protection type and protection identifier of the target; the targets may be logistics goods of different shapes that are to be transported;
wherein the protection type comprises a hard protection type and a soft protection type;
The hard protection type can be a carton type protection outer package, and the soft protection type can be a plastic type protection outer package; the system is used for providing data support in the aspect of external protection for the dynamic adjustment of the target clamping of different protection types by the subsequent mechanical arm;
The values of the length, the weight and the protection identifier are extracted and marked as a first grabbing feature, a second grabbing feature and a third grabbing feature respectively; the protection identifier digitally represents and distinguishes the different protection types, its specific values are not limited, and it can be customized according to the actual situation;
sequencing and combining the marked plurality of grabbing features to obtain a grabbing feature sequence;
According to the embodiment of the invention, the target to be grabbed by the mechanical arm is scanned and identified to obtain the basic information and the grabbing feature sequence corresponding to the target, so that reliable target digital data support can be provided for automatic identification and analysis of different grabbing schemes implemented by the subsequent mechanical arm.
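The feature-sequence construction described above can be sketched as follows; the field names and data layout are illustrative assumptions, since the source does not fix a data format for the scanned basic information.

```python
def build_grabbing_feature_sequence(basic_info):
    """Extract the first, second and third grabbing features (length,
    weight, protection identifier) from the scanned basic information
    and combine them, in order, into the grabbing feature sequence.
    The dict keys used here are hypothetical."""
    return [basic_info["length"],
            basic_info["weight"],
            basic_info["protection_id"]]
```

The remaining basic-information fields (number, width, height, protection type) stay available for later steps but do not enter the sequence.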
Carrying out data analysis on the grabbing feature sequences to obtain conventional grabbing implementation parameters or basic grabbing implementation parameters corresponding to different grabbing schemes; comprising the following steps:
Sequentially acquiring different elements in the grabbing feature sequence and marking the elements as ai, i=1, 2 and 3;
Sequentially inputting the marked elements into a feature recognition model for data analysis and outputting a feature recognition value; the feature recognition model is constructed on the basis of an isolation forest algorithm combined with all the standard grabbing feature ranges; wherein the feature recognition value is 0 or 1, indicating respectively that the corresponding element does or does not appear in the historical data;
The expression of the feature recognition model is: f(ai) = 0 when ai falls within the standard grabbing feature range ui, and f(ai) = 1 otherwise; in the formula, ui is the standard grabbing feature range corresponding to each grabbing feature, and is determined according to historical grabbing big data corresponding to the same target type;
the obtained feature recognition values are arranged and combined according to the output sequence to obtain a feature recognition sequence;
According to the embodiment of the invention, the corresponding feature recognition value is obtained by carrying out data analysis on the grabbing features of different aspects of the target to be grabbed, so that the different grabbing features can be digitalized, meanwhile, data support with different dimensions can be provided for the implementation of the subsequent corresponding target grabbing scheme, and the diversity of grabbing feature processing and utilization is improved.
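The recognition rule above (0 when a grabbing feature falls inside its standard range ui, 1 otherwise) can be sketched as follows; representing each range as an inclusive (low, high) pair is an assumption.

```python
def feature_recognition(features, standard_ranges):
    """Map each grabbing feature a_i to 0 if it lies inside its
    standard grabbing feature range u_i (i.e. it has appeared in the
    historical grabbing data) and to 1 otherwise; ranges are assumed
    to be inclusive (low, high) pairs."""
    return [0 if low <= a <= high else 1
            for a, (low, high) in zip(features, standard_ranges)]
```

The output list, read in order, is the feature recognition sequence used in the next step.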
When corresponding grabbing implementation parameters are determined according to the feature identification sequence, acquiring all elements with the value of 0 in the feature identification sequence and corresponding element positions;
If the values of all elements in the feature recognition sequence are 0, generating a conventional instruction, and acquiring a conventional clamping distance associated with a standard grabbing feature range to which a first element belongs and a conventional clamping force associated with a standard grabbing feature range to which a third element belongs according to the conventional instruction;
It should be noted that the conventional instruction indicates that the mechanical arm has grabbed targets with the same parameters in its historical grabbing behavior, so that grabbing of the target to be grabbed can be controlled with the historically implemented grabbing parameters; in addition, each standard grabbing feature range is pre-associated with a corresponding conventional clamping distance or conventional clamping force, determined according to the grabbing big data of different historical targets;
The numerical sequences of the conventional clamping distance and the conventional clamping force are combined to obtain conventional grabbing implementation parameters;
If the value of the second element is 0 and the total number of the elements with the value of 0 is less than three in the feature recognition sequence, generating a verification instruction, setting the value of the first element as a basic clamping distance according to the verification instruction, and acquiring a conventional clamping force associated with a standard grabbing feature range to which the second element belongs and setting the conventional clamping force as a basic clamping force;
It is noted that the standard grabbing feature range to which the second element belongs is also associated with a conventional clamping force, and under the condition that the first element and the second element are effective, the conventional clamping force associated with the standard grabbing feature range to which the third element belongs is higher in priority than the conventional clamping force associated with the standard grabbing feature range to which the second element belongs; otherwise, the conventional clamping force associated with the standard grabbing feature range to which the second element belongs is used as the reference;
The numerical values of the basic clamping distance and the basic clamping force are sequenced and combined to obtain basic grabbing implementation parameters;
In the embodiment of the invention, the corresponding grabbing scheme and grabbing implementation parameters are obtained by carrying out data analysis on the grabbing feature sequences, so that the grabbing scheme is selected in a self-adaptive and dynamic manner aiming at different targets, and the self-adaption and flexibility of the mechanical arm identification and grabbing are improved.
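The branch logic for choosing between conventional and basic grabbing implementation parameters can be sketched as follows; the lookup arguments stand in for the pre-associated conventional values drawn from historical grabbing big data, and their representation here is an assumption.

```python
def determine_grabbing_parameters(recog_seq, features, assoc_distance,
                                  assoc_force_by_element):
    """Select grabbing implementation parameters from the feature
    recognition sequence (values 0/1).

    assoc_distance: conventional clamping distance associated with the
    first element's standard range; assoc_force_by_element: mapping of
    element index (2 or 3) to its associated conventional clamping
    force. Both stand in for historical-data lookups."""
    zeros = sum(1 for v in recog_seq if v == 0)
    if zeros == 3:
        # conventional instruction: reuse historically implemented values,
        # taking the force associated with the third element's range
        return ("conventional", assoc_distance, assoc_force_by_element[3])
    if recog_seq[1] == 0 and zeros < 3:
        # verification instruction: the first feature's own value becomes
        # the basic clamping distance; the force associated with the
        # second element's range becomes the basic clamping force
        return ("basic", features[0], assoc_force_by_element[2])
    return None
```

The "conventional" tuple corresponds to conventional grabbing implementation parameters and the "basic" tuple to basic grabbing implementation parameters.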
Carrying out position image recognition on a target to be grabbed through a camera device arranged on the mechanical arm, and analyzing and obtaining basic position characteristics corresponding to the target; comprising the following steps:
Determining the position coordinates of the target to be grabbed relative to the mechanical arm, the determination of the position coordinates being realized on the basis of a coordinate system constructed with the existing mechanical arm and existing positioning technology; generating a positioning instruction when the clamping mechanism of the mechanical arm contacts the two sides of the target, and obtaining a front image of the clamping mechanism of the mechanical arm in contact with the target according to the positioning instruction; the clamping mechanism is used for clamping the target from both sides;
Performing image processing and feature recognition on the front image, acquiring a support for supporting a target in the front image and constructing a support coordinate system; the support coordinate system can set the middle point of the front surface of the support as an origin, and establish the support coordinate system according to a preset coordinate interval and a preset coordinate direction, wherein the preset coordinate interval and the preset coordinate direction are determined according to an actual application scene;
it should be noted that, the image processing and feature recognition of the front image are all conventional technical means, and specific steps are not described here; the support may be a conveyor belt for intermittent transport, or other support structure;
Acquiring two first contact points and corresponding first contact point coordinates of a clamping mechanism of the mechanical arm, which are in contact with a target, according to a support coordinate system, and acquiring a first test midpoint and corresponding first test midpoint coordinates of contact between the target and a support; the contact point is the midpoint of a contact surface of the clamping mechanism of the mechanical arm, which is contacted with the two side surfaces of the target;
Respectively calculating the linear distance between the two first contact points and the first test midpoint according to the two first contact point coordinates and the first test midpoint coordinate and marking the linear distance as a first verification value Y (h), and calculating the vertical distance between the two first contact points and the first test midpoint coordinate and marking the linear distance as a second verification value E (h);
The first verification value and the second verification value form a basic position feature;
In the embodiment of the invention, the position image recognition is carried out on the target to be grabbed, the basic position characteristics corresponding to the target are obtained through analysis, so that the relative state of the target before grabbing can be digitally represented, reliable data support can be provided for the grabbing state analysis of the target after grabbing, and the diversity of the data processing of different dimensional characteristics of the target when the target is stationary is improved.
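The computation of the basic position feature from image coordinates can be sketched as follows; the 2-D coordinate representation and the summation of the two per-contact-point distances into a single value are assumptions, since the source leaves the combination unspecified.

```python
import math

def basic_position_feature(contact1, contact2, midpoint):
    """Compute (Y(h), E(h)) from the two first contact points and the
    first test midpoint, all given as (x, y) in the support coordinate
    system. Summing the two per-point distances into one value is an
    assumed convention."""
    y_h = math.dist(contact1, midpoint) + math.dist(contact2, midpoint)
    e_h = abs(contact1[1] - midpoint[1]) + abs(contact2[1] - midpoint[1])
    return y_h, e_h
```

The same computation, applied to the second contact points and second test midpoint of the lifted target, would yield the test values Y(c) and E(c).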
The method comprises the steps of utilizing basic grabbing implementation parameters obtained through analysis to control a mechanical arm to grab a target to be grabbed, carrying out grabbing state stability test on the grabbed target, carrying out image processing identification on an obtained test image, and carrying out test analysis by combining basic position characteristics; comprising the following steps:
Traversing the basic grabbing implementation parameters, controlling the clamping mechanisms of the mechanical arm to clamp the target from both sides according to the basic clamping distance and basic clamping force obtained by the traversal, controlling the mechanical arm to lift the clamped target by a preset lifting height, and controlling the mechanical arm to pause operation once the target is suspended; the specific value of the preset lifting height can be determined according to the target height, for example 1.5 times the target height;
Shooting the suspended target to obtain a corresponding test image, the shooting direction being the same as that of the earlier front image; processing the test image and performing feature recognition with the same technical scheme as the corresponding processing of the front image; obtaining the two second contact points where the clamping mechanism of the lifted mechanical arm contacts the target and the corresponding second contact point coordinates, and obtaining the second test midpoint at the lower end of the front surface of the lifted target and the corresponding second test midpoint coordinates;
Respectively calculating the linear distance between the two second contact points and the second test midpoint according to the two second contact point coordinates and the second test midpoint coordinate and marking the linear distance as a first test value Y (c), and calculating the vertical distance between the two second contact points and the second test midpoint and marking the linear distance as a second test value E (c);
when the first test value and the second test value are subjected to test analysis, the implementation stability μ corresponding to the mechanical arm clamping the target is obtained through a weighted formula calculation over the verification values Y(h), E(h) and the test values Y(c), E(c); in the formula, w1 and w2 are proportionality coefficients greater than zero with 2×w1=w2, and h is the preset lifting height;
If μ=w1+w2, a control normal label is generated, and the implementation stability mark associated with it is set to 0;
If μ≠w1+w2, a control abnormality label is generated; according to the control abnormality label, the calculated implementation stability is input into a control abnormality recognition model for data analysis, and the corresponding implementation stability mark is output; the control abnormality recognition model is constructed based on the isolation forest algorithm and the standard grabbing design requirement data of the target;
The output implementation stability mark comprises a value of 1 or 2, which indicates that the corresponding test result is slightly abnormal or severely abnormal, respectively; specifically, the corresponding target slides slightly or severely after being grabbed;
The expression of the control abnormality recognition model is defined by a corresponding formula.
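Since the μ formula itself is not reproduced in the text, the sketch below assumes a hypothetical form in which μ equals w1+w2 exactly when the post-lift distances Y(c), E(c) match the pre-lift verification values Y(h), E(h); the simple deviation threshold is only a stand-in for the isolation-forest-based recognition model, whose expression is likewise not given:

```python
def implementation_stability(y_c, y_h, e_c, e_h, w1=1.0, w2=2.0):
    # Hypothetical form: ratios of post-lift to pre-lift distances, chosen
    # so that an unchanged grip yields mu == w1 + w2 (with 2*w1 == w2).
    return w1 * (y_c / y_h) + w2 * (e_c / e_h)

def stability_mark(mu, w1=1.0, w2=2.0, slight_tolerance=0.1):
    # Stand-in for the control abnormality recognition model:
    # 0 = normal, 1 = slight slide, 2 = severe slide.
    deviation = abs(mu - (w1 + w2))
    if deviation == 0:
        return 0
    return 1 if deviation <= slight_tolerance else 2
```

An unchanged grip maps to mark 0, a small drift in the measured distances to mark 1, and a large drift to mark 2, matching the three management branches that follow.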
According to the embodiment of the invention, the mechanical arm is controlled to grab the target by using the basic grabbing implementation parameters obtained through analysis, and a stability test of the grabbing state is carried out on the grabbed target, so that it can be determined whether the grabbing state under the basic grabbing implementation parameters is normal; the subsequent grabbing parameters of the target can then be automatically shared and updated according to the stability test result, improving the autonomous testing and self-perfecting capability of the mechanical arm when grabbing different targets.
According to the test result, carrying out dynamic management and control on the subsequent grabbing of the grabbing target, and carrying out dynamic management on the basic grabbing implementation parameters to which the grabbing target belongs; comprising the following steps:
Controlling the mechanical arm to continuously clamp the target according to the implementation stability mark with the value of 0 and working according to a preset control flow;
The mechanical arm is controlled, according to the implementation stability mark with the value of 1, to continuously clamp the target with a preset enhancement force and work according to the preset control flow, and the conventional clamping force associated with the corresponding target is replaced and updated with the preset enhancement force; the preset enhancement force is determined according to the weight of the target, for example as the median or maximum of all conventional clamping forces of other types of targets with the same weight;
and controlling the mechanical arm, according to the implementation stability mark with the value of 2, to lower the clamped target back to its original position; adjusting the control parameters of the basic clamping distance and the basic clamping force respectively by the preset enhancement force and a preset clamping adjustment distance, the clamping adjustment distance being determined according to the median or minimum of the conventional clamping distances corresponding to all targets with the same length and the same weight, to obtain an adjusted clamping distance and an adjusted clamping force; controlling the mechanical arm to grab the target again with the adjusted clamping distance and adjusted clamping force and performing the stability test on the grabbed target, repeating until the implementation stability mark of the test analysis is 0 or 1; and replacing and updating the basic clamping distance and basic clamping force to which the clamped target belongs with the most recently applied adjusted clamping distance and adjusted clamping force.
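The three branches above can be sketched as a dispatch on the stability mark; the numeric enhancement and adjustment factors below are placeholder assumptions, since the patent derives them from medians or extremes over targets of comparable weight and length:

```python
def manage_after_test(mark, params):
    """Dynamic management sketch, dispatching on the stability mark (0, 1, 2).

    params: dict with the current 'distance' and 'force' control parameters.
    """
    if mark == 0:
        return dict(params)                               # keep clamping as-is
    if mark == 1:
        out = dict(params)
        out["force"] = round(params["force"] * 1.2, 6)    # assumed enhancement
        return out                                        # stored force updated
    # mark == 2: lower the target, adjust both parameters, then re-grab and
    # re-test until the mark becomes 0 or 1 (retry loop omitted here)
    return {"distance": round(params["distance"] * 0.9, 6),
            "force": round(params["force"] * 1.2, 6)}
```

Only the mark-2 branch changes the clamping distance; marks 0 and 1 let the current grab continue, with mark 1 additionally persisting the stronger force.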
According to the embodiment of the invention, the subsequent grabbing of the grabbed target is dynamically controlled according to the test result, and the basic grabbing implementation parameters of the grabbed target are dynamically managed, so that automatic grabbing test adjustment and self-optimizing updating of grabbing test data are realized for targets whose control parameters are incomplete; manual intervention to passively set parameters for testing and adjustment can be avoided, and both the automatic learning effect of the motion control of the mechanical arm and the overall effect of expanding and optimizing the control data are improved.
Embodiment 2: as shown in fig. 2, the invention relates to a robot motion control device based on image recognition processing, which comprises a target information acquisition processing module, a target grabbing test analysis module and a target grabbing test management module;
The target information acquisition processing module is used for scanning and identifying basic information of a target to be grabbed by the mechanical arm through the Internet of things technology and processing and acquiring a grabbing feature sequence corresponding to the target; carrying out data analysis on the grabbing feature sequences to obtain conventional grabbing implementation parameters or basic grabbing implementation parameters corresponding to different grabbing schemes; carrying out position image recognition on a target to be grabbed through a camera device arranged on the mechanical arm, and analyzing and obtaining basic position characteristics corresponding to the target;
The object grabbing test analysis module is used for grabbing an object to be grabbed by the mechanical arm under the control of basic grabbing implementation parameters obtained through analysis, carrying out stability test on the grabbed object in a grabbing state, carrying out image processing identification on an obtained test image, and carrying out test analysis by combining with basic position characteristics;
and the target grabbing test management module is used for dynamically managing and controlling the follow-up grabbing of the grabbing targets according to the test result and dynamically managing the basic grabbing implementation parameters to which the grabbing targets belong.
In addition, the formulas involved above are all dimensionless formulas used for numerical calculation; each is the formula closest to the actual situation, obtained by collecting a large amount of data and performing software simulation with simulation software.
In the several embodiments provided in the present invention, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the invention are merely illustrative, and for example, the division of modules is merely a logical function division, and other manners of division may be implemented in practice.
The modules illustrated as separate components may or may not be physically separate, and components shown as modules may or may not be physical modules, may be located in one place, or may be distributed over a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional module in each embodiment of the present invention may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in hardware plus software functional modules.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the essential characteristics thereof.
Finally, it should be noted that the above-mentioned embodiments are merely for illustrating the technical solution of the present invention and not for limiting the same, and although the present invention has been described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.
Claims (10)
1. The robot motion control method based on image recognition processing is characterized by comprising the following steps:
the method comprises the steps of scanning and identifying basic information of a target to be grabbed by a mechanical arm through the internet of things technology, and processing and obtaining a grabbing feature sequence corresponding to the target;
carrying out data analysis on the grabbing feature sequences to obtain conventional grabbing implementation parameters or basic grabbing implementation parameters corresponding to different grabbing schemes;
Carrying out position image recognition on a target to be grabbed through a camera device arranged on the mechanical arm, and analyzing and obtaining basic position characteristics corresponding to the target;
The method comprises the steps of utilizing basic grabbing implementation parameters obtained through analysis to control a mechanical arm to grab a target to be grabbed, carrying out grabbing state stability test on the grabbed target, carrying out image processing identification on an obtained test image, and carrying out test analysis by combining basic position characteristics;
And dynamically controlling the subsequent grabbing of the grabbing target according to the test result, and dynamically managing the basic grabbing implementation parameters to which the grabbing target belongs.
2. The robot motion control method based on the image recognition process according to claim 1, wherein the basic information includes a number, a length, a width, a height, a weight, a protection type, and a protection identification of the object;
extracting the values of the length, the weight and the protection mark and marking the values as a first grabbing feature, a second grabbing feature and a third grabbing feature respectively;
and sequencing and combining the plurality of grabbing features of the mark to obtain a grabbing feature sequence.
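The assembly of the grabbing feature sequence described in claim 2 can be sketched as follows; the dict keys are illustrative assumptions, as the patent does not specify a data representation for the basic information:

```python
def grabbing_feature_sequence(basic_info):
    """Build the grabbing feature sequence from the target's basic
    information: the length, weight and protection identification values
    become the first, second and third grabbing features, combined in
    that fixed order."""
    return [basic_info["length"], basic_info["weight"],
            basic_info["protection_id"]]
```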
3. The robot motion control method based on image recognition processing according to claim 2, wherein different elements in the grabbing feature sequence are sequentially acquired and marked as ai, i=1, 2, 3; the elements thus marked are sequentially input into a feature recognition model for data analysis, and feature recognition values are output; wherein each feature recognition value comprises a value of 0 or 1;
The expression of the feature recognition model is defined by a corresponding formula, in which ui is the standard grabbing feature range corresponding to each different grabbing feature;
And arranging and combining the acquired plurality of characteristic recognition values according to the output sequence to obtain a characteristic recognition sequence.
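The feature recognition model of claim 3 can be sketched as a range-membership test; inclusive bounds and the (low, high) tuple encoding of each standard range ui are assumptions, since the model's formula is not reproduced:

```python
def feature_recognition_value(a_i, u_i):
    """Output 0 when grabbing feature a_i lies inside its standard range
    u_i = (low, high) (inclusive bounds assumed), otherwise 1."""
    low, high = u_i
    return 0 if low <= a_i <= high else 1

def feature_recognition_sequence(features, ranges):
    # one recognition value per feature, kept in output order
    return [feature_recognition_value(a, u) for a, u in zip(features, ranges)]
```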
4. The method for controlling the motion of a robot based on image recognition processing according to claim 3, wherein when determining the corresponding grasping implementation parameters according to the feature recognition sequence, all elements with values of 0 and the positions of the corresponding elements in the feature recognition sequence are obtained;
If the values of all elements in the feature recognition sequence are 0, generating a conventional instruction, and acquiring a conventional clamping distance associated with a standard grabbing feature range to which a first element belongs and a conventional clamping force associated with a standard grabbing feature range to which a third element belongs according to the conventional instruction;
The numerical sequences of the conventional clamping distance and the conventional clamping force are combined to obtain conventional grabbing implementation parameters;
If the value of the second element is 0 and the total number of the elements with the value of 0 is less than three in the feature recognition sequence, generating a verification instruction, setting the value of the first element as a basic clamping distance according to the verification instruction, and acquiring a conventional clamping force associated with a standard grabbing feature range to which the second element belongs and setting the conventional clamping force as a basic clamping force;
And (3) sequencing and combining the numerical values of the basic clamping distance and the basic clamping force to obtain basic grabbing implementation parameters.
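The selection logic of claim 4 can be sketched as follows; the lookup functions and the returned triple are illustrative assumptions standing in for the stored associations between a standard grabbing feature range and its conventional clamping distance or force:

```python
def grabbing_parameters(seq, features, distance_of, force_of):
    """Select grabbing implementation parameters from the feature
    recognition sequence.  distance_of/force_of: assumed lookups mapping a
    feature value's standard range to its conventional distance or force."""
    if all(v == 0 for v in seq):
        # conventional: distance from the first feature's range,
        # force from the third feature's range
        return ("conventional", distance_of(features[0]), force_of(features[2]))
    if seq[1] == 0 and seq.count(0) < 3:
        # verification: the first feature value itself becomes the basic
        # clamping distance; force comes from the second feature's range
        return ("basic", features[0], force_of(features[1]))
    return ("unresolved", None, None)
```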
5. The robot motion control method based on image recognition processing according to claim 4, wherein position coordinates of a target to be grasped relative to a mechanical arm are determined, a positioning instruction is generated when clamping mechanisms of the mechanical arm are contacted with two sides of the target, and a front image of the contact of the clamping mechanisms of the mechanical arm and the target is acquired according to the positioning instruction;
performing image processing and feature recognition on the front image, acquiring a support for supporting a target in the front image and constructing a support coordinate system;
And acquiring two first contact points and corresponding first contact point coordinates of the clamping mechanism of the mechanical arm, which are in contact with the target, according to the support coordinate system, and acquiring a first test midpoint and corresponding first test midpoint coordinates of the contact between the target and the support.
6. The method according to claim 5, wherein, according to the two first contact point coordinates and the first test midpoint coordinate, the straight-line distances between the two first contact points and the first test midpoint are respectively calculated and marked as a first verification value Y(h), and the vertical distances between the two first contact points and the first test midpoint are calculated and marked as a second verification value E(h); the first verification value and the second verification value form the basic position feature.
7. The robot motion control method based on image recognition processing according to claim 6, wherein basic grabbing implementation parameters are traversed, clamping mechanisms of the mechanical arm are controlled to clamp the target from two sides of the target according to basic clamping distances and basic clamping forces obtained through traversing, the clamped target is lifted by the mechanical arm through a preset lifting height, and the mechanical arm is controlled to stop running after the target is in a suspended state;
photographing the suspended target to obtain a corresponding test image; processing the test image and performing feature recognition to obtain the two second contact points at which the clamping mechanism of the lifted mechanical arm contacts the target and their second contact point coordinates, and to obtain the second test midpoint at the lower end of the front surface of the lifted target and its corresponding second test midpoint coordinate.
8. The method according to claim 7, wherein, according to the two second contact point coordinates and the second test midpoint coordinate, the straight-line distances between the two second contact points and the second test midpoint are respectively calculated and marked as a first test value Y(c), and the vertical distances between the two second contact points and the second test midpoint are calculated and marked as a second test value E(c);
when the first test value and the second test value are subjected to test analysis, the implementation stability μ corresponding to the mechanical arm clamping the target is obtained through formula calculation; in the calculation formula for the implementation stability μ, w1 and w2 are proportionality coefficients greater than zero with 2×w1=w2, and h is the preset lifting height;
If μ≠w1+w2, a control abnormality label is generated; according to the control abnormality label, the calculated implementation stability is input into a control abnormality recognition model for data analysis, and the corresponding implementation stability identification is output; wherein the output implementation stability identification comprises a value of 1 or 2, and the expression of the control abnormality recognition model is defined by a corresponding formula.
9. The robot motion control method based on image recognition processing according to claim 8, wherein the mechanical arm is controlled, according to the implementation stability identification with the value of 1, to continuously clamp the target with a preset enhancement force and work according to a preset control flow, and the conventional clamping force associated with the corresponding target is replaced and updated with the preset enhancement force;
And the mechanical arm is controlled, according to the implementation stability identification with the value of 2, to lower the clamped target back to its original position; the control parameters of the basic clamping distance and the basic clamping force are respectively adjusted, and the mechanical arm is controlled to grab the target again with the adjusted control parameters and to perform the grabbing-state stability test on the grabbed target until the implementation stability identification of the test analysis is 0 or 1, whereupon data replacement and updating are performed with the finally effective adjusted control parameters.
10. A robot motion control apparatus based on an image recognition process, to which the robot motion control method based on an image recognition process according to any one of claims 1 to 9 is applied, comprising:
The target information acquisition processing module is used for scanning and identifying basic information of a target to be grabbed by the mechanical arm through the Internet of things technology and processing and acquiring a grabbing feature sequence corresponding to the target; carrying out data analysis on the grabbing feature sequences to obtain conventional grabbing implementation parameters or basic grabbing implementation parameters corresponding to different grabbing schemes; carrying out position image recognition on a target to be grabbed through a camera device arranged on the mechanical arm, and analyzing and obtaining basic position characteristics corresponding to the target;
The object grabbing test analysis module is used for grabbing an object to be grabbed by the mechanical arm under the control of basic grabbing implementation parameters obtained through analysis, carrying out stability test on the grabbed object in a grabbing state, carrying out image processing identification on an obtained test image, and carrying out test analysis by combining with basic position characteristics;
and the target grabbing test management module is used for dynamically managing and controlling the follow-up grabbing of the grabbing targets according to the test result and dynamically managing the basic grabbing implementation parameters to which the grabbing targets belong.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202410993058.4A CN118544358A (en) | 2024-07-24 | 2024-07-24 | Robot motion control method and device based on image recognition processing |
Publications (1)
Publication Number | Publication Date |
---|---|
CN118544358A true CN118544358A (en) | 2024-08-27 |
Family
ID=92453744
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410993058.4A Pending CN118544358A (en) | 2024-07-24 | 2024-07-24 | Robot motion control method and device based on image recognition processing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN118544358A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108349083A (en) * | 2015-11-13 | 2018-07-31 | 伯克希尔格雷股份有限公司 | The sorting system and method for sorting for providing various objects |
US20190138905A1 (en) * | 2017-11-03 | 2019-05-09 | Drishti Technologies, Inc. | Traceability systems and methods |
CN111267086A (en) * | 2018-12-04 | 2020-06-12 | 北京猎户星空科技有限公司 | Action task creating and executing method and device, equipment and storage medium |
US20200238519A1 (en) * | 2019-01-25 | 2020-07-30 | Mujin, Inc. | Robotic system control method and controller |
CN111975783A (en) * | 2020-08-31 | 2020-11-24 | 广东工业大学 | Robot grabbing detection method and system |
CN112512754A (en) * | 2018-08-13 | 2021-03-16 | Abb瑞士股份有限公司 | Method for programming an industrial robot |
CN117325170A (en) * | 2023-10-27 | 2024-01-02 | 武汉工程大学 | Method for grabbing hard disk rack based on depth vision guiding mechanical arm |
Non-Patent Citations (1)
Title |
---|
HOU Hongying; GAO Tian; LI Tao: "A Survey of Image Segmentation Methods", Computer Knowledge and Technology, vol. 15, no. 005, 31 December 2019 (2019-12-31) *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||